Co-organizers:
APPLICATIONS OF INTELLIGENT SOFTWARE SYSTEMS IN POWER PLANT, PROCESS PLANT AND STRUCTURAL ENGINEERING
PROCEEDINGS
Editors:
A.S. Jovanovic,
A.C. Lucia
MPA STUTTGART
ISPRA
1997
EUR 17669 EN
LEGAL NOTICE
Neither the European Commission nor any person
acting on behalf of the Commission is responsible for the use which
might be made of the following information
EUR 17669 EN
ECSC-EC-EAEC Brussels Luxembourg, 1997
Printed in Italy
Supporting organizations:
CEC - JRC, Ispra, Italy
ELETROPAULO, Brazil
EPRI, Palo Alto, USA
MPA, Stuttgart, Germany
Preface:
In 1990 the organizers of SMiRT11 in Tokyo (SMiRT: Structural Mechanics in Reactor Technology) proposed to organize a new post-conference seminar on EXPERT SYSTEMS AND AI APPLICATIONS IN THE POWER GENERATION INDUSTRY in Hakone. By doing this, they reacted to the obviously growing interest in the application of all kinds of "knowledge-based" software tools in the areas relevant for SMiRT and for power plant and structural engineering in general. Building on the positive experience of this seminar, the following one was organized in the framework of the SMiRT12 Conference in Constance, Germany, in August 1993. The proceedings presented here belong to the third seminar of the kind, organized in 1995 in São Paulo, Brazil, within SMiRT13.
When compared to the first seminar, the number of papers and participants has significantly increased, as has the general interest in the overall issue. The level (the number and quality of papers) achieved at the second seminar (in Constance) has been approximately maintained. The trend obvious in Constance is present also in São Paulo: most of the papers are nowadays linked to practical problems, not just "general" or "in principle" solutions. On the other hand, there is an obvious trend to encompass areas outside the main domains of SMiRT, i.e. a trend to tackle not only the problems relevant solely to nuclear power plants, but also those from e.g. fossil-fired power plants and/or process plants. This has also been reflected in the title of the seminar, which was slightly changed for the third seminar, being now APPLICATIONS OF INTELLIGENT SOFTWARE SYSTEMS IN POWER PLANT, PROCESS PLANT AND STRUCTURAL ENGINEERING.
The change of title mentioned above also reflects the shift from the "conventional" (rule-based) expert systems and/or knowledge-based systems (KBSs) to the systems developed nowadays, which all tend to be more or less integrated with other tools and are therefore probably better described by the term Intelligent Software Systems.
The seminar and the proceedings have been structured in a sort of "progressive" way: both start with tutorial-like lectures giving an introduction and describing the state of the art in two important emerging enabling technologies: industrial-scale fuzzy systems and data mining ("extracting knowledge from data"). They continue by presenting contributions giving an idea and/or illustration of "what is going on" in the area of intelligent software systems, covering Western and Eastern Europe, South America, the USA and Japan. This review is followed by a series of papers presenting single systems and/or projects and their results. In the final part of the seminar and of the proceedings, the end-users have been asked to express their opinion on the usability and usefulness of KBSs, as well as on the problems they have been facing.
This 1995 Post-Conference SMiRT Seminar and its co-organizers have received support and help from various institutions, companies and persons. On behalf of all the co-organizers the editors want to gratefully acknowledge this support here, especially the help provided by the host of this seminar, the electric utility company ELETROPAULO from São Paulo, Brazil. Our special thanks go also to all the contributing authors: without their research, and their willingness to prepare and present these papers, the seminar would never have taken place. The same thanks go to the members of the end-users' panel: it is their precious advice and opinion that must guide the work of researchers in the area of intelligent software systems. Special thanks go to Dr. Poloni for his precious help in the preparation of the overall seminar, and to Mr. José Anson for his marvelous mastering of the local organization of the seminar.
The editors
Stuttgart, Ispra, May 1996
Table of Contents

S. Fukuda
Intelligent NDI data base for pressure vessels ... 45

R. D. Townsend
Advances in Damage Assessment and Life Management of Elevated Temperature Plant - an ERA Perspective (extended abstract) ... 63

M. Gruden
Technology awareness dissemination in Eastern Europe with intelligent computer systems for remaining power plant life assessment - EU project TINCA ... 109

J. M. Brear, R. D. Townsend
Modern remanent life assessment methods: degradation, damage, crack growth ... 119

A. S. Jovanovic, M. Friemann
Intelligent software systems for remaining life assessment - The SP249 project ... 155

P. Auerkari
Theoretical and practical basis of advanced inspection planning involving both engineering and non-engineering factors ... 173

M. Poloni, R. Weber
Advanced analysis of material properties using DataEngine ... 259

J. A. B. Montevechi, P. E. Miyagi
Fuzzy logic - an application for group technology ... 271

H. R. Kautz
Consequences of current failures for quality assurance ... 301

H. R. Kautz
SP249 End-Users response/acceptance of KBS's: What is required, what is available, what has to be done ... 339

Alphabetical list of authors ... 355
CHAPTER 1
1. Introduction
In recent years, expert systems technology has been extensively developed [1, 2, 3] to be
applicable to a number of support systems addressing tasks such as integrity analysis and
residual life assessment of critical components. These experiences have demonstrated that the
development of an expert system can become rather complex especially when the exploited
knowledge consists mainly of a large collection of cases, where each relates to a specific
problem and its own identified solution.
In similar situations, several applications recently delivered in different domains (diagnosis,
engineering design, risk analysis, manufacturing quality control), have been based on an
alternative approach to expert systems, namely case-based reasoning (CBR) [6]. The main
feature of this approach is the capability of solving problems through a direct comparison with
available cases, which formally represent similar problems connected with their known
solutions. Advantages with respect to a conventional expert system are: the knowledge
acquisition process is greatly simplified (the knowledge-base is structured as an encoded set of
already solved problems); the system is more robust because it succeeds in adapting an
available case to propose a solution for a given problem; in addition, the explanation
supporting the proposed solution is quite expressive as it contains references to analogous
cases.
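The retrieval step at the heart of case-based reasoning can be sketched in a few lines. The case base, feature vectors and solutions below are invented for illustration; a real system would also normalise the features and adapt the retrieved solution to the new problem.

```python
# A minimal case-based reasoning sketch: cases pair a problem description
# (here, a numeric feature vector) with its known solution; a new problem
# is solved by retrieving the most similar stored case. All case data and
# feature meanings are illustrative, not taken from any system in the paper.
import math

case_base = [
    {"features": (550.0, 0.8), "solution": "re-inspect in 2 years"},
    {"features": (600.0, 0.95), "solution": "repair weld region"},
    {"features": (480.0, 0.3), "solution": "no action"},
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(problem):
    """Return the stored case closest to the new problem."""
    return min(case_base, key=lambda c: distance(c["features"], problem))

best = retrieve((590.0, 0.9))
print(best["solution"])
```

The retrieved case also serves as the explanation of the proposed solution, which is the expressiveness advantage mentioned above.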
At the same time, some approaches coming from new fields like fuzzy logic (FL) [21], neural
networks (NN) [7] and machine learning (ML) [8] have shown their effectiveness in the
management of uncertainty and the possibility of interpolation of system behaviour from
sample data for complex problems. This has been demonstrated from a number of applications
both at academic and industrial level (for more details see e.g. [5, 9, 10]).
A good engineering approach should be capable of making use of all the available information
effectively. For the cited problems, a portion of information comes from human experts.
Usually, the expert information is not precise and is represented by fuzzy terms.
In addition to the expert information, another important portion of information is numerical: data collected from various sensors and instruments, or obtained from physical models. In power and process plants, up to now mainly case studies relating to particular situations (e.g. failures) were available, and usually in paper form, e.g. reports. Increasingly, these industries are automating this activity of data logging, creating huge databases of plant operation data. In case of failures, detailed case studies can be created, documenting the state of the plant before and after the situation of interest.
As a consequence of the availability of large databases, the extraction or learning of "models" or "relations" from archive data is a very important topic in engineering. Knowledge extraction from databases is performed by means of so-called "Data Mining" [13]. This term characterises the use of machine learning techniques when the environment is described through a database.
For these purposes a number of different techniques are used, among them: neural networks (NN), fuzzy and statistical data analysis (DA), data mining systems like ID3, AQ15 or CN2, and case-based reasoning (CBR). Each of these techniques has its own benefits and drawbacks. The choice of the right technique depends on a number of factors, namely the type of analysis to be performed, the type of data available, and the use that will be made of the results. Normally an expert in the field of data mining can suggest an effective solution only together with a domain expert. Moreover, one of the main obstacles in applying data mining to databases is the size of the database. In fact, the size of the database has consequences both for the cost of validating the induced models and for the size of the search space. With the growing dimensions of current databases (orders of magnitude of several megabytes are normal practice), serious problems can be encountered.
A solution for the first of these problems is the application of database optimisation
techniques. Instead of using the entire database, only a subset will be used for the initial search
phase. During the search process this set will be incrementally extended with data from the
database, using (e.g.) incremental browsing optimisation techniques.
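The incremental strategy described above can be sketched as follows; the "model" is reduced to a single statistic for brevity, and the batch size, stopping threshold and synthetic data are arbitrary illustrative choices.

```python
# Sketch of incremental subset extension: mine an initial random subset,
# then repeatedly extend it with further records from the full database
# until the induced "model" (here, simply the mean of one field) stops
# changing. Threshold and batch size are illustrative, not prescriptive.
import random

random.seed(0)
database = [random.gauss(100.0, 15.0) for _ in range(10000)]

def mine(sample):
    """Stand-in for model induction: estimate the mean of one field."""
    return sum(sample) / len(sample)

subset = database[:100]              # initial subset for the search phase
model = mine(subset)
pos = 100
while pos < len(database):
    subset.extend(database[pos:pos + 500])     # extend with the next batch
    pos += 500
    new_model = mine(subset)
    converged = abs(new_model - model) < 0.01  # model no longer changes
    model = new_model
    if converged:
        break

print(round(model, 1))
```

The point of the optimisation is that validation of the induced model can stop as soon as additional data no longer change it, instead of always paying the cost of the full database.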
Solutions for the second of the above-cited problems, that is, the size of the search space, should include the use of effective search strategies and heuristics. A very valuable source of heuristic information is the end-user, normally an expert in the application domain. An important part of the work has therefore to be focused on designing the user interaction during the search process, and on understandable representations for the knowledge. Furthermore, the use and integration of domain knowledge provided by the user may allow the discovery of relationships that would otherwise remain hidden.
These drawbacks call for an intelligent approach. Such an approach should not only provide a way to compare and validate the results coming from the available techniques, but also give advice on how to use these techniques appropriately. At the same time, it should help the field expert to exploit his/her domain knowledge. In this way the analysis can profit from all the advantages of using a data mining system without being affected by the cited drawbacks.
The idea illustrated in this paper and outlined in Figure 1 is a sort of "Computer Assisted Data Mining", which makes use of methods typical both of "classic" KBSs and of numerical processing to maximise the performance of the data mining system.
Figure 1: Computer Assisted Data Mining - intelligent extraction (NN/Fuzzy).
The target field is mainly operation support in power and process plants. Such support is in terms of, e.g., metallic material properties characterisation, inspection scheduling, and damage assessment.
In the following a general introduction to some basic concepts of Data Mining is given
together with a number of practical examples.
The current state of development of different systems at MPA Stuttgart is then summarised.
Figure 2: Elements of pattern recognition applied to design and test data: feature analysis (pre-processing, extraction, 2-D display), cluster analysis (exploration, validity, 2-D display) and classifier design (classification, estimation, prediction, control, identification, assessment), with pair-relation data R supplied by humans.
Clustering methods can be either hard (crisp) or fuzzy (based on fuzzy set theory), depending on whether each feature vector characterising an object belongs exclusively to one cluster or to all clusters to different degrees. In other words, classical (crisp) clustering algorithms generate partitions such that each object is assigned to exactly one cluster. Often, however, objects cannot adequately be assigned to strictly one cluster (because they are located between clusters). In these cases fuzzy clustering methods provide a much more adequate tool for representing real-data structures, where non-stochastic uncertainties of different types can be present.
A crucial point for successful analysis is the selection of the right set of features (variables). This choice should be representative of the physical process that generated the data, to enable us to construct realistic clusters.
Let us assume that the important problem of feature extraction has been solved. Our task is then to divide n objects x ∈ X, characterised by indicators (variables of different types), into c (2 ≤ c < n) categorically homogeneous subsets called clusters. The objects belonging to any one of the clusters should be similar, and the objects of different clusters as dissimilar as possible. The number of clusters, c, is normally not known in advance. Before applying any clustering procedure it is very important to select the mathematical properties of the data set (for example distance, connectivity, intensity) and the way in which they should be used in order to identify clusters. Unfortunately these questions have to be answered for each different data set, since there is no universally optimal cluster criterion. The adopted cluster criterion can heavily influence the results, leading to wrong interpretations when care is not taken in its choice.
2.4 Classification
A classifier is a device, means, or algorithm by which the data space is partitioned into C
decision regions. Classification attempts to discover associations between subclasses of a
population. In many cases the activity of classification is complementary to that of clustering.
Once the clustering of the data set has been performed and the clusters detected, it is possible to use these clusters to classify new incoming data pairs. On the basis of the criterion used to cluster the data, the similarity of the new item is evaluated, providing an indication of the region (cluster) of system behaviour to which it belongs.
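As a minimal sketch of this classification step, assuming the cluster centres have already been found and taking distance to the centres as the similarity criterion (centres and points below are invented):

```python
# Classify a new data item by the same criterion used for clustering:
# here, Euclidean distance to the detected cluster centres. The centre
# coordinates are illustrative placeholders.
import math

centres = {"cluster 1": (1.0, 1.0), "cluster 2": (8.0, 8.0)}

def classify(point):
    """Assign a new item to the region of the nearest cluster centre."""
    return min(centres, key=lambda name: math.dist(point, centres[name]))

print(classify((2.0, 1.5)))
print(classify((7.5, 9.0)))
```

With a different clustering criterion (e.g. non-spherical prototypes, discussed later), the same scheme applies with the corresponding distance measure.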
2.5 Comparison with conventional analysis
To build a bridge between the usual data analysis techniques used in the material properties
characterisation (namely regression analysis) and pattern recognition ones a simplified
comparison is given in Figure 3.
When mixed data are available it is possible to fit switching regression models: simultaneous estimates for the parameters of c regression models, together with a fuzzy c-partitioning of the data, are feasible.
Using regression analysis, a global model over the data set's interval of definition is generally obtained, with its confidence intervals (e.g. 95%).
Using pattern recognition techniques (e.g. cluster analysis), local models can be obtained. These models may better approximate the material behaviour. Moreover, the obtained models can be used to automatically "classify" new data items. While a regression model can simply be evaluated at the new "point", the classification reports the typicality of the new item with respect to the previously built data-based model.
Fuzzy clusters can also give rise to "local" regression models; this is in fact the essence of the idea originally introduced in [11, 12]. The overall model is then structured into a series of "if-then" statements. The conditional part of the statement includes linguistic labels of the input variables, while the action part contains a linear (or, more generally, non-linear) numerical relationship between input and output variables; the clustering method applies to the formation of the conditional part.
Figure 3: Comparison between the use of pattern recognition and regression analysis - data analysis based on neural network theory and fuzzy set theory leads through pattern recognition and classification to local models, while regression analysis yields a global model; both feed the assessment.
Figure 4: Fuzzy sets MEDIUM and BIG for the variable distance (km).
An example is reported in Figure 4, where two fuzzy sets characterising the variable distance are shown. While it is clear that a distance bigger than 60 kilometers is definitely BIG (at least in the sense of the definition of distance in this particular case), and a distance of 25 km is definitely MEDIUM, what about something in between?
A gradual membership in the two sets is given. For example, a 50 km distance will have a 0.8 membership in the BIG set and 0.2 in the MEDIUM set.
The whole of set theory has been "enlarged" to accommodate this extended definition, that is, set operations like conjunction, disjunction and so on. For a detailed exposition of fuzzy set theory see [25].
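The graded membership idea can be made concrete with piecewise-linear membership functions. The breakpoints below are chosen only so that the example values in the text (0.8 BIG and 0.2 MEDIUM at 50 km) come out; the actual sets of Figure 4 may be shaped differently.

```python
# Piecewise-linear membership functions for the linguistic variable
# "distance". Breakpoints (10, 40, 52.5 km) are assumed for illustration.
def big(km):
    """Membership of a distance in the fuzzy set BIG."""
    if km >= 52.5:
        return 1.0
    if km <= 40.0:
        return 0.0
    return (km - 40.0) / 12.5          # linear ramp between 40 and 52.5 km

def medium(km):
    """Membership in MEDIUM: full at 25 km, fading where BIG takes over."""
    if km <= 40.0:
        return 1.0 if km >= 10.0 else km / 10.0
    return max(0.0, 1.0 - big(km))     # complementary on the overlap

print(round(medium(25), 2), round(big(25), 2))   # 1.0 0.0 -> definitely MEDIUM
print(round(medium(50), 2), round(big(50), 2))   # 0.2 0.8 -> mostly BIG
```

Fuzzy conjunction and disjunction of such sets are then typically realised as pointwise min and max of the membership values.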
The question could now be: where should fuzzy pattern recognition be used? The main cases can be summarised as follows:
=> Insufficient information for implementing classical methods
=> The expert's uncertainty about the exact membership of the object
=> Inherent characteristics of objects which can be conveniently represented in terms of fuzzy sets
=> The opportunity to include and process expert opinion in decision making and to handle partial inconsistency
The replacement of statistical entities with fuzzy ones should be done very carefully, bearing in
mind the incompleteness of the analogy between them. Moreover, fuzzy theory is supposed to
be a useful mathematical description of non-statistical uncertainty. Therefore, it seems more
reasonable to invest efforts in a new statement of the problem which could handle fuzzy
information rather than to constrict the fuzzy problem into probabilistic frameworks.
Clustering methods can be either hard (crisp) or fuzzy, depending on whether each feature vector (the vector of co-ordinates in the data space) characterising an object belongs exclusively to one cluster or to all clusters to different degrees. The best-known algorithm, from which many variants derive, is the fuzzy c-means algorithm (FCM).
Table 1 summarises the different steps performed in a typical clustering session. The number of clusters C is derived from domain knowledge of the data, or is a test value that can be changed on the basis of the results obtained; m is an exponent that takes the value 1 if we want to perform a crisp clustering, and a growing value as we want a more fuzzy characterisation of the clusters. The C-partition is a matrix where the membership values of all data items in each cluster are stored. The initialisation of this matrix can be done randomly or using a priori information, if available.
REPEAT
    Update the parameters of each cluster prototype
    Update the partition matrix U
UNTIL ||ΔU|| < ε
A matrix norm to evaluate the distance of each data item from the clusters has to be fixed. This choice gives rise to different characterisations of the clustering algorithm. If use is made of a Euclidean distance (as in the FCM algorithm), the cluster prototypes will be points (also known as cluster centres) and, as a consequence, the algorithm will search for spherical clusters. Different kinds of prototypes and distances yield different shapes for the searched clusters: hyperellipsoidal, lines, planes, hyperspherical shells. The iterative process proceeds until the fuzzy partition no longer changes significantly (the norm of the difference of the last two iterations falls below a pre-determined threshold).
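The alternating loop above can be sketched as a compact fuzzy c-means implementation; the toy two-cluster data set and all parameter values are illustrative choices, not taken from the paper.

```python
# A compact fuzzy c-means (FCM) sketch: alternately update the cluster
# prototypes and the partition matrix U until U stops changing. Parameter
# names follow the text (c clusters, fuzziness exponent m).
import math
import random

def fcm(data, c=2, m=2.0, eps=1e-5, seed=1):
    random.seed(seed)
    n = len(data)
    # random initial fuzzy c-partition: memberships of each item sum to one
    U = []
    for _ in range(n):
        row = [random.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    while True:
        # update prototypes (cluster centres) from the weighted data
        centres = []
        for i in range(c):
            w = [U[k][i] ** m for k in range(n)]
            centres.append(tuple(
                sum(wk * x[j] for wk, x in zip(w, data)) / sum(w)
                for j in range(len(data[0]))))
        # update partition matrix using Euclidean distances
        new_U = []
        for x in data:
            d = [max(math.dist(x, v), 1e-12) for v in centres]
            new_U.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                    for j in range(c)) for i in range(c)])
        delta = max(abs(new_U[k][i] - U[k][i])
                    for k in range(n) for i in range(c))
        U = new_U
        if delta < eps:            # ||dU|| below the threshold: stop
            return centres, U

data = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centres, U = fcm(data)
print([tuple(round(v, 1) for v in ctr) for ctr in centres])
```

With m close to 1 the memberships approach a crisp partition, while larger m gives a fuzzier characterisation of the clusters, as described in the text.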
A typical example where the application of a fuzzy clustering algorithm is of use is that of the "butterfly" data set. In Figure 5 the data set is shown. The two triangular regions have a common point in x8.
Figure 5: The "butterfly" data set (points x1 to x15).
Clustering these points by a crisp objective-function algorithm might yield the picture shown in Figure 6, in which "1" indicates membership of the left-hand cluster and "0" membership of the right-hand cluster. It is easy to observe that, even though the butterfly is symmetric, the clusters in Figure 6 are not, because the point x8, the point "between" the clusters, has to be (fully) assigned to either cluster 1 or cluster 2. Applying a fuzzy clustering algorithm, a membership of 0.5 in both clusters will result, which seems more appropriate.
3.1.1 Possibilistic clustering of hardness measurement data
Fuzzy clustering algorithms do not always estimate the parameters of the prototypes accurately. The main source of this problem is the probabilistic constraint used in fuzzy clustering, which states that the memberships of a data point across all clusters must sum to one. Within the framework of possibility theory this constraint can be relaxed [23].
This approach permits a noise point, far from all the prototypes of the clusters found, to be given a low membership in every cluster, and also characterises in a better way points which, by their characteristics, belong to more than one cluster. In the FCM, by contrast, memberships are relative numbers.
Figure 6 and Figure 7: partitions of the butterfly data set, with the centres of clusters 1 and 2.
Due to this fact, the membership values cannot distinguish between a moderately atypical
member and an extremely atypical member because the membership of a point in a class is a
relative number. Therefore noise points, which are often quite distant from the primary
clusters, can drastically influence the estimate of class prototypes, and hence the final partition.
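The effect of relaxing the sum-to-one constraint can be illustrated with the standard possibilistic membership (typicality) formula; the prototypes and the per-cluster scale parameters eta below are assumed for illustration, not taken from [23].

```python
# In the possibilistic approach the membership of a point in a cluster
# depends only on its distance from that cluster's prototype, not on the
# other clusters, so the sum-to-one constraint disappears and noise
# points receive low typicality everywhere.
import math

def typicality(d2, eta, m=2.0):
    """Possibilistic membership for squared distance d2 and scale eta."""
    return 1.0 / (1.0 + (d2 / eta) ** (1.0 / (m - 1.0)))

prototypes = [(0.0, 0.0), (10.0, 0.0)]
eta = [1.0, 1.0]                       # per-cluster distance scale

def memberships(point):
    return [typicality(math.dist(point, p) ** 2, e)
            for p, e in zip(prototypes, eta)]

print([round(u, 3) for u in memberships((0.5, 0.0))])   # close to cluster 1
print([round(u, 3) for u in memberships((5.0, 40.0))])  # noise: low in both
```

Under the probabilistic FCM constraint the distant noise point would instead be forced to split a total membership of one between the two clusters.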
Hardness-based temperature estimation
A set of experimental data has been extracted from [24] regarding the determination of hardness properties for two different ferritic steels, namely 2¼Cr-1Mo and 1Cr-½Mo.
In the following, hardness will be indicated with H, while the Sherby-Dorn parameter will be indicated with P. The expression of the Sherby-Dorn parameter is P = log t - C/T, where t is the time in hours and T is the temperature in Kelvin. Two derived expressions will be used in the paper:

T = C / (log t - P)    (1)

t = 10^(P + C/T)    (2)
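In numerical form, equations (1) and (2) read as follows; the material constant C = 10270 is the value quoted later in the text for the normalised material, and the sample values of t and P are illustrative.

```python
# Numerical form of equations (1) and (2): with P = log t - C/T, the
# temperature follows from a measured t and P, and the time from P and T.
# C = 10270 is the constant quoted in the text for the normalised
# 1Cr-1/2Mo material; t in hours, T in Kelvin, log is base 10.
import math

C = 10270.0

def temperature(t_hours, P):
    """Equation (1): T = C / (log t - P)."""
    return C / (math.log10(t_hours) - P)

def time_hours(P, T_kelvin):
    """Equation (2): t = 10 ** (P + C/T)."""
    return 10.0 ** (P + C / T_kelvin)

T = temperature(100000.0, -8.5)      # e.g. t = 100,000 h at P = -8.5
print(round(T, 1))                   # temperature in K
print(round(time_hours(-8.5, T)))    # recovers the assumed time
```

The two functions are inverses of each other by construction, which is a convenient consistency check on the parameter values.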
Hardness measurements are used to estimate the temperature, and by means of the temperature the remaining lifetime.
The material under consideration is a 1Cr-½Mo steel with the following composition (wt.%):

C 0.069 | Si 0.27 | S 0.01 | 0.014 | 0.58 | Ni 0.05 | Cr 0.75 | Mo 0.45 | V 0.05 | W < 0.05
As reported in [24], this material does not exhibit a standard behaviour. Problems have been encountered in determining the material constant C, ultimately assumed to have a value of 10270 in the normalised condition. The initial hardness is equal to 115, although it is clear from looking at Figure 8 that there is a significant number of measurements above this threshold. This kind of behaviour is detected for temperatures above 625 °C and for short exposure times. The remaining data points show a progressive softening of the material with time, although there exists a region where the values maintain a stable level (up to values of about -8.5) and then start to decrease relatively rapidly. Such a behaviour requires a different characterisation of the hardness-dependent variable.
The application of a clustering method provides a way to find the possible structure of these regions from the experimental data. In this case the possibilistic clustering approach has been used. Two clusters have been assumed and the algorithm suggested in [22] employed. The initialisation of the procedure has been performed using the FCM algorithm. The result is shown in Figure 9.
Figure 9: Possibilistic clustering of the hardness measurement data (clusters 1 and 2; the initial hardness level is indicated).
The clustering method can effectively detect two regions and a clear threshold between them. An example of approximation is given using two second-order regression models. As a test comparison, Figure 8 reports a fit of the data together with the 95% confidence region of the approximation. It is easy to see how the Gaussian fit can hardly deal with the hardness values above the initial one.
3.2 Fuzzy approximation of inspection intervals
In this example the relationship between damage class (derived from metallographic replicas) and expired life as input, and remaining life as output, is approximated by means of a rule-based system. The rules are automatically built from experimental data.
3.2.1 Adaptive fuzzy systems to build a rule-based classifier
In many data processing problems (control, signal processing, data analysis) the information concerning design, evaluation, realisation, etc., can be classified into two main types: numerical information (e.g. sensor measurements) and linguistic information (expert opinion). If both kinds of resources are to be used, there is a need to integrate them in a common framework, providing a way of evaluating the performance of the obtained system. In [26] a general approach to solving this problem is proposed. The solution is the realisation of a fuzzy rule base where both types of information are integrated.
The procedure consists of a five-step algorithm, carried out through the actions described in TABLE 2. It is possible to prove that the generated fuzzy system is a universal approximator from a compact set Ω ⊂ Rⁿ to R, i.e., it can approximate any real continuous function defined on Ω to any accuracy. Thus, the adaptive fuzzy system estimates fuzzy rules from sample data; this reduces to patch or cluster estimation in the data space.
In the case of relationships difficult to model due to the lack of analytical insight into the mechanisms of system behaviour, or due to its high nonlinearity, such an approximation can be of great advantage. It realises a graph cover with local averaging. The "fuzziness" or multivalence of sets comes into play when output sets overlap. A fuzzy system is unique in that it can tie vague definitions to the mathematics of curves. In this way it ties natural language and expert rules to state-space geometry.
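A rough sketch of building rules from samples in this spirit (not the exact five-step procedure of TABLE 2): each sample votes for the input and output fuzzy regions where its membership is highest, and the resulting pairing becomes a rule. All region definitions and sample values are invented for illustration.

```python
# Generate fuzzy rules from numerical samples: assign each sample to the
# input and output fuzzy regions with the highest membership, and record
# the (input region -> output region) pair as a rule. Regions and data
# are illustrative placeholders, not taken from the paper.
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

in_sets = {"LOW": (0, 0.25, 0.5), "MED": (0.25, 0.5, 0.75),
           "HIGH": (0.5, 0.75, 1.0)}
out_sets = {"SHORT": (0, 2, 4), "LONG": (2, 4, 6)}

def best(sets, x):
    """Name of the fuzzy region where x has the highest membership."""
    return max(sets, key=lambda name: tri(x, *sets[name]))

samples = [(0.2, 4.5), (0.3, 3.8), (0.8, 1.0)]   # (life fraction, years)
rules = {}
for x, y in samples:
    rules[best(in_sets, x)] = best(out_sets, y)   # later samples overwrite

print(rules)  # {'LOW': 'LONG', 'HIGH': 'SHORT'}
```

A full procedure of this kind also resolves conflicting rules, typically by weighting each candidate rule with a degree; here later samples simply overwrite earlier ones, and linguistic rules from experts could be added to the same rule base directly.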
TABLE 2 - Steps 1 to 5 of the rule-generation procedure.
The work illustrated in this section tries to formulate a generic relation approximated by means of a fuzzy system.
Analysis method. To find a relation between life expended, damage rating and remaining life, an adaptive fuzzy system has been used, as previously described.
The rule-based system has been designed to accommodate a qualitative input in terms of damage class and a crisp input in terms of life expended; the output consists of a prediction of the expired life fraction of the component.
Membership functions have been manually tuned on the basis of indications coming from the experimental data and material behaviour.
3.2.3 Results
A software module has been programmed to implement and test the described analysis
method. This module, after the necessary tuning and validation will became one of the tools
belonging to a knowledge based system on material properties under development at MPA
Stuttgart.
TABLE 3 - Description of damage classes (from [27])

Neubauer classification of damage state | Recommended action
Undamaged, Isolated | No action
Oriented | Re-inspection in 1½-3 years
Microcracked | Repair or replacement within 6 months
Macrocracked | Immediate repair
In TABLE 4 the input data and the results are reported for a subset of cases, together with a different estimate proposed in the original report [27]. The results are limited to the 1Cr-½Mo steel, because for the other material too few experimental data were reported. It is possible to see that the proposed approach gives good predictions and, moreover, is always conservative.
Figure 11 reports the fuzzy sets describing the life expired and the damage class. While the reliability of such an approximation clearly depends on the quality of the input data, as for every assessment method based on experimental results, the fuzzy representation of the variables provides a tolerance against possible uncertainties, e.g. in the damage class determination.
3.2.4 Remarks
The preliminary results obtained in evaluating the performance of the elaborated fuzzy models are encouraging, but in order to bring them to the point of really supporting industrial applications, further investigations are needed to assess the conservativeness and generality of the procedures, as well as to evaluate different approaches based on the same theoretical background (e.g. classification algorithms).
TABLE 4 - Best Estimate: time expired (hours), time remaining (hours); Damage Class (see [27]); real value

time expired (hours) | time remaining (hours)
2008 | 6621
5588 | 5980
4017 | 4612
4720 | 4454
6195 | 2434
3755 | 2349
7712 | 918
2377 | 56
8629 | 308
On the other hand, the most relevant practical limitation of the results presented regards the data. Data necessary for the type of analysis presented were available mainly for the 1Cr-½Mo steel, and the authors are not aware of similar sets of data available for other materials (e.g. 12%Cr steel). The behaviour of these materials is different and, hence, the heuristic values extracted would be different. Further research in this direction is thus necessary.
Figure: Re-inspection interval (years) versus service life expended (years), comparing the EPRI and Neubauer recommendations (undamaged class).
those whose parameters are time-invariant, i.e., whose weights are fixed initially and no further updating occurs.
For the purposes of this paper a network of the first kind will be considered. These networks can be trained by examples (as is often required in real life) and sometimes generalise well for unknown test cases. The worth of a network lies in its inferencing or generalisation capabilities over such test sets:
"Connectionist learning procedures are suitable in domains with several graded features
that collectively contribute to the solution of a problem. In the process of learning, a network
may discover important underlying regularities in the task domain " [15].
The multilayer perceptron (MLP) consists of multiple layers of simple, two-state, sigmoid processing elements (nodes) or neurons that interact using weighted connections (see Figure 12). After a lowermost input layer there are usually any number of intermediate, or hidden, layers, followed by an output layer at the top. There exist no interconnections within a layer, while all neurons in a layer are fully connected to neurons in adjacent layers. Weights measure the degree of correlation between the activity levels of the neurons that they connect.
An external input vector is supplied to the network by clamping it at the nodes in the input
layer. For conventional classification problems, during training, the appropriate output node is
clamped to state 1 while the others are clamped to state 0. This is the desired output supplied
by the teacher.
The training procedure has to determine the internal parameters of the hidden units based on
its knowledge of the inputs and desired outputs. Hence training consists of searching a very
large parameter space and therefore is usually rather slow.
Multilayer perceptron using backpropagation of error
During training, each pattern of the training set is used in succession to clamp the input and
output layers of the network. A sequence of forward and backward passes constitutes a cycle
and such a cycle through the entire training set is termed a sweep. After a number of sweeps
through the training data, the error may be minimised. At this stage the network is supposed to
have discovered (learned) the relationship between the input and output vectors in the training
samples.
In the testing phase the neural net is expected to be able to utilise the information encoded in its connection weights to assign the correct output labels for the test vectors, which are now clamped only at the input layer. It should be noted that the optimal number of hidden layers and the number of units in each of these layers are mostly determined empirically. The number of units in layer H corresponds to the number of output classes.
MLP models using backpropagation have been applied to the exclusive-OR problem, to
recognising familiar shapes in novel positions, discovering semantic features, recognising
written text, recognising speech, and identifying sonar targets. A more detailed
description with some application examples can be found in [15].
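The structure described above can be sketched in a few lines of NumPy (a minimal illustration, not the implementation discussed in [15]; the layer sizes and the input vector are arbitrary choices):

```python
import numpy as np

def sigmoid(x):
    # two-state (0..1) activation of each processing node
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# weighted connections between adjacent layers only
# (no interconnections within a layer)
W1 = rng.uniform(-1, 1, size=(3, 5))   # input layer -> hidden layer
W2 = rng.uniform(-1, 1, size=(5, 2))   # hidden layer -> output layer

def forward(x):
    # clamp the external input vector at the input layer,
    # then propagate upward through the hidden layer
    h = sigmoid(x @ W1)
    return sigmoid(h @ W2)

x = np.array([0.2, 0.7, 0.1])          # external input vector
y = forward(x)
print(y.shape)                          # one value per output node
```

Training then adjusts W1 and W2 so that the pattern clamped at the output layer by the teacher is reproduced.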
Among the variety of neural network architectures, hierarchical (multilayer) networks with a
supervised learning algorithm have been applied to various engineering problems [19, 20].
Attractive features of these networks in industrial applications can be summarised as follows:
a) A nonlinear mapping function from multiple input data to multiple output data can be
constructed automatically.
b) The trained network attains a capability of "generalisation", i.e. a kind of interpolation, such
that a properly trained network estimates appropriate output data even for input data sets
not belonging to the training patterns.
c) The trained network operates quickly in an application process.
Figure 12: Architecture of the multilayer perceptron, from the input layer (layer 0) through hidden layers h and h+1 to the output layer H; each node is a neuron.
There has been some research on transforming this knowledge to a format better suited for
human reading, but this mainly concerns single-layer networks, which model simple, linear
functions.
Moreover, it is difficult to incorporate any domain knowledge or user interaction in the
learning process. Hence NNs perform best in areas where no additional information is available,
which is generally not the case with data mining.
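The remark about single-layer networks can be made concrete: because such a network models a linear function, its weights can be read off directly as one contribution per input. The feature names and weight values below are invented for illustration:

```python
import numpy as np

# a single-layer network models a linear function y = w.x + b, so its
# "knowledge" is one weight per input feature and is human-readable
features = ["temperature", "pressure", "operating_hours"]
w = np.array([0.82, 0.11, -0.45])   # illustrative trained weights
b = 0.05                            # bias term

# rank the inputs by the magnitude of their influence on the output
for name, weight in sorted(zip(features, w), key=lambda p: -abs(p[1])):
    sign = "raises" if weight > 0 else "lowers"
    print(f"{name}: weight {weight:+.2f} ({sign} the predicted output)")
```

For a multilayer network the same reading is not possible, because each input influences the output only through the combined effect of many hidden-unit weights.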
3.3.1 An Example: Prediction of creep-induced failure
This section describes analyses of case studies on structural component failures in power
plants using hierarchical (multilayer) neural networks. Using selected test data from the case
studies stored in the structural failure database of a knowledge-based system, the network is
trained to predict possible failure mechanisms such as creep-, overheating (OH)- or overstressing
(OS)-induced failure. It should be noted here that, because of the shortage of available case
studies, an appropriate selection of case studies and input parameters for network
training was required to attain high accuracy. Collecting more case studies will, however,
resolve such problems and improve the accuracy of the analyses. An analysis module for case
studies using the neural network has also been developed, and successfully implemented in a
knowledge-based system.
A three-layer neural network employing the backpropagation algorithm with the momentum
method is trained in such a way that it can predict possible failure mechanisms from
operating conditions, component dimensions, material properties and other parameters.
Attention is focused on the prediction of creep-, overheating (OH)- or overstressing
(OS)-induced failure. However, the analysis method presented here can also be applied to
the prediction of erosion-, corrosion- and fatigue-induced failures if case studies and
related information are available. Overheating-induced failure is defined as "failure caused by
a temperature higher than the calculated operating temperature due to different reasons", while
overstressing-induced failure is defined as "failure caused by a stress higher than the
calculated stress due to different reasons". The failure causes were identified through careful
observation of changes of the material micro-structure in each case study.
3.3.1.1 Selection process of 36 case studies
The network is trained to predict the possible occurrence of creep-induced failure from
operating conditions and some other information. At first, 41 case studies, which contain
complete information except for the number of start-ups / shutdowns, are selected out of 72 case
studies. Then 36 case studies are selected and utilised to train the network.
3.3.1.2 Network architecture and input / output data
An ordinary three-layer network is employed. Operating conditions and other parameters are
given to the input units of the network, while the Yes / No value regarding the occurrence of
creep-induced failure, i.e. 1 or 0, is shown to the network as a teacher signal. Through some
preliminary tests, the network parameters are determined as follows: the learning rate = 0.1,
the momentum factor m = 0.9, the constant of the sigmoid function U0 = 1, the range of initial
weights = -1 to 1, the number of hidden units = 10.
The network training is stopped when the estimation capability for both training and test
patterns reaches almost a steady state. That is, the total number of training iterations roughly
ranges from 5,000 to 10,000. Several combinations of input parameters are examined. Three
typical combinations are shown below:
Combination 1 (a):
Input: T/Tc, σ/σ0, Log10H; Output: Creep / Not creep; Nr. of iterations: 10,000
Learning patterns: all cases 89% (32/36) 1), creep cases 100% (19/19)
Test patterns 2): all cases 78% (28/36), creep cases 84% (16/19)

Combination 2 (b):
Input: T/Tc, σ/σ0, T, P, d, t, Material Classes, Log10H; Output: Creep / Not creep; Nr. of iterations: 5,000
Learning patterns: all cases 97% (35/36), creep cases 100% (19/19)
Test patterns: all cases 86% (31/36), creep cases 95% (18/19)

Combination 3 (c):
Input: T, P, d, t, Material Classes; Output: Creep / Not creep; Nr. of iterations: 5,000
Learning patterns: all cases 94% (34/36), creep cases 95% (18/19)
Test patterns: all cases 67% (24/36), creep cases 79% (15/19)

1) The first number in brackets denotes the number of successful predictions for creep-induced failure cases, while
the second number denotes the total number of cases.
2) The capability of generalisation, i.e. the capability of estimation for test patterns, is carefully examined for all
36 cases, repeatedly taking one case as the test pattern, so that all 36 cases are taken as a test pattern in order. In
the table, the score for success is counted when the output is greater than 0.6 for the correct answer of 1, or when
the output is smaller than 0.4 for the correct answer of 0.
It is clear from the table that the neural network successfully predicts the occurrence of creep-induced
failure for all the combinations of input parameters. Among them, combination (b)
gives the best result, i.e. fewer iterations and higher accuracy. It is also clearly seen that
combination (c), excluding T/Tc and σ/σ0, results in less accurate prediction. These two
parameters seem to play an important role in this prediction.
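The training set-up of section 3.3.1.2 can be sketched as follows (a simplified illustration using synthetic stand-in data, since the real case-study records are not reproduced here; the learning rate, momentum factor, sigmoid constant, initial weight range, hidden-unit count and the 0.6 / 0.4 scoring rule are the values quoted above):

```python
import numpy as np

rng = np.random.default_rng(1)
U0 = 1.0                                   # constant of the sigmoid function

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x / U0))

# synthetic stand-in for the 36 selected case studies: 4 input parameters,
# 1 output (1 = creep-induced failure, 0 = not creep)
X = rng.uniform(0.0, 1.0, size=(36, 4))
T = (X[:, 0] + X[:, 3] > 1.0).astype(float).reshape(-1, 1)

eta, m = 0.1, 0.9                          # learning rate and momentum factor
W1 = rng.uniform(-1, 1, (4, 10))           # 10 hidden units, weights in [-1, 1]
W2 = rng.uniform(-1, 1, (10, 1))
dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)

for _ in range(10000):                     # 5,000 to 10,000 iterations
    H = sigmoid(X @ W1)                    # forward pass
    O = sigmoid(H @ W2)
    gO = (O - T) * O * (1 - O)             # backpropagated error terms
    gH = (gO @ W2.T) * H * (1 - H)
    dW2 = m * dW2 - eta * (H.T @ gO) / len(X)   # momentum method
    dW1 = m * dW1 - eta * (X.T @ gH) / len(X)
    W2 += dW2
    W1 += dW1

# count a success when the output exceeds 0.6 for a correct answer of 1,
# or stays below 0.4 for a correct answer of 0
O = sigmoid(sigmoid(X @ W1) @ W2)
ok = int(np.sum((T == 1) & (O > 0.6)) + np.sum((T == 0) & (O < 0.4)))
print(f"{ok}/36 learning patterns predicted correctly")
```

The leave-one-out test of footnote 2) would repeat this loop 36 times, each time withholding one case as the test pattern.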
Figure 13: Example screen of the analysis module, showing input fields such as the operating stress intensity (MN/mm²), the strength level (MN/mm²) and the operating temperature (°C).
No one can assure that a different technique could not bring a better result. Moreover, the
domain expert, e.g. the plant engineer, had no possibility to perform the analysis personally,
but was only indirectly active, furnishing some bits of domain knowledge.
To bring these new methods into industrial practice, a not-so-steep learning curve should be
offered directly to the plant engineer. This can be achieved by realising intelligent systems that
can support an end-user in managing these powerful but not always easy-to-use tools. The
first step in the direction of such a new advanced system is illustrated in Figure 14.
The end-user will formulate the objectives of his analysis, in terms of (e.g.) relationships to be
searched. The Advisor, acting as an intelligent interface between the user and the set of
methods/database, will check which data are available. On the basis of the data and of the type
of task to be accomplished, the applicable methods will be chosen and their effectiveness in the
particular case evaluated.
Figure 14: The Advisor, an intelligent interface between the user, the methods (e.g. ID3/CN2, CBR, NN) and the database; it checks the possibility of application and the effectiveness of use of each method, analyses the data available, issues intelligent queries, and returns the extracted model.
The Advisor KBS will include the rules that are usually applied in choosing a search method:
the applicability of methods in terms of their minimal requirements to operate, and in terms of known
drawbacks with particular data sets or particular tasks. For example, if assessed
results are not available, it is not possible to apply supervised learning, and unsupervised
learning techniques (e.g. clustering) should be used instead.
A research effort is currently starting to provide performance indexes to assess which
approach should be used in the analysis of the available data. Tests on small data samples
could give a first advice in this respect; intelligent querying and incremental browsing
optimisation techniques are going to be integrated in this part of the module.
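One such Advisor rule can be sketched as follows (a toy illustration: only the rule about missing assessed results comes from the text, while the method names and the sample-size cutoff are assumptions made for the example):

```python
def choose_method(has_labels: bool, n_samples: int) -> str:
    """Toy version of an Advisor rule: if assessed results (labels) are
    not available, supervised learning cannot be applied, and an
    unsupervised technique such as clustering is proposed instead."""
    if not has_labels:
        return "clustering"
    # with labels available, prefer a symbolic learner on small samples
    # and a neural network when more data exist (illustrative cutoff)
    return "decision tree induction" if n_samples < 100 else "neural network"

print(choose_method(has_labels=False, n_samples=500))  # clustering
print(choose_method(has_labels=True, n_samples=36))    # decision tree induction
```

The real Advisor would combine many such rules with the indexes of performance mentioned above.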
Interaction with the user is also important in the Advisor behaviour. If, e.g., a clustering
method has to be applied, the feature selection activity is a very critical one. The system, on
the basis of a first search in the database, will propose a set of features. The user can, at this
point, select some of them using his/her domain knowledge.
The use of appropriate tools will enhance the user interface, making it more intuitive and
effective. An example is the presence of intelligent flowcharts, that is, a flowchart which can
interact with the user, asking for input and proposing different (pre-programmed) action paths
(see Figure 15).
The authors consider the friendliness of the user interface an important issue for the system to
be accepted in industrial practice. It is a matter of fact that such an innovative system, if
not supported by ease of use, will not even be considered in a normally conservative
environment like that of the power and process industry. Nonetheless, there is a growing industrial
interest in such systems, because the possible economic advantages can in some cases be
quantified at several million dollars per year.
Figure 15: An intelligent flowchart guiding the user through feature selection, cluster analysis and classifier design.
Consequently, the end-users will need additional fast and cost-effective support tools to
improve the effectiveness of failure prevention and diagnosis, and to allow the storage,
utilisation and effective management of company-specific in-house experience.
This knowledge is now usually stored as large collections of case histories in paper form, and
mainly used for archiving purposes. The particular case histories targeted for the present
project proposal concern those related to components exposed to fatigue and/or creep loading
conditions. For the target components a failure can represent huge direct and/or indirect
losses. For example, a recent steam pipe failure in a German power plant involved over 5
MECU in replacement costs, over 3 MECU in costs for investigation and analysis, and over 10
MECU indirect costs due to loss of production.
Due to the lack of suitable means of managing large quantities of information, the knowledge
has mainly been transferred through failure analysis experts (as personnel training or as item-specific
reports), through technical articles in relevant engineering journals, or in
publications on failure cases and analysis. These vehicles of transfer are limited in scope and
content, and are also inefficient for solving immediate specific problems. For example, the
most extensive books on industrial failure analysis, such as the Metals Handbook (volume on
Failure Analysis and Prevention), only contain references to a few hundred failure cases, with
only sketchy background information. In the absence of an appropriate means to manage the
bulk of failure cases, it is today not possible to utilise failure cases efficiently, even in-house,
for solving present failure analysis problems.
Through the application of a suitable data mining system, the following economic benefits
can be foreseen (the evaluations are for a medium-size European utility with 3,000 to 4,000
MWe of installed thermal capacity):
improved failure prevention and life management, as well as reduced loss of production, in
European power and process plants, to save 1% of the related cost, or about 5 MECU per year;
a reduction of maintenance costs in extensively automated plants with reduced O&M
personnel, to save 1 to 4% of the related cost, or 1 to 4 MECU per year.
The benefit to the environment is anticipated through a reduction of the emissions from sub-optimally
operating plants, whose unexpected operational deviations and consequent loss in efficiency increase the
emissions per MWh.
For example, a typical fossil power plant produces every year, for each MWe, some 50 tons of
ash and slag, 75 tons of SOx (or desulphurisation by-products), 10 tons of NOx, up to 30,000 GJ
of waste heat, and about 1,000 tons of CO2.
Assuming that 1% of these emissions is avoided by more optimal operation and maintenance,
this amounts, for a utility with 3,000 MWe installed, to 1,500 tons of ash and slag, 2,250 tons of
SOx equivalent, 300 tons of NOx, 900,000 GJ of waste heat, and 30,000 tons of CO2 avoided per year.
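The arithmetic behind these figures can be checked directly, using the per-MWe figures quoted above and the lower bound of 3,000 MWe installed:

```python
# yearly emissions per MWe of a typical fossil plant (figures from the text)
per_mwe = {"ash and slag (t)": 50, "SOx (t)": 75, "NOx (t)": 10,
           "waste heat (GJ)": 30_000, "CO2 (t)": 1_000}

installed_mwe = 3_000        # medium-size European utility (3,000-4,000 MWe)

# 1% of the total yearly emissions avoided by more optimal operation
avoided = {k: v * installed_mwe // 100 for k, v in per_mwe.items()}

for name, amount in avoided.items():
    print(f"{name}: {amount:,} avoided per year")
```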
5. Conclusions
The most important result of the research described in this paper is probably the proof that
the applied advanced data mining methods (despite being based on pure numerics)
are capable to "discover" and describe analytically the complex qualitative interrelationships
among the material parameters relevant for the life assessment of high-temperature power
plant components.
The architecture of a new integrated system for data mining in databases has been outlined. Its
objectives are not general, but mainly related to application in power and process plants for
problems of creep, fatigue and corrosion of metallic materials.
Some parts of the proposed architecture have already been realised, or are in various phases of
design/implementation. The increasing availability of large databases of material properties and
of failure case histories, together with the effectiveness of the preliminary results obtained,
makes the described system very attractive not only as an applied research tool but also as a
significant engineering tool in the industrial environment.
6. References
[1] A. Jovanovic, The SP249 Project and the SP249 Knowledge-Based System as Steps Towards the de
facto Standardisation of Power Plant Component Life Assessment Practice in Europe, 20th MPA-Seminar
October 6-7, 1994, Stuttgart, Germany
[2] A. Jovanovic, Multi-Utility Projects ESR-VGB and ESR-International: Integrated Knowledge-Based
Systems for the Remaining Life and Damage Assessment, Proc. SMiRT Post Conference Seminar Nr. 13,
Knowledge-based (Expert) System Applications in Power Plant and Structural Engineering, EUR 15408
EN, JRC, pp. 459-464
[3] P. M. Schäfer, A. Jovanovic, W. Bogaerts, M. Vancoille, FRACTAL - An Intelligent Software System
for Failure Analysis of Metallic Components Susceptible to Corrosion Related Cracking, 20th MPA-Seminar, October 6-7, 1994, Stuttgart, Germany
[4] Holdsworth S.R. (1994) BRITE-EURAM C-FAT Project BE 5245: KBS-aided Prediction of Crack
Initiation and Early Crack Growth Behaviour Under Complex Creep-Fatigue Loading Conditions, in
Knowledge-Based (Expert) System Applications in Power Plant and Structural Engineering, Jovanovic,
Lucia, Fukuda (Eds.), Joint Research Centre of the European Commission, EUR 15408 EN, pp. 235-243
[5] S. M. Psomas, A. Jovanovic, H.P. Ellingsen, V. Moustakis, G. Stavrakakis, J. Brear, Application of
machine learning methodologies for extraction of expert knowledge out of the structural failure database,
20th MPA-Seminar - SPRINT/KBS Dissemination Workshops, October 6 and 7, 1994, Stuttgart, Germany
[6] Aamodt A., Plaza E., "Case-Based Reasoning: Fundamental Issues, Methodological Variations, and
System Approaches", AICOM, Vol. 7, Nr. 1, March 1994
[7] Wasserman P.D. (1989) Neural Computing - Theory and Practice, Van Nostrand Reinhold, New York
[8] Quinlan J.R. (1988) Programs for Machine Learning, Morgan Kaufmann Publishers, San Mateo,
California
[9] S. Yoshimura, S. Psomas, K. Maile, A. Jovanovic, H.P. Ellingsen, Prediction of Possible Failure
Mechanism in Power Plant Components using Neural Networks and Structural Failure Database, 20th
MPA-Seminar October 6-7, 1994, Stuttgart, Germany
[10] Poloni M., Jovanovic A., Maile K., Holdsworth S., Brear J. (1994) Fuzzy analysis of material
properties data: Application to high temperature components in power plants, 20th MPA-Seminar - SPRINT/KBS Dissemination Workshops, October 6 and 7, Stuttgart, Germany, pp. 4.2.1-4.4.19
[11] Takagi T., Sugeno M. (1985) Fuzzy Identification of Systems and Its Applications to Modeling and
Control, IEEE Trans. on Syst., Man and Cybern., Vol. SMC-15, No. 1, pp. 116-132
[12] Sugeno M., Tanaka K. (1991) Successive identification of a fuzzy model and its application to
prediction of a complex system, Int. Journal of Fuzzy Sets and Systems, No. 42, pp. 315-334
[13] M. Holsheimer, A.P.J.M. Siebes (1994) Data Mining: the search for knowledge in databases, Centrum
voor Wiskunde en Informatica, Amsterdam, The Netherlands, Report CS-R9406
[14] MIT GmbH (1995) DataEngine User Manual, Third edition, Aachen, Germany
[15] Pal S.K., Mitra S. (1992) Multilayer perceptron, fuzzy sets, and classification, IEEE Transactions
on Neural Networks, Vol. 3, pp. 683-697
[16] Bezdek J.C. (1981) Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New
York
[17] S. Psomas, G. Stavrakakis, V. Moustakis and A. S. Jovanovic, An expert system for avoiding repeated
structural failures in power plants, Proceedings of 1994 European Simulation Multiconferences, Barcelona,
Spain, June 1994, pp.480-485.
[18] A. S. Jovanovic, KBS-related research programs and software systems developed at MPA Stuttgart,
Germany, Proceedings of SMiRT Post Conference Seminar Nr. 13, Knowledge-based (Expert) System
Applications in Power Plant and Structural Engineering, Constance, Germany, Aug. 1993, EUR 15408
EN, JRC, pp. 175-187.
[19] G. Yagawa, S. Yoshimura, Y. Mochizuki and T. Oishi, Identification of crack shape hidden in solid
by means of neural network and computational mechanics, Proceedings of IUTAM Symposium on Inverse
Problems in Engineering Mechanics, Tokyo, Japan, May 1992, pp. 213-222, Springer-Verlag.
[20] M. J. S. Vancoile, H. M. G. Smets and W. F. L. Bogaerts, Intelligent corrosion management systems,
Proceedings of SMiRT Post Conference Seminar Nr. 13, Knowledge-based (Expert) System Applications in
Power Plant and Structural Engineering, Constance, Germany, Aug. 1993, EUR 15408 EN, JRC, pp. 93-112.
[21] Bezdek J.C., Pal S.K. (Eds.) (1992) Fuzzy Models for Pattern Recognition, IEEE Press, New York
[22] Krishnapuram R. and Keller J. M. (1994) Fuzzy and Possibilistic Clustering Methods for Computer
Vision, in Neural and Fuzzy Systems, S. Mitra, M. Gupta, and W. Kraske (Eds.), SPIE Institute Series, Vol.
IS 12, pp. 133-159
[23] Dubois D., Prade H. (1988) Possibility Theory: An Approach to Computerized Processing of
Uncertainty, New York: Plenum Press
[24] Carruthers R.B., Day R.V. (1968) The Spheroidisation of some Ferritic Superheater Steels, Central
Electricity Generating Board, North Eastern Region, Scientific Services Department, Report
SSD/NE/R138.
[25] Zimmermann H.-J. (1991) Fuzzy Set Theory and Its Applications (2nd Edition), Kluwer Academic
Publishers, Boston, Dordrecht
[26] Wang Li-Xin (1994) Adaptive fuzzy systems and control: design and stability analysis, Prentice-Hall,
Englewood Cliffs, New Jersey
[27] Shammas M.S. (1987) Predicting the remanent life of 1Cr½Mo coarse-grained heat affected zone
material by quantitative cavitation measurements, Central Electricity Generating Board, Report
TPRD/L/3199/R87
[28] Neubauer B., Wedel U. (1983) Restlife Estimation of Creeping Components by Means of Replicas,
ASME International Conference on Advances in Life Prediction Methods, Albany, NY
[29] EPRI (1990) Field Metallography Research Leads to Improved Re-Examination Interval For Creep
Damaged Steampipes, EPRI First Use Report 197
CHAPTER 2
1. Introduction
Nuclear structural components such as pressure vessels and piping are typical examples of huge-scale
artifacts, while micromachines, whose size ranges from 10⁻⁶ to 10⁻³ m, are typical examples of tiny-scale
artifacts. They have their own missions, and are designed by different engineers in different
engineering fields. However, there are some common features in their design processes. These
practical structures are in general related to various coupled physical phenomena, and they are
required to be evaluated and designed considering the coupled phenomena. A lot of trial-and-error
evaluations are indispensable. Such situations make it very difficult to find a satisfactory or
optimized solution for practical structures, although numerous optimization algorithms have been
studied.
Figure: flow of automatic mesh generation, i.e. node generation based on the bucketing method, followed by element generation based on Delaunay triangulation (with interactive processes).
In the present system, nodes are first generated, and then a FE mesh is built. In general, it is
difficult to control the element size well for a complex geometry. A node density distribution
over the whole geometry model is therefore constructed as follows.
The system stores several local nodal patterns, such as a pattern suitable for capturing stress
concentration well, a pattern to subdivide a finite domain uniformly, and a pattern to subdivide
the whole domain uniformly. A user selects some of these local nodal patterns, depending on
the purpose of the analysis, and specifies their relative importance and where to locate them.
The process is illustrated in Fig. 2.
Fig. 2 Superposition of nodal patterns based on fuzzy theory: (a) a nodal pattern around a crack tip; (b) membership functions for nodal patterns I and II along the symmetric line, with dominant areas A and B; (c)-(e) buckets over the domain and its boundary, tested candidate nodes, and the nodes finally generated around the crack tip.
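The superposition of nodal patterns can be sketched in one dimension as follows (an illustration only; the membership-function shapes, the crack-tip position and the relative importance weights are assumptions, not values from the paper):

```python
import numpy as np

# membership functions describing where each stored nodal pattern should
# dominate: pattern I concentrates nodes near a crack tip at x = 0,
# pattern II subdivides the whole domain uniformly (illustrative shapes)
def mu_pattern_I(x, crack_tip=0.0, width=0.2):
    return np.exp(-((x - crack_tip) / width) ** 2)

def mu_pattern_II(x):
    return np.full_like(x, 0.3)

x = np.linspace(0.0, 1.0, 101)
w_I, w_II = 1.0, 0.5          # relative importance chosen by the user

# superpose the patterns: at each location the dominant (maximum)
# weighted membership decides the local node density
density = np.maximum(w_I * mu_pattern_I(x), w_II * mu_pattern_II(x))
print(density[0], density[-1])   # dense at the crack tip, sparse far away
```

Candidate nodes would then be accepted with a probability proportional to this density, which is how the stress-concentration pattern dominates near the crack tip while the uniform pattern takes over elsewhere.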
2.1.6 FE analyses
The present system automatically converts the geometry models of concern into various FE
models, depending on the physical phenomena to be analyzed, i.e. stress analysis, eigenvalue
analysis, thermal conduction analysis, electrostatic analysis, and so on. The current version of
the system produces FE models of quadratic tetrahedral elements, which are compatible with
the commercial FE code MARC [9]. FE analyses are performed automatically. FE models and
analysis results are visualized using MENTAT, a pre/post processor of MARC [9].
Figure: 2D design window, with satisfactory and unsatisfactory solutions marked.
Figure: structure of the micro wobble motor: a movable ring (rotor) supported by a spiral beam anchored to the substrate, with electrode and insulation layers on the substrate ((b) cross-section view); in the preparation phase the design parameters serve as input data from which the physical values are computed.
Table: design parameters of the rotor (values recovered: 200 µm, 2.5 µm, 206 µm, 2.5 µm, 5.0, 360 deg., 1.0; material: Si; Young's modulus: 190 GPa; yield stress: 7 GPa; mass density: 2300 kg/m³; permittivity of insulator; AB = CD; 0.3; 4.0).
Fig. 7 Boundary conditions for in-plane deformation analysis of rotor (fixed displacement).
Figure: variation of torque with the angle of rotation (0 to 90 degrees); starting torque: 0.42 × 10⁻⁹ Nm.
Some of the earliest techniques found among the approaches derived from probability are based on single-valued representations. These techniques started from approximate methods, such as the modified Bayesian rule
[26] and confirmation theory [47], and evolved into formal methods for propagating probability values over Bayesian
Belief Networks [38, 39]. Another trend among the probabilistic approaches is represented by interval-valued representations such as Dempster-Shafer theory [24, 45, 34, 49].
In all these approaches, the basic inferential mechanism is the conditioning operation.
1.1.4 Fuzzy Logic Based Reasoning Systems
Among the fuzzy logic based approaches, the most notable ones are based on a fuzzy-valued representation of
uncertainty. These include the Possibility Theory and Linguistic Variable approach [57, 54], and the Triangular-norm based approach [10, 9, 20].
The basic inferential mechanism used in possibilistic reasoning is the generalized modus ponens [54], which
makes use of inferential chains (syllogisms).
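The generalized modus ponens can be illustrated with a small numeric example (the membership values below are invented for illustration; the max-min composition itself is the standard possibilistic mechanism):

```python
import numpy as np

# rule "IF temperature is HIGH THEN risk is SEVERE" encoded as a fuzzy
# relation over two small discrete universes (Mamdani-style min implication)
high   = np.array([0.0, 0.3, 0.7, 1.0])     # membership of "HIGH"
severe = np.array([0.1, 0.5, 1.0])          # membership of "SEVERE"
R = np.minimum.outer(high, severe)

# observed fact: temperature is only "FAIRLY HIGH" (not exactly HIGH)
fairly_high = np.array([0.1, 0.6, 0.9, 0.8])

# generalized modus ponens: conclusion = fact o R (max-min composition),
# yielding a fuzzy conclusion even though the fact does not match the
# rule premise exactly
conclusion = np.max(np.minimum(fairly_high[:, None], R), axis=0)
print(conclusion)   # a (weakened) membership function for the risk
```

Chaining several such steps gives the inferential chains (syllogisms) mentioned above.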
1.2 Complementarity
In the next section we will briefly describe the development process common to both fuzzy-logic rule based systems and fuzzy controllers. In sections three and four we will describe the technology development for a fuzzy expert system and a fuzzy controller, respectively. In section five we will discuss a few applications of fuzzy controllers. Finally, in section six we will discuss some future trends of this promising technology.
(FES)
For clarity's purpose, our discussion on FES will be anchored on RUM/PRIMO, a Fuzzy Expert System which was developed by the author in 1987 [20] and further refined in the early nineties [6].
3.1 FES Reasoning tasks
As mentioned in the previous section, the reasoning tasks required by a FES can be divided into three layers: the knowledge representation, to determine issues such as the appropriate data structure for the uncertainty information and meta-information, and the input and termset granularity selection; the inference mechanism, to determine the uncertainty calculi that perform the intersection, detachment, union, and pooling of the information; and the control of the inference, to determine the calculi selection, the conflict measurement and resolution, the ignorance measurement, and the resource allocation.
Figure: vibration modes of the structure, Mode I (46.2 kHz) and Mode II (101 kHz), and a comparison of 2D analytical and 3D FEM results for the starting torque.
The neural network has four units in the input layer, ten units in the hidden layer, and two units in the output layer; teaching data are supplied at the output layer.
Mean_Error = (1 / (n·t)) Σ_{p=1}^{t} Σ_{k=1}^{n} | T_pk − O_pk |     (1)
where
n : the number of output units
t : the number of training or test data sets
T_pk : the teacher signal of the k-th output unit for the p-th data set
O_pk : the output of the k-th output unit for the p-th data set
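Eq. (1) can be evaluated directly, for example for a network with two output units (the teacher signals and outputs below are invented for illustration):

```python
import numpy as np

def mean_error(T, O):
    """Eq. (1): average absolute difference between teacher signals T_pk
    and network outputs O_pk over t data sets and n output units."""
    T, O = np.asarray(T, dtype=float), np.asarray(O, dtype=float)
    t, n = T.shape                      # t data sets, n output units
    return np.sum(np.abs(T - O)) / (n * t)

# two output units, three data sets (illustrative values)
teacher = [[1, 0], [0, 1], [1, 0]]
output  = [[0.9, 0.2], [0.1, 0.8], [0.7, 0.1]]
print(mean_error(teacher, output))
```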
4.2.3 DW search
DWs (design windows) are searched using the trained neural network. Fig. 20 shows the DW in
the T, G and W space; the solutions in it satisfy the condition that the starting torque is larger
than 0.32 × 10⁻⁹ Nm. It can be seen in the figure that satisfactory solutions can be found when
T is small, G is small, and W is larger.
Next, the sizes of the micro wobble actuator to be
Figure: training and test errors versus the number of training iterations (40,000 to 200,000) for the parameter sets (30.0, 5.0, 1.8) and (20.0, 2.0, 0.2).
Fig. 22 Design windows for a driving voltage of 150 V, in the space of rotor thickness, insulation thickness and rotor width (20 to 30); the number of searched points in the design window is 18,420.
5. Conclusions
Acknowledgements
References
[1] Yoshimura, S., Yagawa, G. and Mochizuki, Y., "Automation of Thermal and Structural Design Using AI Techniques", Engineering Analysis with Boundary Elements, 7 (1990) 73-77.
[2] Yoshimura, S., Yagawa, G. and Mochizuki, Y., "An Artificial Intelligence Approach to Efficient Fusion First Wall Design", Lecture Notes in Computer Science (Computer-Aided Cooperative Product Development) (in Japanese).
[3] in Print.
[5] Ueda, H., Uno, M., Ogawa, H., Shimakawa, T., Yoshimura, S. and Yagawa, G., "Development of Expert System for Structural Design of FBR Components", Journal of the Atomic Energy Society of Japan, 37 (1995) (in Japanese).
[6] Rumelhart, D. E., Hinton, G. E. and Williams, R. J., "Learning Representations by Back-propagating Errors", Nature, 323 (1986) 533-536.
[7] Yagawa, G., Yoshimura, S., Soneda, N. and Nakao, K., "Automatic 2- and 3-D Mesh Generation Based on Fuzzy Knowledge Processing", Computational Mechanics, 9 (1992) 333-346.
[8] Yagawa, G., Yoshimura, S. and Nakao, K., "Automatic Mesh Generation of Complex Geometries Based on Fuzzy Knowledge Processing and Computational Geometry", Integrated Computer-Aided Engineering, in Print.
[9] MARC Analysis Research Corporation, MARC manual K5.2 (1994).
[10] Chiyokura, H., Solid Modeling with DESIGNBASE: Theory and Implementation, Addison-Wesley (1988).
[11] Shibaike, N., "Design of Micro-mechanisms Focusing on Configuration, Materials and Processes", International Journal of Materials & Design, in Print.
[12] Zadeh, L. A., "Fuzzy Algorithms", Information and Control, 12 (1968) 94-102.
[13] Zadeh, L. A., "Outline of a New Approach to the Analysis of Complex Systems and Decision Process",
[15] Asano, T., "Practical Use of Bucketing Techniques
INTELLIGENT NDI DATA BASE FOR A PRESSURE VESSEL

Shuichi Fukuda
Tokyo Metropolitan Institute of Technology
6-6, Asahigaoka, Hino, Tokyo, 191, JAPAN
tel: +81-425-83-5111 ext. 3605
fax: +81-425-83-5119
e-mail: fukuda@mgbfu.tmit.ac.jp
1. Introduction
This paper describes the activities of the committee on the Nondestructive Evaluation
Data Base of the Japan Society of Nondestructive Inspection. This committee was
set up, together with several other committees of the Japan Society of
Nondestructive Inspection, to promote standardization in NDI
technologies, with financial support from the Ministry of International Trade
and Industry.
The committee developed several prototype systems based upon a survey.
This paper describes the outline of the survey and the systems developed based
upon it.
2. Survey
To clarify what should be done by this committee, we carried out a survey,
and received 62 answers out of 167.
[1] Is an NDI data base necessary?
yes, very much (44%), yes (52%), not so much (5%), absolutely not (0%)
[2] Has an NDI data base been developed, or is one to be developed?
yes (34%), no (56%), no answer (10%)
[3] If yes in [2], what kind of data base?
image processing data base for inspection (3), flaw evaluation (3), welding
defects (3), text data base (3), maintenance inspection for a cubic storage
tank (1), online welding defect evaluation (1), visual inspection (1),
remaining life evaluation (1), corrosion of pipes (1),
others (21%)
[16] What NDI techniques do you think are important for an NDI data base?
UT (41), RT (24), ET (21), MT (20), PT (15), SM (7), AE (6), others
[17] What conditions do you think are important for an NDI data base?
field (18), factory (8), lab (2)
[18] What material?
metal and non-metal (13), steel (18), metal (3), new material (4)
[19] What structure?
welded structures (11), vessels (8), piping (5), heat exchangers (3),
steel raw material (6)
[20] What failures?
cracks (15), corrosion (14), weld defects (5), embrittlement (4)
Design
This is still in the stage of conceptual design. This system aims at sampling data
in real time and storing them in the data base. There is no appreciable difference
in real-time processing between NDI data and keyboard inputs: both can be
processed as discontinuous variable inputs of several MB/sec, and the only
difference is the input speed. But discontinuous high-speed analog inputs are very
difficult to process on an ordinary computer.
Therefore, we made a conceptual design for such a system using a computer
that possesses a real-time OS and an A/D conversion function. For the OS, we will
consider using a real-time UNIX with 10 microsec to 1 millisec response time. At
the same time we adopt multiple CPUs and distribute the tasks symmetrically. Further,
data are transferred without CPU intervention to increase the processing speed
and to reduce the CPU load. Regrettably, we cannot at present find appropriate
means to retrieve these data in an intelligent manner, so we are
processing them just as simple data files with time stamp tags. Fig. 1 shows the
conceptual design of this system.
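Storing the samples as simple data files with time stamp tags, as described, can be sketched as follows (an illustration only; the binary record layout and the file name are assumptions, not part of the committee's design):

```python
import struct
import time

def store_samples(path, samples):
    # each record: an 8-byte time stamp tag followed by an 8-byte
    # sample value, both stored as little-endian doubles
    with open(path, "wb") as f:
        for value in samples:
            f.write(struct.pack("<dd", time.time(), value))

def load_samples(path):
    # without an intelligent retrieval scheme, the file can only be
    # read back sequentially as (timestamp, value) records
    with open(path, "rb") as f:
        data = f.read()
    return [struct.unpack_from("<dd", data, i) for i in range(0, len(data), 16)]

store_samples("ndi_samples.bin", [0.12, 0.34, 0.56])
records = load_samples("ndi_samples.bin")
print(len(records))
```

The limitation noted in the text is visible here: the time stamp is the only key, so any smarter retrieval would require an additional indexing layer.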
tables and/or photos. These data constitute pieces of knowledge; they can be
roughly divided into texts and figures (images). But figures, tables and photos are
without exception referred to in the text. Thus, we can use the text to retrieve
these pieces of information.
It is well known that the literature in a certain field, or the reports written by a
certain person, always have a certain style or use a certain vocabulary. Thus, if we
examine the sentences for their concordance, we will be able to extract a certain
characteristic or piece of knowledge. These vocabularies are always
linked to other specific words or sentences. We can utilize this for
classification. After the classification is completed, the report turns into an
object-oriented database without any trouble: we let a certain word be
an object, the classification be a class and category, and the links be inheritance
and message passing.
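The concordance idea above (every word linked to the words it co-occurs with) can be sketched briefly. The paper used Micro OCP for this; the snippet below is our own illustrative keyword-in-context index, not OCP.

```python
import re
from collections import defaultdict

def concordance(text, width=3):
    """Build a keyword-in-context index: for every word, collect the
    surrounding words (up to `width` on each side) wherever it occurs."""
    words = re.findall(r"[A-Za-z]+", text.lower())
    index = defaultdict(list)
    for i, w in enumerate(words):
        left = words[max(0, i - width):i]
        right = words[i + 1:i + 1 + width]
        index[w].append((left, right))
    return index

def collocates(index, word):
    """Words that co-occur with `word` -- the 'links' the text proposes
    to use for classification."""
    seen = set()
    for left, right in index.get(word, []):
        seen.update(left)
        seen.update(right)
    return seen
```

Treating each frequent word as an object and its collocates as links is then one plausible mapping onto the class/inheritance structure the paper describes.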
We developed a prototype based on this idea, with a sample from MITI code
number 501, "Technical Standard for Structure, etc. of Power Generating Nuclear
Reactor Facilities."
The computers we used are a SUN SPARC 2 GX Plus, a Macintosh IIci and a NeXT
Cube. The Japanese sentences are input using a scanner and the OCR program
MacReader Japan. The sentences are analyzed using Micro OCP (Oxford
Concordance Program). For object programming we used Smalltalk-80 and
ObjectWorks for Smalltalk; HyperCard and the Expanded Tool Kit are also
used.
As Smalltalk does not discriminate among data structures, we can easily take in
image data. Fig.2 and Fig.3 show samples of the screen images.
inspect using an arrow. If the location is specified, the type of joint there will
appear, and the button for the material will also appear at the right. Thus, even if a
user does not know the name of such a type of welded joint, he can easily find it out
by indicating the location with an arrow. This improves the user interface a great
deal for a non-expert user, who knows where to inspect but does not know the
expert's name for that joint. If a user designates the materials used there, the
computer prompts him or her to input the thickness data. When all the necessary
information is given, the final screen image appears as shown in Fig.7. The top
item shows the input type of structure, in this case a pressure vessel; the second
shows the input material, low alloy steel; the third, the thickness; and the 4th item,
the type of joint, is automatically filled in when the user indicates the joint he or
she wishes to inspect, as already described. The 5th item, the working stress, is
also automatically filled in in a similar manner if the load conditions are normal.
The location specification by an arrow also automatically fills in the 6th item,
where the applicable codes and standards are shown, and the 7th item, where the
applicable inspection methods are shown.
The 6th and 7th items correspond: the inspection methods specified by the code
shown in the 6th item are shown in the 7th. In this example, the 6th item shows the
name Fire Prevention Law and the 7th item shows Ultrasonic Inspection, which is
one of the applicable inspection techniques specified by this law for the welded
joint under these conditions. In the large box under these 7 items, the NDI
procedure specification is shown.
When we push the 7th button, another applicable inspection technique under the
Fire Prevention Law will be shown. We can refer to all the applicable techniques
by continuing to click the 7th button. And if we click the 6th button, we learn what
other kinds of codes and standards are applicable to the welded joint under these
conditions.
This system can be used for design support, too, because by clicking the buttons
for materials (2nd), thickness (3rd), etc., we learn what kinds of codes and
standards should be referred to.
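The automatic filling of the 6th and 7th items amounts to a lookup from the inspection conditions to the applicable codes and the methods each code specifies. The rule table below is a minimal sketch; only "Fire Prevention Law" and "Ultrasonic Inspection" come from the text, the other entries and all field names are our illustrative assumptions.

```python
# Hypothetical rule table: conditions that must match -> (code, methods).
# Only the Fire Prevention Law / Ultrasonic Inspection pairing appears in
# the paper; the rest is illustrative.
RULES = [
    ({"structure": "pressure vessel", "material": "low alloy steel"},
     "Fire Prevention Law", ["Ultrasonic Inspection", "Radiographic Inspection"]),
    ({"structure": "piping"},
     "High Pressure Gas Law", ["Magnetic Particle Inspection"]),
]

def applicable(conditions):
    """Return (code, methods) pairs whose required conditions all match,
    mimicking the automatic filling of the 6th and 7th screen items."""
    hits = []
    for required, code, methods in RULES:
        if all(conditions.get(k) == v for k, v in required.items()):
            hits.append((code, methods))
    return hits
```

Cycling through `methods` with repeated clicks of the 7th button, and through `hits` with the 6th, reproduces the browsing behavior described above.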
Fig.1 Conceptual Design for Ultrasonic Testing Equipment Using Real Time Unix
[Figs. 2-10: screen images of the prototype system (Japanese-language content not recoverable from the OCR). Legible fragments include the Smalltalk Launcher menu (Browsers, Utilities, Changes, Special, Quit), a list of host machines and communication software (PC98 and DOS/V PCs, Apple Macintosh and other PCs connected over RS-232C, e.g. BigMode 4.0), and a numbered NDI system member list (NDI00001-NDI00010).]
Fig.11 Submenu (1)
[Screen image: NDI network message list, with columns NUM (00001-00010), R.DATE (95-02-13), R.TIME (17:20:27 to 17:49:02), SENDER (NDISYS) and CONTENTS; the Japanese message text is not recoverable from the OCR.]
[Table: numbered index of Japanese radiographic-inspection literature (largely not recoverable from the OCR); legible entries include "X-Ray Standards for Production and Repair Welds", the Standard Grid Method, Betatron radiography, and ASME-related items, with N. C. Miller and G. H. Tenn among the authors listed.]
[ERA Technology figure (W/69/FL0052B): damage and defects developing over time under stress, temperature, mechanical and environmental/mechanical influences.]
Damage Processes:
Creep
Corrosion
Carburization
Hydrogen Attack
Fatigue
Erosion
Temper Embrittlement
[ERA Technology figure: probabilistic damage/cracking assessment routes.]
about 15,000 tubes [1]. Tube degradation could occur due to thermal and mechanical stresses,
fatigue and creep, wear and fretting, and corrosion. Depending on plant operating conditions,
one or more of the above causes can damage the tubes [2]. The degradation of tubes in steam
generators accounts for the highest failure rates. Therefore, the inspection of steam generators is critical
to the safe and economical operation of nuclear power plants.
In the past, eddy current inspection has proven to be fast and effective in detecting and
sizing most of the degradation mechanisms that occurred in steam generators, and therefore it has
been used as the standard technique for steam generator tubing inspection. However, the eddy
current phenomenon is described by three-dimensional, nonlinear, partial differential equations
with very complicated boundary conditions, and modeling methods are very difficult to
apply to test data analysis. Currently, only visual observation is used for eddy current
test data analysis. This technique requires highly trained personnel and is labor intensive.
Human error in performing the analysis of test data is the main drawback for its successful
application. Some other disadvantages of eddy current inspection method include:
1. Eddy currents are affected by minor variations in the permeability of the test object.
2. Eddy currents are affected by the orientation of a flaw.
3. Sensitivity is much greater at the test surface closest to the test coil.
4. Multi-frequency test has large and complex databases.
The current research and development of eddy current inspection is directed in part towards more
quantitative test results and conclusions, and towards reducing human interaction with the testing
process [3].
The research undertaken here focuses on the problem of automating steam generator eddy
current data analysis using an integration of expert system, database management, artificial
neural networks, digital signal processing techniques, and decision making using fuzzy logic. In
recent years research in neural networks has been advanced to the point where several real-world
applications have been successfully demonstrated [4]. These include automated pattern
classification, signal validation, nuclear plant monitoring, plant state identification during
transients, estimation of performance related parameters, underwater acoustic signature
classification and text recognition. Fuzzy logic and expert systems have been shown to be highly
successful, reliable, and superior in performance to conventional systems [5]. Utilizing a fuzzy
logic representation offers the advantages of describing the state of the system in a condensed
form, developed through linguistic description and is convenient for applications in monitoring,
diagnostics, and control algorithms [6]. The integration of database management, neural
networks, fuzzy logic, expert systems, and digital signal processing techniques for the
automation of NDT signature analysis is a unique feature of this research. This research will also
provide a technology base for the safety assessment of system and subsystem technologies used
in nuclear power applications of artificial intelligence techniques.
[Figure: block diagram of the system — User Interface, General Information, Data Representation, Peak Detector, Flaw Detector, Calibration Data (Magnitude Calibration, Phase Calibration), Defect Sizing, and Knowledge/Rule Base.]
Rule base
The rule base consists of logical steps for data analysis and rules for decision making.
Data calibration
Data calibration can perform the null point determination, phase angle calibration and
magnitude calibration.
Peak detection
Peak detection scans the ECT measurement raw data and finds all the peaks.
Data representation
Data representation reorganizes the ECT measurement raw data using different data
representation algorithms.
Fuzzy flaw detection
The fuzzy flaw detection system prepares the fuzzy system input and fuzzy membership
functions, executes the fuzzy inference engine program, and finds the flaws in the ECT data.
Defect sizing
The defect sizing function block consists of trained neural networks for defect sizing.
ECT DATA MANAGEMENT AND CALIBRATION
Fifty-seven sets of multifrequency ECT data were obtained from the EPRI NDE Center. These
data files contain the pitting, ODSCC, and field eddy current test data, and are stored in the
DRES format. The DRES data have both calibration data and actual data. Figure 2 shows a
typical impedance plane (resistance versus inductive reactance) trajectory of data from a
differential eddy current probe transducer.

Figure 2: A typical impedance plane trajectory of data from an eddy current transducer.
ECT Data Management
The size of each EPRI ECT data file is about 35 kbits. Each data file contains sixteen types
of signals coming from eight measurement frequency channels. For such a large, multi-frequency
data file, it is necessary to develop a procedure to manage and compress the data. The data
management system of EDDYAI has the following main components.
Calibration data or actual data: The user can perform the data calibration procedure by selecting
the calibration data, or start the data analysis procedure by choosing actual data from the DRES
format data file.
Selection by measurement frequency: The user can select the ECT signal by measurement
frequency. Therefore, we deal with only 1/8 of the ECT measurement data for each data analysis
cycle.
Peak detector: The ECT data can be classified as normal data or unusual data. The ECT signal
for a good tube with the same structure appears as a straight line; this kind of data is defined as
normal data. An ECT signal that appears with peaks, curves, or big jumps indicates changes in
tube structure (such as tube support or tube end) or tube damage (such as pitting, thinning, etc.);
this kind of data is referred to as unusual data.

[Figure 3: inputs and outputs of the peak detector — input sequence R, Threshold and Width; outputs Count and Indices.]
A peak detection technique is used to find unusual data parts. Figure 3 shows the inputs
and outputs of a peak detector. We analyze the input sequence R for valid peaks and keep a
Count of the number of peaks encountered and a record of Indices which locate the points at
which the Threshold is exceeded in a valid peak. A peak is valid when the number of
consecutive elements of R that exceed the Threshold is at least equal to the Width. We use
radii as the input sequence R. The radius is defined as the distance from the current position to
the null point.
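The peak-validity rule above (at least Width consecutive elements of R above the Threshold) can be sketched directly. This is our own minimal rendering of the described algorithm, not the EDDYAI code.

```python
def detect_peaks(r, threshold, width):
    """Scan the radius sequence `r`; a peak is valid when at least `width`
    consecutive elements exceed `threshold`.  Returns the peak Count and the
    Indices of the points exceeding the threshold within valid peaks."""
    count, indices = 0, []
    run = []
    # Append a sentinel equal to the threshold so a run at the end is closed.
    for i, value in enumerate(list(r) + [threshold]):
        if value > threshold:
            run.append(i)
        else:
            if len(run) >= width:
                count += 1
                indices.extend(run)
            run = []
    return count, indices
```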
ECT Data Calibration
Eddy current inspection requires standard calibration specimens (tubes) with artificial
defects for initial instrumentation set-up and subsequent signal analysis and interpretation. These
tubes should be identical in material and size to tubes to be tested. Minimum calibration
requirements include inner diameter (ID), outer diameter (OD) and through-wall defects [7]. For
eddy current data analysis, there are three steps in data calibration: null point determination, phase
angle calibration and magnitude calibration.
Null Point Determination: The null point is the ECT signal in the flaw-free region of the
calibration specimen. It is the origin for phase angle and magnitude calculation. An
accurate null point can make the decisions based on the information of phase angle and
magnitude of the ECT signal more reliable. Two procedures to find the null point in the ECT
calibration data have been developed in this project.
The first procedure for null point determination uses the mean value of the flaw free
region data as the null point. The drawback of this procedure is that the lift-off effect can change
the true null point and the equipment noise will also reduce the accuracy of the null point
estimation. A second null point determination procedure has been developed using the
intersection of phase angle slopes of outside diameter (OD) standard defects. In the ECT
calibration data set, there are five OD defects varying from 100 percent through wall depth to 20
percent through wall depth. These OD signals should share the same null point. Any two phase
angle slopes of these OD signals can determine the null point by finding the intersection of the
two slopes.
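Finding the intersection of two phase-angle slopes reduces to intersecting two lines in the impedance plane. A minimal sketch, assuming each slope is given by two points on it:

```python
def line_intersection(p1, p2, q1, q2):
    """Intersection of the line through p1,p2 with the line through q1,q2,
    solved by Cramer's rule (returns None for parallel lines)."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    d1x, d1y = x2 - x1, y2 - y1
    d2x, d2y = x4 - x3, y4 - y3
    det = d1x * d2y - d1y * d2x
    if abs(det) < 1e-12:
        return None  # parallel slopes give no unique null point
    # Solve p1 + t*d1 == q1 + s*d2 for t.
    t = ((x3 - x1) * d2y - (y3 - y1) * d2x) / det
    return (x1 + t * d1x, y1 + t * d1y)
```

In practice, any two of the five OD phase-angle slopes can be intersected this way, and averaging over all pairs would reduce the effect of noise on the estimated null point.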
Phase Angle Calibration: In ECT signal analysis, phase angle information is used to find the
flaw locations and the flaw depth. In order to remove the effect of lift-off, the phase angles of
100% through wall OD signals for different frequencies should be placed at an angle of 140
degrees. If the phase angle value of a 100% through wall OD signal is not around 140 degrees, it
should be rotated to 140 degrees. This is the phase angle calibration procedure.
Magnitude Calibration: In ECT signal analysis, the magnitudes of the 20% through wall OD
signals are usually set to 4 volts. The magnitudes of all other signals are converted to voltage
scale by comparison with the 20% OD signal. This is the magnitude calibration procedure.
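The two calibration steps can be sketched together, treating each ECT sample as a complex number relative to the null point (an assumption made here for illustration; the function and parameter names are ours):

```python
import cmath, math

def calibrate(signals, od100, od20, target_angle_deg=140.0, target_volts=4.0):
    """Rotate all signals so the 100% through-wall OD signal sits at the
    target phase angle (140 degrees), then scale so the 20% OD signal has
    the target magnitude (4 volts).  Signals are complex numbers measured
    relative to the null point."""
    rotation = cmath.exp(1j * (math.radians(target_angle_deg) - cmath.phase(od100)))
    scale = target_volts / abs(od20)  # rotation does not change magnitudes
    return [s * rotation * scale for s in signals]
```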
FUZZY LOGIC DECISION MAKING FOR FLAW DETECTION
In ECT data analysis, the decision for flaw identification and estimation may have a high
uncertainty because of a large number of defects with overlapping patterns and due to
information from multi-frequency tests. Fuzzy logic may be used for decision making in this
situation, which is characterized by uncertain and/or non-crisp information. Fuzzy logic is the
logic of fuzzy (approximate) measurements and is believed to be similar to the human decision-
making process. The beginning of fuzzy logic is most widely associated with Lotfi Zadeh: in
1965 Zadeh wrote the original paper formally defining fuzzy set theory, from which fuzzy logic
emerged [8]. The important difference between the fuzzy logic approach and the traditional
approach is that the former uses qualitative information, whereas the latter requires rigid
mathematical relationships describing the process.
Fuzzy logic is characterized by a linguistic variable whose values are words or sentences
in a synthetic language. For example, we can define temperature as a fuzzy variable to take
linguistic values of low, medium and high. The values low, medium and high are called "fuzzy
values." The definition of "low" or any other term depends on the user's judgment. In fuzzy
logic, such a judgment is formulated by a possibility distribution function (often taking values
between 0 and 1) and is referred to as a "membership function." The key issues in fuzzy logic
applications are:
[Figure: structure of the fuzzy system — System Input → Fuzzification → Rule Evaluation (using the Membership Functions) → Defuzzification → System Output.]
detection procedure. In the data scan procedure, the Width was set to 3, and the Threshold was
set to 0.5 volt. Twenty peaks were found after data scan.
In the fuzzy flaw detection system, the membership functions were established by using
the calibrated OD defect phase angles. As described in Figure 5, once the values of points A, B,
C, D, E and F are determined, the membership functions can be obtained. The phase angle
values are converted to the conventional form which uses the 180 degree axis as the 0 degree
axis. The twenty peaks were tested using the fuzzy flaw detection program. Table 1 shows the
results of the test. From the results, it can be concluded that the fuzzy system can detect flaws
with a high degree of success.
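The membership functions built from the points A-F can be illustrated with a triangular shape and a maximum-membership decision rule. The breakpoints, class names and the 0.2 cut-off below are assumed for illustration; the actual A-F values come from the calibrated OD defect phase angles.

```python
def triangular(a, b, c):
    """Membership function rising from a to a peak at b, falling to c."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

def classify(angle, classes):
    """Pick the class with the largest membership at `angle`; report
    'Unknown' when no class reaches a minimal degree of membership."""
    best, grade = "Unknown", 0.2  # 0.2 cut-off is an assumption
    for name, mu in classes.items():
        if mu(angle) > grade:
            best, grade = name, mu(angle)
    return best

# Illustrative classes built from assumed breakpoints:
CLASSES = {"OD": triangular(20, 70, 120), "OT": triangular(120, 170, 220)}
```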
Table 1: Results of the fuzzy flaw detection tests (phase angle values in degrees)

Test#  Data Location   Phase angles                    Desired Decision  Fuzzy Logic Decision
  1        125         118.4  195.7  273.3    9.8     Unknown           PN
  2        130         118.4  195.7  273.3    9.8     Unknown           PN
  3        207         177.5  108.5  154.2  161.2     OT                OT
  4        214         177.3  162.3  155.0  159.1     OT                OT
  5        293          39.6   40.0   40.0   40.0     OD                OD
  6        318         170.1  166.8  195.1  225.7     OT                OT
  7        322         167.2  166.9  197.0  226.1     OT                OT
  8        324         165.7  167.0  197.5  223.8     OT                OT
  9        351          86.9   68.9   56.3   47.8     OD                OD
 10        406         102.3   81.0   63.3   51.8     OD                OD
 11        459         121.5   98.4   76.9   60.8     OD                OD
 12        511         154.5  122.9   95.2   71.8     OD                OD
 13        530         138.6  162.1  201.9  233.5     OT                OT
 14        532         139.1  159.1  199.4  233.5     OT                OT
 15        545          61.3  154.2  205.4  240.4     OT                OT
 16        563         351.2  348.3  205.3   19.2     Unknown           PN
 17        596          45.4  349.8    2.8  276.1     Unknown           PN
 18        598          51.3  355.0  298.9  276.3     Unknown           OT
 19        614         168.5  158.4  167.8  187.7     Unknown           OT
 20        651         170.1  164.3  175.9  195.4     Unknown           OT
[Figure: schematic of the multilayer neural network — input layer, Hidden Layer 1, Hidden Layer 2, and an output giving the pattern type or pattern parameters.]
of data points in this region (in percent), and (3) the average value of the data. This technique
maps the structural shape information of an object into a fixed feature vector of real numbers. It is
robust to object variations in position, orientation and size.
Development of Computational Neural Networks for Depth Estimation of Pitting Defects
A three-layer backpropagation neural network was trained using the preprocessed magnitude
and phase of eddy current pitting data for depth estimation. In the pitting data set, we chose 29
samples for training and 6 samples for testing. The neural network has 100 input elements, 50
hidden elements, and 1 output element. The learning coefficient was set to 0.2 for the first 5000
iterations, and 0.15 for the succeeding iterations. The momentum term was set to 0.4. The
normalized cumulative hyperbolic tangent transfer function was used as the nonlinear transfer
function. After 28,000 iterations, the normalized root-mean-square (RMS) error decreased to
0.00001. Figure 7 shows the recall results using the six recall data points. The RMS error for
the recall data was 0.03.

Figure 7: Recall results of pitting defect depth estimation.
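The training setup above (one hidden tanh layer, learning coefficient, momentum term) can be sketched in a few lines. This is a generic backpropagation illustration under the stated hyperparameters, not the original implementation, which used a normalized cumulative transfer function from the package employed in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(x, y, hidden=50, lr=0.2, momentum=0.4, iterations=3000):
    """Minimal one-hidden-layer backpropagation with tanh units and a
    momentum term, echoing the 100-50-1 network described above."""
    n_in = x.shape[1]
    w1 = rng.normal(0, 0.1, (n_in, hidden))
    w2 = rng.normal(0, 0.1, (hidden, 1))
    v1, v2 = np.zeros_like(w1), np.zeros_like(w2)
    for _ in range(iterations):
        h = np.tanh(x @ w1)                    # hidden activations
        out = np.tanh(h @ w2)                  # network output
        d_out = (out - y) * (1 - out**2)       # output-layer delta
        d_h = (d_out @ w2.T) * (1 - h**2)      # hidden-layer delta
        v2 = momentum * v2 - lr * (h.T @ d_out) / len(x)
        v1 = momentum * v1 - lr * (x.T @ d_h) / len(x)
        w2 += v2
        w1 += v1
    return w1, w2

def predict(x, w1, w2):
    return np.tanh(np.tanh(x @ w1) @ w2)
```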
Testing and improvement of fuzzy logic-based flaw detection system using an extensive ECT
database.
Analysis of the effect of noise in ECT data and its compensation.
ACKNOWLEDGMENTS
The research reported here has been sponsored by a grant from the Electric Power
Research Institute, Steam Generator NDE Program.
REFERENCES
[1] V. S. Cecco et al., "Eddy Current Testing," Vol. 1, pp. 138, GP Publishing, Inc., 1987.
[11] B. R. Upadhyaya and W. Yan, "Hybrid Digital Signal Processing and Neural Networks for Automated Diagnostics Using NDE Methods," NRC Final Report, NUREG/GR-0010, October 1993.
Summary: Every switching operation within a power substation is based on pre-established
conditions, which stem from engineering studies considering both equipment and component
limitations and constraints inherent to the switching itself in relation to the system
situation. Despite all their evolution, the analog solutions used in the blocking, control
and monitoring functions of switching actions do not eliminate the possibility of erroneous
interpretation of pattern conditions. With the ongoing digitalization of power substations,
the introduction of logic computer programs which perform these functions more safely, over
a wide range of distinct and adaptive configurations and conditions, becomes both opportune
and advisable.
This intelligent automatic system for switching operations is built with the support of
Expert System (ES) techniques, in such a way that the switching plan constituted through
the traditional logic is enriched and enhanced with the experience and heuristic knowledge
of the operators and dispatchers. Case studies which permit an inference analysis of the
problem solution, with an evaluation of the results obtained, are presented. The process
for the inclusion and validation of new knowledge or components in the substation system
is also discussed. Finally, a comparison of the proposed solution with the conventional
system and procedures is carried out, pointing out the advantages and limitations of this
expert system application.
1. INTRODUCTION
The introduction of digital technology and the development of Expert System (ES)
techniques have made the intelligent automation of switchings in substations (SS)
possible, consolidating the operating technology and adding advancements to the
supervision and control functions.
Any operating intervention entails the elaboration of a Switching Plan, in which the
actions and commands are sequentially linked. In the generation of the Switching Plan,
the equipment operating limitations, the operating constraints inherent to each command, the
Figure 1: Operative Architecture and Functions of a Substation.
(a) Management of Measurements; Diagnosis; Supervision; Control; Protection.
(b) Preventive Actions — State: Topology Management, Equipment Supervision, Operative State Surveillance, Loading Surveillance; Availability: Load Dispatch, Service Life, Operative Limits.
Control Actions — Voltage Control: Stability Control and Prevention, Tap Control, Safety and Integrity Function; Reactive Control: Capacitor Bank Switching, Control of Reactive Power; Switching Plan: Sequential Switching, Transfers, Isolation, Interlocking.
Corrective Actions — Alarm Handling, Oscillography (Recording), Load Shedding & Restoration Schemes, Analysis of Occurrences, Corrective Strategies for Emergency State Restoration.
[Figure: operative state diagram — NORMAL, ABNORMAL, DISTURBANCE and SAFE states, linked by preventive actions, control actions, corrective actions, elimination of the fault, and changes of state in view of the occurrence.]
locally.
The knowledge of the causal, temporal, and functional relationships among the
evidence, the hypotheses, and the parameters of the models which may be used in the
solution of an operational problem comprises the formal part of the Knowledge Base.
However, the majority of the details of the specialization tend to be grouped and
assembled in heuristic rules, generally developed in the mind of the experts through
extensive observation of typical results. Such rules may be combined and compared
among themselves, in order to reach a logically consistent, but inexact, solution
(Fuzzy Logic).
The heuristic rules attempt to replace the need to memorize details or particulars
of the SS or the Electric Power System (SEP), as well as to experience the whole range
of operative conditions, the possible contingencies and their restoration sequences.
They therefore allow decisions to be made on the basis of a wide knowledge, whether
formal, existential, factual or empirical, properly stored in the form of "facts and
rules" in the Knowledge Base.
Thus, the Knowledge Base is structured on the basis of Facts and Rules, which must
portray all the knowledge available about the SS, its components and equipment. It must
contain a detailed description of the SS and its main operational characteristics, such
as topology, static and dynamic attributes, switching schemes, restoration guidelines
and philosophies.
The Inference Engine enables the ES to infer knowledge, using the information
stored in its Knowledge Base, in order to obtain results which did not exist "a priori"
and so define the solution of a problem or subproblem. In the inference process, the
derived information is not completely new; it actually results from the
interrelationships of previously stored information.
4. SWITCHINGS IN SUBSTATIONS
All the switchings in SS follow operating criteria pre-established by engineering
studies, which consider both the switching equipment operating limitations and the
operating constraints inherent to the switching itself.
The supervision and control functions are conceived to monitor the switchings,
to safeguard the integrity of equipment and people, and to provide their correct
suitability to the operating technology [4].
Thus, in the face of the operating flexibility allowed by each topology, and
considering the context of the Power Electric System (PES) to which it belongs, a SS
may have an ES
automating its switchings, whether individual or part of a sequential switching
plan.
A switching can be broken down into a set of actions and commands of the kind:
action (open/close - block/unblock);
command (verify open / verify close).
Considering as an example the data of bay "1K", the output of a Transmission Line
(TL), as presented in Figure 3, the following signal set is obtained:
V(K)      voltage measured in bus K
V(1K)     voltage measured at the output of bay 1K
I(1K)     current at the output of bay 1K
1K3       disconnecting switch
1K5       disconnecting switch
1K5T      grounding disconnecting switch
1K6       bypass disconnecting switch
1K4       circuit breaker
1K50/51   line overcurrent protection
1K50/51N  neutral overcurrent protection

where, for example, 1K3 means:
1 - circuit number 1
K - voltage level 138 kV
3 - disconnecting switch near the bus
[Figure 3: single-line diagram of bay 1K (TL to SS2).]

SWITCHING
1. check the voltage in bus BK and in circuit 1K
2. check closed switches 1K3 and 1K5
3. check closed breaker 1K4
[items 4-7 not legible]
In the inference process for the actions and commands which will make up a switching
plan, the following steps are found:
a) to receive the switching request;
b) to identify the circuits involved;
c) to check the state of the equipment (energized/de-energized);
d) to check the selector switch settings;
e) to check the switching equipment setting (opened/closed);
f) to refer to the rules concerning the problem (interlocking, orientation and operating
constraints);
g) to lay out the switching plan, generating the necessary and sufficient actions and
commands in sequential order;
h) to validate the switching plan (simulation).
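The steps above can be sketched as a small pipeline over a fact base. Everything below is illustrative: the fact structure, states and ordering rule (open the breaker before the disconnect switches, a common practice) are our assumptions, not the ESRASE implementation.

```python
# Hypothetical fact base for bay 1K, loosely following the predicate
# structures given later in the paper; names and states are illustrative.
FACTS = {
    "switches": {"1K3": "closed", "1K5": "closed", "1K5T": "open"},
    "breakers": {"1K4": "closed"},
    "energized": {"1K": True},
}

def switching_plan(request, facts):
    """Generate a sequential plan for an 'open circuit' request: verify the
    equipment states, then open the breaker before the disconnect switches."""
    circuit, plan = request["circuit"], []
    if not facts["energized"].get(circuit):
        return plan  # nothing to do on a de-energized circuit
    for sw, state in facts["switches"].items():
        plan.append(("command", "verify %s %s" % (state, sw)))
    for br in facts["breakers"]:
        plan.append(("action", "open " + br))        # breaker first
    for sw, state in facts["switches"].items():
        if state == "closed":
            plan.append(("action", "open " + sw))    # then disconnects
    return plan
```

A validation step (h) would then replay the plan against a simulated copy of the facts before it is released to the operator.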
The conception and development of an automatic system able to switch a SS
necessarily undergo the analysis and evaluation of the previously established paradigms
and the definition of the problem-solving strategy. The characteristics inherent to the
problem recommend Expert Systems as the most suitable tool for its formulation and
solution [5].
5. EXPERT SYSTEM FOR POWER SUBSTATION RESTORATION - ESRASE
An Expert System (ES) is a program able to treat a certain problem, within a specific
domain, imitating the behavior of a human specialist in that field. The ES is based upon
knowledge (and not on data), and solves complex problems through inference over
knowledge structured with a high degree of flexibility, thanks to the associative access
to the rules [6].
The ES uses knowledge outlined by symbolic representation and applies heuristic rules
through deductive processes, which create inference paths toward the problem solution. Its
application is justified in cases where no established theory is available, where data and
information are doubtful, and in troubleshooting problems. Since control is separated from
knowledge, pieces of knowledge can be withdrawn, included, renewed, updated or modified
without causing any change in the program's operating structure.
By considering the circuit-breaker represented in Figure 3, with its respective
disconnecting switches, its topological and operating characteristics are stored as FACTS in
structures of the kind:
connection("1","K","B","K")
switching_switch("1","K","4","on")
selector_switch("1","K","4-43R","on")
protection("1","K","67","off")
measure("1","K",138,250,60,0.9)
limit_measure("1","K",128,144,-1,500,59,61,-0.8,0.8)
equipment("1","K",line_source("SE1","SE2","initiator"),"on")
where:
1 = number of the circuit
K = voltage level 138 kV
B = bus
4 = circuit-breaker
4-43R = reclosing selector switch
67 = overcurrent protection
138 = voltage measured
250 = current measured
60 = frequency measured
0.9 = power factor measured
128,144 = voltage limits
-1,500 = current limits
59,61 = frequency limits
-0.8,0.8 = power factor limits
line_source = equipment kind
SE1,SE2 = TL terminals
initiator = terminal energization characteristic
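These facts map naturally onto plain records. Below is a sketch (with hypothetical field names) of checking a measure fact against its limit_measure fact; the power factor is left out because its sign convention in the listing above is ambiguous:

```python
from collections import namedtuple

# Hypothetical record types mirroring the measure and limit_measure facts.
Measure = namedtuple("Measure", "circuit level voltage current frequency pf")
Limits = namedtuple(
    "Limits", "circuit level v_min v_max i_min i_max f_min f_max pf_min pf_max")

def within_limits(m, lim):
    """True if voltage, current and frequency all lie inside their limit bands."""
    return (lim.v_min <= m.voltage <= lim.v_max and
            lim.i_min <= m.current <= lim.i_max and
            lim.f_min <= m.frequency <= lim.f_max)

m = Measure("1", "K", 138, 250, 60, 0.9)
lim = Limits("1", "K", 128, 144, -1, 500, 59, 61, -0.8, 0.8)
# within_limits(m, lim): 138 kV, 250 A and 60 Hz are all in band
```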
In turn, the RULES describing the functional and operating characteristics of a
switching, in which the operating philosophy and the heuristic knowledge are stored,
form the Knowledge Base and have been structured through Production Rules in
standardized structures of the kind:
interlocking_switching_switch(CIRCUIT,SWITCHING,SWITCHES_OPENED,
SWITCH_CLOSED)
interlocking_measure(CIRCUIT,VOLTAGE,CURRENT)
philosophy_switching(CIRCUIT,CLASS,SWITCH,ACTION,CRITERIA)
criteria_switching(SWITCH,ACTION,ORIENTATION,CONSTRAINT)
From the FACTS and by using the Knowledge Base, the actions and the commands
which will make up the switching plan are inferred, in the following standard way:
switching_plan(SEQUENCE,CIRCUIT,VOLTAGE,SWITCH,SWITCHING)
where SWITCHING can be either an ACTION (open/close) or a COMMAND (check the
voltage presence).
Figure 4 presents the partial listing of the predicates of the rules which make up the
Knowledge Base.
interlocking_switching_device("3","on",["3","4","5T"],[]).
interlocking_measure("3"," ","A").
interlocking_measure("4"," "," ").
interlocking_measure("5T","V","A").
command_switching_device("1","K","5T","close")
"INTERLOCKING FAIL - Reason:
- 1K 3 close
- 1K 4 close
- 1K 5 close
- VOLTAGE (1K) = 138 kV
- CURRENT (1K) = 50 A"
"Close command 1K 5T locked"
7. CONCLUSIONS
The application of ES in the operation of the SS makes it possible to add considerable
functional capacity to the operating system. With the definition of a new set of operating
techniques, taking advantage of all the existing background knowledge, it is possible to
automate the SS operation.
The concepts of Information Technology and Process Re-Engineering should be
applied to the products and services. In order to establish a new operating paradigm it is
necessary to re-evaluate the purpose, orienting it towards processes rather than tasks. In
order to achieve the goals proposed, it is necessary to identify accurately the service to be
rendered and to elaborate the Information System architectures able to
support it.
Only in this way will it be possible to reach goals such as sustained evolution,
greater swiftness, efficiency, establishment of relational standards, staff qualification,
constant search for total quality and reduction of the operating costs.
8. REFERENCES
[1]
[2]
[3]
[4]
C.C. Liu and CIGRÉ TF 38-06-03 - "Practical Use of Expert Systems in Planning and
Operation of Power Systems", Electra, 1993.
[5]
[6]
Introduction
Advanced life management of power plant operating high temperature pressurised systems
and components is based on an interactive strategy of:
a) design and re-design complying with formal requirements of regulatory codes and
incorporating component life assessment (CLA ) by analysis of the component life
exhausted by creep and fatigue;
b) non-destructive examination of systems, i.e. reliability and/or cost centred
maintenance of systems, components and locations, in particular the determination
of optimal inspection intervals;
c) a multi-criteria decision making based both on the regulatory guidelines and
experience, i.e. heuristic knowledge, to manage the decision to replace, repair,
operate, reduce load or re-inspect the component(s) concerned.
The solution of this engineering problem requires information processes of different
natures, which range from exact algorithmic calculations and data processing to
less formalized heuristic knowledge, fuzzy logic and engineering decision processes. Each
of these processes can nowadays be represented or supported by modern information
technologies (IT) tools. The development of an IT architecture for plant life management in
which these different tools are combined, i.e. interfaced to functionally interact, is a
tremendous challenge.
In the background of the whole plant life management process and the corresponding IT tool
are the data sources (i.e. primarily those databases regarding material data, inspection data
and component/system data).
This paper tackles the issue of material databases, taking the High Temperature Materials
Databank (HTM-DB) as an example of "European databases" developed at JRC Petten and
widely used in several large European projects. One of the important problems to solve in
this architecture is the interfacing of the different materials information sources which are
needed in the form of databases and associated algorithm libraries.
2.
Four major groups of intelligent computer systems in the area of power plant maintenance
and diagnostics can be identified.
In the first group are the tools that can be broadly classified as knowledge-based (expert)
systems (KBS) (see Figure 1), and can be used (usually by an expert) to set a realistic
inspection or maintenance interval, e.g. Boiler Maintenance Workstation of EPRI (USA) for
certain boiler components, or SOAP (State-of-the-Art Power Plant System), see Dooley and
co-workers (1993), or the system developed in the European SP249 project (Jovanovic,
Friemann, 1994). These tools define the inspection intervals implicitly, i.e. they can usually
provide an engineering assessment of the corresponding remaining life. The user is then
supposed, in each particular case (i.e. usually per one component and/or location), to decide
the right time to re-inspect.
* Institute of Advanced Materials, JRC Petten of the European Commission, The Netherlands
** MPA Stuttgart, Germany
*** See the Glossary for this and all following acronyms.
Such KBS systems are currently developed at MPA Stuttgart under sponsorship of the
Association of German Electric Utilities - VGB, for instance the ESR System (Jovanovic,
Maile, 1990). Much of the recent research effort in Europe, USA and Japan has been devoted
to development of such KBS's applied in the fields of power plant and structural engineering.
Some of these systems are described in (Jovanovic, Gehl, 1991). In general, systems in this
group give only an implicit recommendation (based exclusively on engineering factors) on
when to inspect one component (KBS's for single problems).
Figure 1: Some KBS's used for single problems (the "first group" of systems): boiler
maintenance (e.g. EPRI-BMW), piping analysis (e.g. MPA-ESR), EPRI-HEATEXP, coal
quality impact (e.g. EPRI-CQIM), generator monitoring (e.g. EPRI-GEMS), vibration
advisor (e.g. IVO, EPRI), other systems...
The second and the third group are databases and database-like systems. These are
developed especially for material data, non-destructive testing (NDT) results and for plant
component/system data. The nature, i.e. confidentiality, of NDT results has led to the fact that
many of the NDT result databases have been developed by utilities. The databases for
component/system data are usually developed and delivered by component/system
manufacturers. In general, these systems give only the possibility to store data from previous
inspections and/or component/system data.
The fourth group are systems for component/system state monitoring. These systems are
developed both by manufacturers and by utilities. In general, these systems give only the
possibility to monitor more components, but the recommendation on when and how to inspect
is "mechanistic" and based on extremely simplified assumptions.
The desiderata for the further development of these systems towards the 'ideal system' (see
Figure 2) refer to the following objectives:
a) refinement and additional features
provide guidance on e.g. timing of outages, optimal intervals between outages, range,
scope and methods of required inspections,
consider variable operational conditions,
include heuristic knowledge,
Figure 2: Ideal system (KBS) linked with other databases and other KBS's (inspection
results databases, internal material database SP 249, external material database HTM-DB,
plant engineer / analyst)
3.
96
module. The whole system is designed as an engineering "tool box", built on top of
commercially available software (Jovanovic, Friemann, 1994)
Object oriented programming (OOP) appears both at the level of the overall SP249 KBS
architecture (each part of the system is an object exchanging messages with other ones) and
at the level of its single parts. The architecture allows new modules to be introduced, or the
existing ones to be reorganised, at any time. The hypermedia-based parts/modules "cover" the
background information built into the system:
the CLA guidelines,
frequently used codes,
standards and other documents,
case studies.
The system covers:
decision making according to SP249 CLA guidelines, i.e. a decision aid for
making the "3R decision": replace, repair, run,
(This decision is based partly on the regulatory guidelines, partly on the
experience and heuristic knowledge incorporated into the CLA guidelines.)
recommendation regarding the annual inspection (revision),
damage analysis,
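The "3R decision" can be pictured as a small rule table. The thresholds and inputs below are invented for illustration and are not the SP249 CLA guideline values:

```python
def three_r_decision(life_fraction_exhausted, damage_found):
    """Toy decision aid for the 3R decision (replace, repair, run).
    The real SP249 guidelines combine regulatory rules with heuristic
    knowledge; these thresholds are purely illustrative."""
    if damage_found or life_fraction_exhausted >= 1.0:
        return "replace"
    if life_fraction_exhausted >= 0.8:
        return "repair"   # or run at reduced load with a shorter inspection interval
    return "run"
```

In the real system such a rule would be only one input to a multi-criteria decision, alongside the regulatory guidelines mentioned above.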
Using the system the user is supported by an "intelligent environment", helping him to:
1) retrieve data (about material, component, etc.),
2) evaluate/calculate data,
3) retrieve necessary standards,
4) obtain advice,
5) find an optimised solution for his problem (see Figure 3).
Materials data retrieval (1) and evaluation (2) can be done within the sophisticated
procedures of the HTM-DB as an external KBS functionality and/or on the KBS side within
its internal functionality.
Figure 3 (KBS linked with the HTM-DB; Larson-Miller evaluation)
Pool 3: Materials data from surveillance, in-pile & out-of-pile and in-plant tests (in
nuclear power plants primarily): Such experimental test results can be administrated
within the HTM-DB and used for damage assessment in reference to the information
coming from quality control tests or even other sources.
For safety reasons the authority can demand additional in-plant or in-pile tests and
surveillance tests for critical components to assess the damage and the irradiation
embrittlement or to guarantee the component integrity after emergency conditions.
Such requirements can arise for the following cases:
I. surveillance tests within nuclear power plants to guarantee the integrity of
components
the catastrophic failure of which could endanger the population in the
surrounding areas,
which are earthquake exposed,
which must be secured against extraordinary emergency conditions.
II. in-pile & out-of-pile tests within nuclear power plants to secure against
embrittlement of components
which are irradiated.
III. in-plant tests within conventional power plants to secure against damage of
components
with complicated weldments in high temperature and/or stress exposed areas.
Pool 4: Materials data from quality control tests (in nuclear power plants primarily): Such
test results can be administrated within the HTM-DB. During the life-time of a power
plant they can be used as reference data for damage assessment of components.
The code cases demand quality control materials tests such as tensile and Charpy-V
impact tests at different positions of the components to guarantee that the component
conforms to specification. Normally the measured test results are entered in the
component certification forms which are stored in thick files. The power plant
suppliers can administrate these materials data of all their plants within the HTM-DB. In
doing so, the company has fast access to the data and can easily use these data, together with
material information coming from other sources and/or their evaluated parameters, for
life-time analysis.
Figure 5 shows the correlation of the four material pools with KBS based life assessment and
management procedures. The materials data examined in quality control tests (pool 4) must
be compared with those coming from special in-plant tests, in-pile & out-of-pile tests and
surveillance tests (pool 3) and Non-Destructive Testing (NDT) from plant inspections to
assess the material damage, the irradiation embrittlement or the component integrity after
in-service inspections or emergency conditions.
Figure 5: Possible data pools of the HTM-DB for the Plant Life Management (component
integrity assessment with inspections, elastic/inelastic analysis, damage & failure reference
data, NDT results and emergency conditions; Pool 1: materials data from national and
international standards; Pool 2: materials data from standards; Pool 3: materials data from
surveillance tests; Pool 4: materials data from quality control tests)
4.
The HTM-DB is a computer-based system for the storage and evaluation of mechanical and physical
properties of engineering alloys such as tensile, creep, fatigue, fracture mechanics or Young's
modulus, thermal expansion, that are mostly used in high temperature technology. Although this is
the main scope, the HTM-DB is not limited only to high temperature materials applications (Over,
Kröckel, Guttmann, Fattori, 1993).
The HTM-DB computerizes the scientific process of engineering data generation from
material testing through the functions of data organization, data validation, quality control,
model-based and statistical data evaluation, to the presentation of material parameters which
find use in engineering algorithms.
The database structure covers all engineering alloys and their testing at any temperature for
time independent and time dependent materials behaviour. Its emphasis is on data from
standardised tests and on evaluation methods which are well established and widely accepted.
The database and the evaluation programs are oriented to international material standards and
recommendations.
Besides the experimental materials data, the HTM-DB offers materials data from standards as
additional numerical and graphical information (see Figure 8: Data catalogue). Table 1
shows the HTM-DB materials data content. There is a big difference between the
experimental materials data and the data from standards. The records of the experimental
materials data are measured data and contain, as a minimum, all mandatory information of
data source, specimen, material and test control. In most cases much more information such
as grain size & hardness is provided. The materials data from standards, however, are
average data and contain the mandatory materials information only.
Table 1: HTM-DB materials data content

                      Standard materials data    Experimental materials data
  Number of records   approx. 2000               approx. 6000
The data management and evaluation functions can be applied to mechanical and physical
property test results reported by test laboratories in defined format and quality. Such test
results can be entered and stored in the "databank" component of the system where they can
be accessed and handled with typical databank routines and from where they can also be
taken to data evaluation by the other component, the "evaluation program library".
The HTM-DB evaluation program library is linked with the User-Interface. It contains
specific evaluation programs for data on the mechanical properties stored by the system.
Most of the specific property evaluation programs allow fitting of mathematical models,
constitutive equations, parametric expressions and regression functions to test result data. In
general the results are best fit parameters and statistical information about the data, such as
correlation coefficients and standard deviations. The evaluation programs are programmed in
VBA and C and implemented under Microsoft Windows on the PC-side with access to
Microsoft Excel for Windows. They can be selected from windows in the User-Interface.
The Norton Creep Law is shown as an example of such an HTM-DB evaluation program. It
is valid for ductile material behaviour and describes the relationship which exists between
characteristic creep rates or rupture time and the applied stress. The program has the
following analysis options:
minimum creep rate - applied stress dependence,
(It can be calculated from the creep curve data supplied by the laboratories using the
'Seven point fitting method' as defined in ASTM E 647 or, alternatively, from the
delivered minimum creep rates.)
steady state creep rate - applied stress dependence,
(It can only be calculated from delivered values.)
average creep rate,
(It is the rupture strain, εr, divided by the rupture time, tr, and is higher than either the
minimum or steady state creep rate.)
creep rupture time - stress dependence,
(The program automatically sorts data according to isothermal criterion. Each
regression line ends at the minimum and maximum stress levels encountered at any
particular temperature.)
The user also has the option to remove minimum creep rate points which are not consistent
with the general trend, whereupon the creep law will automatically be re-evaluated. By
selecting a single point from the chart, the creep rate data can be compared with those of
neighbouring points in the data set. Through comparison of adjacent curves the user is able to
establish the reliability of each individual calculated creep rate value. An example of
calculated creep rate minimum data is given in Figure 6.
Besides the analysis options, the user has access to all intrinsic Excel functions for data
storage and data processing.
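The Norton law, rate = A·σ^n, is linear in log-log coordinates, so each analysis option above reduces to a straight-line regression per isotherm. A self-contained sketch with synthetic data follows; the HTM-DB itself implements this in VBA/C under Excel:

```python
import math

def fit_norton(stresses, creep_rates):
    """Least-squares fit of log10(rate) = log10(A) + n*log10(stress),
    i.e. the Norton law rate = A * stress**n; returns (A, n)."""
    xs = [math.log10(s) for s in stresses]
    ys = [math.log10(r) for r in creep_rates]
    x_mean = sum(xs) / len(xs)
    y_mean = sum(ys) / len(ys)
    n = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) /
         sum((x - x_mean) ** 2 for x in xs))
    A = 10 ** (y_mean - n * x_mean)
    return A, n

# Synthetic isotherm generated with n = 10, so the fit recovers it exactly.
stresses = [100, 150, 200, 300]              # applied stress, MPa
rates = [1e-25 * s ** 10 for s in stresses]  # minimum creep rates, 1/h
A, n = fit_norton(stresses, rates)
```

On real laboratory data the same regression also yields the correlation coefficient and standard deviation reported by the evaluation programs.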
Figure 6: Calculated minimum creep rate versus applied stress; isotherms from 450°C to
625°C with fitted Norton exponents n (e.g. n = 13.73 at 500°C, n = 11.72 at 525°C,
n = 10.56 at 550°C, n = 9.17 at 600°C, n = 6.37 at 625°C)
Figure 7: Petten Server replication architecture (validation by the data supplier; quality
control by JRC Petten)
The User-Interface and the evaluation programs are installed on the PC of the client. Output
options are available as "reports" (tabular presentations) and "table & charts" using
spreadsheet options. The data entry function and data transfer options to and from the Petten
Server are also available as parts of the User-Interface (see Figure 8).
To guarantee data confidentiality, the access rights of the user to his local database and the
database of the server are controlled by his passwords and user identifications.
The User-Interface requires minimal user training. It uses advanced windowing techniques
to assist the user in formulating his queries. Typing mistakes and non-relevant queries are
avoided. It furthermore eliminates syntax errors by making the syntax fully transparent to the
user. The complicated SQL string with all the links between the HTM-DB entities is
gradually and automatically built up. Active windows are shown in the foreground with a
blue frame whereas inactive ones are shown in the background with dark gray frames,
depending on the Microsoft Windows setup. The buttons used conform to
Microsoft Windows standards (Over, De Luca, 1993).
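Building up the SQL string gradually from window selections can be sketched as follows; the table and column names are invented and are not the actual HTM-DB schema:

```python
def build_query(selections):
    """Compose a SELECT from menu picks so the user never types SQL: each
    pick contributes one WHERE clause, keeping the syntax valid by
    construction, with values passed as bind parameters rather than inlined."""
    sql = "SELECT * FROM creep_test"           # hypothetical table name
    if selections:
        sql += " WHERE " + " AND ".join(f"{col} = :{col}" for col in selections)
    return sql, selections                     # statement plus bind parameters

sql, params = build_query({"material": "X20CrMoV12-1", "temperature": 550})
```

Because the string is assembled clause by clause from validated picks, syntax errors are impossible by construction, which is the point made in the paragraph above.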
5.
There are several concepts for the use of a materials databank for computer-aided in-service
component life assessment. The HTM-DB, for instance, has participated in the BRITE 1209
project to predict and extrapolate the component service behaviour under stress at high
temperature. The proposed use of the databank within such a materials information system
(Kröckel, Westbrook, 1987) is shown in Figure 9. The databank and its evaluation programs
& models interact in a dynamic way with the FEM processor to deliver the data and
evaluated constitutive parameters for the stress-strain-life analysis. In a similar way the
HTM-DB will operate as a dynamic, "external" databank within the KBS system.
The HTM-DB represents many years of experience and expert knowledge in database
management, programming, material science and soft- & hardware applications. A
standardised database structure, a user-interface which offers intelligent user guidance and an
extended evaluation program library are incorporated in the HTM-DB. They enable the user
to easily retrieve material data coming from different pools and evaluate the data on their
relevance and quality before transferring them into the fixed data lists and calculations of the
KBS, as shown in Figure 3 for the SP 249 programme. Within a replication client/server
application (see Figure 7) the customer also has access to all released data, which are
regularly updated.
A database management system requires a query language to enable users to access data. Structured Query
Language (SQL, pronounced 'sequel') is the language used by most relational database systems. The SQL
language was developed in a prototype relational database management system, System R, by IBM in the
mid-1970s. Information Technology - Database Languages - SQL, ISO/IEC 9075:1992 (E)
Figure 9: Conceptual scheme for linking of a material properties databank with
stress-strain-life analysis by finite-element computation (HTM-DB: query processor,
DBMS, validated data file, evaluation programs & models; FEM processor: user, geometry,
query variables, design loads, constitutive parameters and equations, stress-strain damage,
finite element model (FEM), structural design, life projection)
An expert using a KBS needs fast solutions for his problems especially if damage or failure is
recognised on critical components. Then often an inelastic analysis for re-design of the
component and for definition of new inspection intervals is necessary to continue the
operation. If, for instance, in nuclear power plants, crack propagation is detected at a high
stress and temperature exposed weldment of a critical component, a fast analysis is requested
to decide whether the weldment can be repaired or not. In conventional power plants, the
inelastic analysis can improve e.g. the assessment of creep-crack growth and/or relaxation
effects. Therefore a databank which is linked to a KBS needs to contain the corresponding
data and to allow the data access at the same speed as the system itself.
Two years ago this high-speed response could not yet be provided by the HTM-DB, neither
from a PC/workstation client/server system nor from a standalone PC as used for on-site
plant conditions. Due to the hardware & software conditions the data access was too slow. In
the meantime these conditions have changed. Instead of about 4 minutes (2 years ago),
nowadays a Pentium PC needs 10 seconds to retrieve the same HTM-DB data content from
the local PC database (DBMS: SQL*Base). Similar speed of access is given for the
evaluation of the data. A Larson-Miller extrapolation within the HTM-DB (Over, Kröckel,
Guttmann, Fattori, 1993), for which the data must be transferred from the data retrieval part
of the User-Interface to Microsoft Excel, is nowadays a task achievable in the order of
seconds.
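A Larson-Miller extrapolation of the kind mentioned reduces to evaluating P = T·(C + log10 t_r) and inverting it at another temperature. A minimal sketch follows; C = 20 is the conventional default constant, which the HTM-DB may well replace by a fitted value:

```python
import math

def larson_miller(temp_k, rupture_hours, C=20.0):
    """Larson-Miller parameter P = T * (C + log10(t_r)), T in kelvin."""
    return temp_k * (C + math.log10(rupture_hours))

def rupture_time(temp_k, P, C=20.0):
    """Invert the parameter to extrapolate rupture time at another temperature."""
    return 10 ** (P / temp_k - C)

P = larson_miller(823.15, 10_000)   # a 10 000 h rupture test at 550 °C
t_525 = rupture_time(798.15, P)     # predicted life at the lower temperature 525 °C
```

The same parameter computed at a lower temperature yields a longer predicted rupture time, which is exactly the extrapolation used for remaining-life estimates.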
The dynamic link between the KBS and the HTM-DB is already made as shown in Figure 3.
Data which are retrieved and evaluated in the HTM-DB are transferred to the KBS-"internal"
SP 249 databank and entered into the pre-arranged data sheets.
6. Conclusion
Using the HTM-DB in combination with KBS-based plant life management for the power
plant industry, from the planning phase up to operation, it can be fed with data from
acceptance tests and additional mechanical testing of the respective power plant components,
from similar components of other power plants and from safety experiments. It will therefore
provide the main material data input to the KBS. Recent hardware technology makes it
possible to match the response speed of the KBS.
In a situation where the speed of the decision process is vital, methods other than
computerised ones are becoming inadequate, both for component life assessment and
material data retrieval & evaluation. Any hardware & software costs are by far outweighed
by the savings in operation time and plant availability.
The development of an architecture for this integrated system is well advanced, and the
functional demonstration is the next goal.
7. Literature
ACT (1992). Advanced Computer Technology Conference 1992, held in Phoenix, Arizona,
December 9-11, 1992, Proceedings, Vol. 1 and 2, published by EPRI, Palo Alto, US,
December 1992
Dooley, R. B., McNaughton, W. P., Viswanathan, R. (1992). Life extension and component
condition assessment in the United States, Proceedings of the VGB Conference on
Assessment of Residual Service Life, July 6-7, 1992, Mannheim, Germany
HTR Regelwerk (1984). "Erarbeitung von Grundlagen zu einem Regelwerk über die
Auslegung von HTR-Komponenten für Anwendungstemperaturen oberhalb 800°C",
Kernforschungsanlage Jülich, Jül-Spez 248, ISSN 0343-7639, März 1984
Jovanovic, A., Friemann, M. (1994). Overall structure and use of SP249 knowledge based
system, Proceedings of the 20th MPA Seminar, vol. 3, MPA Stuttgart
Jovanovic, A., Friemann, M., Kautz, H. R. (1992). Practical realisation of intelligent inter-
process communication in integrated expert systems in materials and structural
engineering. Proc. of the Avignon '92 Conference Expert Systems and their
Applications (Vol. 2, Specialised Conferences), Avignon, pp. 707-718
Jovanovic, A., Kussmaul, K. F., Lucia, A. C., Bonissone, P., Eds. (1989). Expert systems in
structural safety assessment, Lecture Notes in Engineering, vol. 53, Springer-Verlag
Kröckel, H., Westbrook, J. H. (1987). "Computerised materials-information systems", Phil.
Trans. R. Soc. London A 322, 373-391
Nickel, H., Schubert, F., Penkalla, H. J., Over, H. H. (1983). "Mechanical Design Methods
for High Temperature Reactor Components", Nuclear Engineering and Design 76
(1983) 197-206
Over, H. H., De Luca, D. (1993). "Intelligent User Guidance for the HTM-DB", 12th
International Conference on Structural Mechanics in Reactor Technology, Post-
Conference Seminar No. 13, "Knowledge-based (Expert) System Applications in
Power Plant, Process Plant and Structural Engineering", Konstanz, Germany, August
23-25, 1993
Over, H. H., Kröckel, H., Guttmann, V., Fattori, H. (1993). "Data Management with the High
Temperature Materials Databank", 12th International Conference on Structural
Mechanics in Reactor Technology, Stuttgart, Germany, August 23-25, 1993
8. Glossary
MIHAEL GRUDEN
TECHNOLOGY AWARENESS DISSEMINATION IN EASTERN
EUROPE WITH INTELLIGENT COMPUTER SYSTEMS FOR
REMAINING POWER PLANT LIFE ASSESSMENT - EUROPEAN UNION
PROJECT TINCA
Abstract
Partners from Russia, Hungary and Slovenia are preparing an advanced intelligent computer
system together with MPA Stuttgart. The scope of the European Union project is to
disseminate the advanced technologies for remaining life assessment of power plant
components to Eastern Europe. The power plants in Eastern Europe are critically in need of
such assessment, since decisions must be made to prolong the operating life of the plants and
propose environmental solutions to the plants where applicable. A knowledge and experience
exchange will provide a case and data base for materials and practices used in Eastern Europe
for the future benefit of Eastern and Western European participants.
1. Introduction
The changes in the Eastern European countries have a long-term orientation towards a market
economy. The previously strong central planning had its effect on the state of component
life assessment in the utilities. The utilities now run on low-budget programs where neither
life assessment nor any serious maintenance is performed. In the best cases, the power plants
received planned material, replacement parts and work-force to perform component repairs,
needed or not.
All the power plant engineers are now facing decisions to keep their plants in operation.
The growing weight of environmental requirements adds to this dramatic situation.
Many older plants in Eastern Europe have insufficient equipment to operate within
prescribed local pollution limits. Power plants are forced to burn inadequate fuel, violating the
pollution standards and operating procedures prescribed by the equipment manufacturers. The
minimal safety requirements for their operating staff and local residents are neglected.
The failure of high temperature pressurized components is a critical issue. The component
lifetime assessment (CLA) of power plant systems is a vital activity for the engineers,
maintenance and operating staff. The plant engineers are confronted with the responsibility of
deciding what to do with the high temperature pressurized component and/or the plant itself
(for example, stop and re-inspect, reduce load, replace, etc.). Improvements of the methods
and procedures used for the assessment and management of remaining life, reliability and safety
of high temperature pressurized components are therefore extremely important. The level of
practice varies from country to country. Where many of the power plants were built by western
companies under license or in cooperation, the level of CLA reflects the then-normal minimal
level of the country of origin at the time of commissioning.
On the power plant level of the participating utilities, two benefits may be expected:
Extension of plant service life and
Reduction of maintenance cost.
These two goals can be met with a high-cost/low-risk approach, as opposed to the high-risk/
low-cost approach possible with the limited funds in the new economies of the East European
utilities. Commonsense compromise solutions can be founded on modern CLA and life
expectation estimation methods. Detailed benefit analyses indicate additional benefits:
Reduction of energy production cost,
Improved power plant safety,
Reduced environmental damage,
Standard plant component life assessment practice.
All these activities in the Western expert community are today more and more often supported
by complex expert or knowledge-based systems.
Much of the recent research effort in Europe, USA and Japan has been devoted to the
development of such systems.
Examples of important current developments at the (west) European level are:
ESR International (Expert System for Remaining life assessment), ESR VGB (members of
the VGB, the German Technical Association of Large Power Plant Operators) and the
SP-249 System.
In Eastern Europe these modern tools are virtually unknown and hardly used to any appropriate
extent. The systems available are therefore not used in Eastern Europe in spite of huge
needs, caused by outdated technologies and maintenance concepts.
MPA offered to coordinate and guide partners from some East European countries to
participate and form a new East Europe oriented software expert system with the shortened
name TINCA, derived from its long official name: Enhancing Technological awareness and
technology transfer in the area of advanced INtelligent Computer systems for the Assessment
of the remaining life, reliability and safety of power plant components.
These technical notes establish the expertise foundation for interested parties to assess the
possibilities and benefits for the use of the existing knowledge-based systems for a particular
application.
2.1.3 Establishment of information booths
MPA and the partners in Eastern Europe provide the Eastern European users with access to
the program systems. In order to achieve this goal the partners organize or establish contact
booths in Eastern Europe. Every project partner from Eastern Europe will take care of this
task in his area and will become a technology dissemination center for the spreading of the
advanced software systems in Eastern Europe. The project partners from Eastern Europe are
trained on the knowledge-based systems by MPA Stuttgart in the subject of installation,
operation and application, maintenance and update procedures. After that, the partners can
perform the training for other interested parties in their country, area, region...
2.2 Adaptation of software modules to the special needs in Eastern Europe
An essential emphasis is the adaptation of the existing software packages to the particular
conditions in Eastern Europe. The concept is to develop several additive modules, which can
be linked into existing knowledge-based systems. For example, if an interested party buys the ESR International system, it can add these modules to enable direct comparison of Western and Eastern standards, guidelines, data and methods.
The basic organization of the expert software will be adapted, but common software will enable widespread use by partners without dedicated hardware.
2.2.1 Database modules
Different materials used in power plants in Eastern Europe require the development of new database modules containing the relevant data for these materials. The database of materials from Eastern Europe will allow comparison of Eastern and Western materials and their properties.
2.2.2 Hypermedia modules
The system should allow the comparison of Eastern European and advanced (West European) codes, standards, methods and procedures, which are to be integrated in the system. With reference to the existing guidelines and standards in Eastern Europe, additional software modules in hypermedia format have to be developed, qualified and integrated with the existing software. Furthermore, it is necessary to integrate typical case studies from power plants in Eastern Europe into the software system.
2.2.3 Pilot System for Eastern Europe
A pilot expert system for Eastern Europe will be created, including additional modules with special features for explaining and demonstrating the expert system to interested parties in Eastern Europe.
The demonstration tutor is also designed to handle the languages of the partners in Eastern Europe (multiple-language add-on module), so that the language gap to plant engineers in practice is largely bridged.
2.3 Technology transfer to Eastern Europe
Methods of technology transfer can be adapted to the existing level of knowledge at each partner. The level of translation into the native language necessary for the instructors to reach a proper level of understanding will be managed with modern programming techniques.
2.3.1 Preparation of seminar program
The partners of Eastern Europe will take the role as multipliers for their countries to
disseminate the information and to consult interested parties. Therefore MPA Stuttgart will
prepare a seminar program together with the partners from Eastern Europe to inform plant
engineers about the usability of knowledge-based systems for questions of maintenance.
2.3.2 Organization of seminars
Every project partner from Eastern Europe will collect the addresses of the power plants in his country, and organize and carry out at least one seminar together with the interested plant engineers.
Each East European partner (KORONA, MISKOLC, ERKAR, LENERGOREMONT) has
sent an engineer responsible for preparation of local seminars and other technology transfer
measures to the project coordinator (MPA) for at least 8 to 12 months. These steps would take
place between month 3 and month 18 of the project.
3. Activity plan

Task - Description
Year 1: Preparation and dissemination of information
1.1 Leaflet - select and prepare information; set up leaflet
1.2 Technical notes and special reports - select and prepare information; produce the notes and reports
1.3 Establish contact booths - questions of software licenses, software maintenance and updates; consulting of interested parties
5. First results
The first efforts were devoted to elaborating the complete task list and schedule for the first- and second-year activities. In the field of software development, communication, database exchange and case-study analysis have shown more problems than expected. The basic approach to the opening structure is shown in Figure 1.
Reflecting these needs, the work started with compiling the databases:
1. Material databases: chemical composition, physical properties, temperature test data, structural data, etc.,
2. Standard materials from domestic steel producers, including cross reference with similar Western materials,
3. Design: standards, procedures, codes, guidelines used in the partner's area,
4. RLA methods: standards, procedures, codes, guidelines used in the partner's area,
5. Inspection planning methods and guidelines.
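For illustration, the cross-referencing of Eastern and Western material designations described in items 1 and 2 could be held in records of the following shape. This Python sketch is hypothetical: the data structure and lookup function are not part of the TINCA system; the example grade 13CrMo44 and the equivalent designation 1.7335 are taken loosely from the database window reproduced later in this paper.

```python
from dataclasses import dataclass, field

@dataclass
class MaterialRecord:
    """One entry of a (hypothetical) TINCA-style material database."""
    name: str                      # designation used in the partner's country
    country: str                   # country code, e.g. "SI"
    composition: dict              # element -> weight %
    western_equivalents: list = field(default_factory=list)

def equivalents(db, name):
    """Cross-reference an Eastern grade against similar Western materials."""
    for rec in db:
        if rec.name == name:
            return rec.western_equivalents
    return []

db = [
    MaterialRecord("13CrMo44", "SI", {"C": 0.13, "Cr": 0.9, "Mo": 0.45},
                   western_equivalents=["1.7335"]),
]
```

Such records would sit behind the comparison modules of Section 2.2.1, with one file per partner country.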
Figure 1: Database navigation window (Database Mode: All Countries, Top-Bottom; Country → Utilities → Plants → Blocks; Back to Main)
Figure 2: Material database window showing chemical composition data for material 13CrMo44 (ferritic steel; country SI, Slovenia; producer Ravne; equivalent designation 1.7335). Note/scope: steel resistant at temperatures up to 500 °C; tubes and component parts for steam service.
Country code: HUN. Location: 1087 Budapest.
The Centre has been jointly established by the CEC and the Government of Hungary as a foundation. Purpose: to strengthen the co-operation within Hungary, and between Hungary and the EC, in the field of energy management, in particular energy conservation and energy efficiency, covering areas such as training, education and information, TECHNOLOGY TRANSFER, and organisation of exchanges of experiences in energy planning and forecasting. In addition, the Centre has since Nov 1992 taken on responsibility for the THERMIE technology dissemination program in Hungary. It is strongly recommended that the Centre should be approached immediately in order to establish a link with Hungarian programmes for power plant engineering. While immediate profits might be low, sending experts from the Consortium could provide a decisive strategic advantage. Being European-minded, Hungarians should be encouraged to participate in the SPRINT idea by not letting in the US or other competitors...
Figure 3: Example of a database file window for a Hungarian utility center
6. Conclusions
The partners of the TINCA project started their work together in March 1995. The activities are progressing apace and the participants are vigorously trying to fulfil all current obligations. Gradually, the actual state of the art in the field is emerging. The participants will have the material for the opening presentation of the expert system under development prepared in time. More interesting details will be presented at the next meeting and could serve as a base for future expert system development in other countries.
7. Literature
1. Work documents of the TINCA project, 1994/1995.
2. Mihael Gruden, Urban Jan: Experience and improvement of power plant operation due to continuous monitoring of boiler drum life. International Conference on Life Management of Power Plants, Edinburgh, UK, 12-14 December 1994, Proceedings.
3. Mihael Gruden, Angelo Br{~i~: The low capital engagement approach to the pollution control of Termoelektrarna Toplarna Ljubljana. Energy and Environment, Opatija, Croatia, 1994.
CHAPTER 3
INTRODUCTION
The first phase of component assessment generally comprises a code-based calculation of life
consumption and a review of operating and maintenance history. Should the component prove
unacceptable on any of the indicated criteria, then the assessor should move to a hands-on
inspection, involving conventional NDE methods and various metallographic techniques. This paper
outlines approaches available to assess the levels of damage and degradation present and to
determine the remaining life of the component. The particular methods required to predict the
behaviour of any defects found are also described.
The overall philosophy is to identify the physical metallurgical processes that are either directly life
limiting, eg creep cavitation, or are correlated with life consumption, eg thermal softening. For each
process, directly observable indicators are selected that can be used qualitatively or quantitatively in
life prediction. Interactions between processes are also considered.
In the presence of a defect, this philosophy is extended to include the effects of different loading
regimes on initiation, growth and fracture processes.
Components exposed to high temperature conditions, under which creep and other time dependent processes occur, will suffer degradation of their properties over periods of extended service. In low alloy steels, creep damage leading to failure results from
i) the nucleation, growth and coalescence of grain boundary cavities, and
ii) microstructural degradation (thermal softening) of the matrix.
Both processes normally occur simultaneously; the prevalence of either is determined by the initial structural state and purity, and by the conditions of stress and temperature. Metallographic methods of component life assessment are designed to generate information on these processes.
Microstructural degradation and corresponding thermal softening of the base metal can result in a
variety of microstructural effects in the steel, such as changes in composition, structure, size and
spacing of carbides; in ferrite composition; in solid-solution strength; and in lattice parameter.
Carbide characteristics, measured by direct observation or indirectly by hardness measurement, have
proved the most sensitive indicator of thermal degradation. Creep damage (cavitation) can be
measured directly.
Changes in grain-orientation also occur with strain during elevated temperature service and these can be monitored by direct observation. However, this technique has not yet reached the stage where it is suitable for routine component assessment.
Implementation of the metallographic methods can be done by removal of samples or non-destructively 'in-situ' by replication. Although samples can be removed from most components, there are situations in which 'in-situ' replication may provide the only possible approach to microstructural evaluation, eg when the removal of a sample is geometrically difficult or is liable to affect the integrity
of the component, or when repeated observations are required. The two major applications of
replication techniques are (1) the study of microstructure (creep cavitation, grain size, etc) using
surface replication and optical microscopy, and (2) the examination and identification of small
second-phase particles by extraction replica techniques, as, for example, for the purpose of
interparticle spacing determination.
METALLOGRAPHIC TECHNIQUES
3.1
Surface Replication
Replicas can provide information on the condition of the material from which a component is made.
They are non-destructive and can be taken from any accessible point. They do, however, only
provide data relevant to the surface of the component. Samples extracted for metallographic
examination provide similar information, through the wall thickness, but at a limited number of
positions only.
The methods of interpretation described here apply to both replicas and
metallographic samples. The information obtainable includes:
State of Degradation
Precipitate growth and spheroidization
State of Damage
Extent of creep cavitation and cracking.
Qualitative and quantitative methods of assessment are available and provide information that can be
used directly in life prediction.
3.2
Hardness Measurement
Hardness measurement can provide information on the state of degradation of ferritic steel
components. It is a non-destructive technique that can provide data on any accessible point on the
component surface. Similar data, through the wall thickness, can be obtained from extracted
samples. The information obtainable includes:
* State of Degradation: indirect measure of overall precipitate size and spacing
* Cross-weld Hardness Differential: indirect measure of creep strength differences
* Temperature estimation
* Qualitative life prediction
* Quantitative life prediction
* Weld failure location prediction
The same standard of surface preparation is required for hardness measurement as for replication.
3.3
Carbide Extraction Replicas
Carbide extraction replicas can provide information on carbide precipitate particle characteristics, specifically:
* Carbide spacing
* Carbide size and morphology
* Carbide composition
* Temperature estimation
* Qualitative life prediction
4.1
Qualitative Techniques
State of degradation
The microstructure of low alloy, creep resistant steels evolves with time at service temperatures, the
most obvious visual change being the coarsening and spheroidization of the carbide precipitates. This
is shown schematically in Fig.1. The precise evolution is dependent upon the initial, as fabricated,
state. More detailed schemes taking into account both grain boundary and grain interior precipitates,
have been developed for base material and for heat affected zones (Ref.10).
Damage location
Creep damage - cavitation or cracking - must be assessed correctly both for fitness for service
evaluation and for life prediction. In the case of weldments, it is an important part of damage
assessment to determine the microstructural region in which damage occurs. If several regions are
damaged, they should be assessed separately.
The structures occurring in a low-alloy steel weldment are determined by the temperature profile and
can be related to the iron-carbon phase diagram. If the weldment is subsequently renormalised, then
uniform fine-grained structures are produced throughout, with traces of the weld-beads visible on
etching as a consequence of slight differences in chemical composition.
On examination of a replica, the microstructural regions in which damage occurs should be noted and
the orientation, with respect to the weld, and general distribution of the damage recorded, prior to
formal quantification. The distance between the damage and the fusion boundary, or similar
unambiguous feature, should be given.
On examination of a sample, the same information should be recorded, together with the
position/variation of damage through thickness or association with the cusp region in the weld.
The location of the damage determines the quantification route to be used.
Damage classification
Various schemes of qualitative damage classification have evolved from the original proposals of
Neubauer (Ref.4). These have attempted to improve precision, increase the applicability to a range
of steels and microstructures and incorporate the effects of other forms of degradation (Ref.5,6).
These may be harmonised as shown in Fig.2 (Ref.11). This is intended to allow direct comparison between the different schemes and to enable historical data, recorded according to the simpler methods, to be re-interpreted in line with the newer.
4.2
Quantitative Techniques
Two quantitative measures of damage are used - the A parameter and the cavity density - and these may be applied to the weld metal, the coarse grained HAZ and the parent material. The A parameter is determined by counting boundary intersections along line traverses, according to the following rules.
Rule 1:
An intersected grain boundary is only observed between the first triple point on either side of the intersection. If the boundary extends beyond the field of view then the point at which it leaves is treated as the triple point.
Rule 2:
A boundary is classified as DAMAGED if one or more cavities or cracks are observed on it; otherwise it is classified as UNDAMAGED.
Rule 3:
Multiple intersections with the same boundary are each counted and are classified with the
damage state of the whole boundary.
Rule 4:
Intersections with triple points count as one boundary intersection. The classification of
DAMAGED or UNDAMAGED is determined by a 'majority vote' of damage states of the
three joined boundaries.
With reference to Fig.3, Boundaries A, B and C are DAMAGED according to Rule 2. Similarly,
boundaries D, G and J are UNDAMAGED using the same rule. Boundary J also illustrates the
definition of a boundary in Rule 1 in that it extends only between the first two triple points.
Boundary intersections H and I are both counted, and must have the same damage state (in this case
UNDAMAGED) since they are on the same boundary (Rule 3).
Intersections E and F are examples of triple point intersections classified according to the 'majority vote' of Rule 4; that is, E is damaged and F is not.
If the number of damaged boundary intersections is N_D and undamaged N_U, then the number fraction of cavitating boundaries, A, is simply defined as:

A = N_D / (N_U + N_D)

The length of the traverse (L) should be recorded and the grain size, defined by the mean linear intercept, calculated:

l = L / (N_U + N_D)
In order to achieve the necessary precision in the A parameter value, it is usually necessary to count a minimum of 400 grain boundaries, achieving this by a series of parallel traverses separated by two fields of view.
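The counting rules above lend themselves to direct implementation. The following Python sketch is illustrative only (the data structure and function names are ours, not part of any cited procedure); it computes the A parameter and the mean linear intercept from tallied boundary intersections:

```python
from dataclasses import dataclass

@dataclass
class Boundary:
    damaged: bool        # damage state of the whole boundary (Rules 2 and 3)
    intersections: int   # times the traverse crosses this boundary (Rule 3)

def a_parameter(boundaries):
    """Number fraction of cavitating boundaries: A = N_D / (N_U + N_D)."""
    n_d = sum(b.intersections for b in boundaries if b.damaged)
    n_u = sum(b.intersections for b in boundaries if not b.damaged)
    return n_d / (n_u + n_d)

def mean_linear_intercept(traverse_length, boundaries):
    """Grain size l = L / (N_U + N_D) for a traverse of length L."""
    n_total = sum(b.intersections for b in boundaries)
    return traverse_length / n_total
```

In practice the tally would be accumulated over the several hundred boundaries of a full set of parallel traverses.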
Cavity density
The cavity density is most simply defined as the number of cavities per unit area.
Measurement may be by direct observation or through photographs. Microscope requirements are as
for A parameter determination, with the addition of a camera if photographs are to be taken and a
rectangular grid to allow precise definition of the area observed.
The replica is traversed, in the direction of the maximum principal stress, ensuring that there is no
overlap between successive fields of view. (Small gaps between fields are acceptable). The total
length of the traverse or the sum of the lengths of the fields of view is recorded. For each field of view, the total number of cavities observed within the field is noted. If there is any doubt as to the
identification of a feature, it is to be ignored. In cases of cavity linkage, clearly identifiable linked
cavities should be counted individually and the fact that linkage has occurred should be noted.
Counting may be by direct observation through the microscope. Alternatively, a photograph of each
field of view may be taken and the cavities counted on an enlarged print. This approach is often
more accurate at high cavity densities. To ensure that every cavity is counted once only, their images
on the photograph should be pricked through.
As with the A parameter, determinations of the cavity density for each traverse separately serve as a
check on material and damage homogeneity.
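The bookkeeping for a cavity density measurement can be sketched in the same style (again purely illustrative; the per-traverse results provide the homogeneity check mentioned above):

```python
def cavity_density(field_counts, field_area):
    """Cavities per unit area, N_A: total count over total examined area."""
    return sum(field_counts) / (len(field_counts) * field_area)

def per_traverse_densities(traverses, field_area):
    """Density for each traverse separately: a check on damage homogeneity."""
    return [cavity_density(counts, field_area) for counts in traverses]
```

Linked cavities counted individually simply enter the per-field counts, with the linkage noted separately as the text requires.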
4.3
Life Prediction
Life prediction is based on a continuum damage mechanics description of creep, in which the strain rate and the damage rate are given by a coupled pair of equations of the form:

ε̇ = C1 σ^n (1−ω)^−n

ω̇ = C2 σ^χ (1−ω)^−φ

with
T = temperature (contained in the constants C1 and C2)
σ = stress
ε = strain
ω = damage

Solution of this pair of equations yields a relationship between life fraction, strain fraction and damage:

ε/ε_r = 1 − (1 − t/t_r)^(1/λ) = 1 − (1−ω)^((1+φ)/λ)

where t_r is the rupture life, ε_r the rupture strain and λ the damage tolerance parameter.
This equation forms the basis for predicting creep life and time to crack initiation. It is also fully compatible with creep fracture mechanics (C* type approaches) and can be adapted to include cyclic creep and creep-fatigue effects. It is necessary to relate the physical measures of creep damage - A parameter and cavity density - to the state variable, ω, or the strain fraction, ε/ε_r.
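Assuming a Kachanov-type solution of the form ε/ε_r = 1 − (1 − t/t_r)^(1/λ), with λ the damage tolerance parameter (the exact exponents depend on the material constants in the rate equations), the mapping between life fraction and strain fraction can be evaluated directly. A minimal Python sketch:

```python
def strain_fraction(life_fraction, lam):
    """Strain fraction e/e_r = 1 - (1 - t/t_r)**(1/lam); assumed Kachanov-type form."""
    return 1.0 - (1.0 - life_fraction) ** (1.0 / lam)

def life_fraction(strain_frac, lam):
    """Inverse mapping: t/t_r = 1 - (1 - e/e_r)**lam."""
    return 1.0 - (1.0 - strain_frac) ** lam
```

With this pair, any measured strain fraction converts to an expended life fraction and vice versa, which is the conversion the metallographic measures feed into.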
Theoretical studies yield the following relationships, which have been confirmed experimentally:

A Parameter:
ε/ε_r = β·A

Cavity density:
ε/ε_r = N_A/N_F

where β is a constant, ε_r is the rupture ductility and N_F the cavity density at failure.
Calculations based on the A parameter
In the absence of a crack, the following relationship may be used:

LF = 1 − (1 − β·A)^λ

t_remaining = t_service · (1 − LF) / LF

where
LF = expended life fraction
A = measured A parameter value
ε̇_s = steady state creep rate
M = Monkman-Grant parameter = ε̇_s · t_r
t_service = service time to date

If the damage is uniform through the section, then this time is the time to failure; if the damage is localised, this time is the time to crack initiation.
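A numerical sketch of this route, assuming the strain fraction is proportional to the A parameter (ε/ε_r = β·A) and a Kachanov-type life fraction mapping; β and λ here are hypothetical material constants, not values from this paper:

```python
def remaining_life_from_A(A, beta, lam, service_hours):
    """LF = 1 - (1 - beta*A)**lam; remaining life = t_service*(1 - LF)/LF.
    beta, lam: assumed material constants (stress- and temperature-dependent)."""
    lf = 1.0 - (1.0 - beta * A) ** lam
    return service_hours * (1.0 - lf) / lf
```

With uniform through-section damage the result is interpreted as time to failure; with localised damage, as time to crack initiation.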
In the presence of a crack, the relationship

1 − ε/ε_r = 1 − β·A

defines the residual ductility fraction used in the crack growth rate equation.
Values of the parameters λ, φ and β are dependent on material, stress and temperature.
Calculations based on cavity density
In the absence of a crack, times to failure (for through-section damage) or crack initiation (for local damage) are given by

LF = 1 − (1 − N_A/N_F)^λ

t_remaining = t_service · (1 − LF) / LF

and all constants and variables are defined as before.
In the presence of a crack, the remaining ductility fraction is calculated directly from:

1 − ε/ε_r = 1 − N_A/N_F

As for the A parameter, lower bound, realistic or probabilistic calculations may be performed.
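The cavity density route has the same shape; the sketch below is illustrative (λ is again a hypothetical damage tolerance constant) and also exposes the residual ductility fraction 1 − N_A/N_F used in the cracked case:

```python
def life_fraction_from_cavity_density(n_a, n_f, lam):
    """LF = 1 - (1 - N_A/N_F)**lam, with N_F the cavity density at failure."""
    return 1.0 - (1.0 - n_a / n_f) ** lam

def residual_ductility_fraction(n_a, n_f):
    """Remaining ductility fraction, for use in crack growth calculations."""
    return 1.0 - n_a / n_f

def remaining_life(lf, service_hours):
    """t_remaining = t_service * (1 - LF) / LF."""
    return service_hours * (1.0 - lf) / lf
```

Lower bound, realistic or probabilistic variants simply substitute the corresponding values of N_F and λ.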
Calculations based on cavity classification method
An approximate calculation may be made by estimating the A parameter value from the qualitative
cavity classification.
A qualitative assessment of the damage level may be compared with the observed relationship between 'A' and Neubauer's classification, and an upper bound 'A' value can be selected. The maximum 'A' value could then be used in any of the 'A'-life fraction equations to yield a suitable life estimate (Ref.2).
Alternatively, the damage classification could be related to life fraction directly. Figure 4 gives a plot of damage classification vs life fraction, whence minimum and maximum remanent life fractions (and hence lives) can be obtained. Ranges for three material states (all 1CrMoV steel) are given. These include a ductile parent material and a coarse grained HAZ material of intermediate ductility (both Ref.10) and a brittle (high impurity content) coarse grained HAZ (Ref.2).
5.1
The creep strength and hardness of ferritic steels are essentially controlled by the same
microstructural process. The materials deform plastically at ambient and elevated temperatures by
the movement of dislocations through the ferrite crystal matrix. Hardness and creep strength are both
a measure of the resistance to this movement offered by the matrix dispersion of alloy carbides
(typically vanadium, chromium, molybdenum carbides). In principle, therefore, it should be possible
to estimate creep strength and therefore expired and remaining lives from a measure of surface
hardness. In practice several approaches have been developed.
The hardness values measured can be used in a variety of ways:
as a means of identifying critical component regions where hardness is markedly different from that which should be expected for a satisfactory material, eg overheated regions, improperly heat treated components
in combination with calculational assessment of remaining life and creep damage quantification,
allowing improved predictive accuracy and wider coverage of the component
as a quantitative measure of microstructural degradation for input to base material and weldment
creep models.
5.2
Temperature estimation
The strength of low-alloy steel changes with service exposure in a time and temperature dependent
manner. Thus, any measure of change in strength during service (eg change in hardness) may be
used to estimate a "Mean" operating temperature for the component. This approach is particularly
suitable when strength changes in service occur primarily as a result of carbide precipitation and
growth (microstructural coarsening). Strain-induced softening can often be neglected for the low
strains involved in plant.
The tempering responses of steels at typical service temperatures, as evidenced by hardness changes influenced by time (t) and temperature (T) of exposure, can be described by the Sherby-Dorn parameter, P = log(t) − (q/T), where T is in K. A correlation between hardness and the Sherby-Dorn parameter can be obtained by ageing a given material, with initial hardness H_0 (at t = 0), at temperature T, and measuring the change in hardness as a function of time t. The resulting relationship is H = f(P). The curve, however, is unique to the starting material condition represented by the initial hardness H_0. Figure 5 is a schematic illustration showing a typical experimentally derived H = f(P) correlation obtained on 2¼CrMo material having an initial hardness of H_0 = 190 (Ref.7).
Assuming that hardness is inversely related to interparticle spacing, a formal description of these ageing curves can be defined, by analogy with Lifshitz-Slyozov-Wagner-Greenwood coarsening kinetics:

(H_t − H_SS)^−1 = (H_0 − H_SS)^−1 + C_0 exp(−Q/RT)·t

where H_SS is the saturation (solid solution) hardness level. The temperature dependence of the Sherby-Dorn parameter is thus

q = Q/(R·ln 10)

where R is the gas constant and Q is related to the self-diffusion activation energy. These relationships may be used to predict future softening trends or to determine the mean temperature if two successive hardness measurements are available. (In some cases, the hardness difference between 'hot' and 'cold' regions of a component may be used.)
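Given an inverse-hardness ageing law of the form (H − H_ss)^−1 = (H_0 − H_ss)^−1 + C_0·exp(−Q/RT)·t, a mean operating temperature can be backed out of two successive hardness measurements. A Python sketch under that assumed kinetics (H_ss, C_0 and Q are material constants that would have to be known or assumed):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def mean_temperature(h1, t1, h2, t2, h_ss, c0, q_act):
    """Solve (H - H_ss)^-1 = (H_0 - H_ss)^-1 + c0*exp(-q_act/(R*T))*t for T,
    from hardnesses h1, h2 measured at service times t1, t2."""
    rate = (1.0 / (h2 - h_ss) - 1.0 / (h1 - h_ss)) / (t2 - t1)
    return -q_act / (R * math.log(rate / c0))
```

Note that H_0 cancels: only the change between the two measurements matters, which is why successive measurements (or a 'hot'/'cold' pair) suffice.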
5.3
Life prediction
Due to the extensive post-exposure stress rupture programmes that have been carried out over recent years, databases relating material hardness empirically to rupture life are becoming available. However, within these databases no compensation has currently been made for possible variations in heat treatment or other process variables, and therefore a wide scatterband in predicted rupture life capability exists. A lower bound fit to these data is therefore generally adopted. Nevertheless, despite the limitations, the data already constitute a useful indicator of minimum remaining life capability based on hardness measurement.
In terms of application, if measured component parent material hardness values indicate a minimum remaining life in excess of the target life, then no further refinement is required at this stage. However, if hardness values suggest the converse, then refinement of the analysis using quantitative methods or accelerated post-exposure testing should be considered.
Figure 6 gives the data currently available for 2¼CrMo steels. The rupture life axis is temperature compensated; the hardness axis is stress compensated. For known operating conditions - stress and temperature - the measured hardness of the material can be used to generate a range of predicted life. Typically the scatter is a factor of 3 smaller than that obtained using standard materials data only.
Life can also be estimated from the qualitative degradation class. Figure 7 shows the relationship
between degree of spheroidisation and life fraction for the same three materials as were included in
Fig.4. It is immediately apparent that for the most ductile material, degradation class is the more sensitive indicator of life consumption, whilst for the most brittle material, damage gives the better prediction. Most importantly, it is clear that for intermediate ductility materials, both factors need to be taken into account.
5.4
Weld failure location prediction
Using the data of Fig.6, it is possible to construct a weld predictor diagram (Fig.8). This shows weld
metal hardness against parent metal hardness and two lines corresponding to equal rupture strengths
for sub-critically stress relieved welds and for fully renormalised welds (Ref.8).
Plotting a point to show the current hardnesses of a weldment allows prediction of failure location to
be made. Above the relevant line, parent material failure is expected. Below it, weld metal failure is
expected.
It is possible for the hardnesses of weld and parent material to reduce at different rates with service
exposure, causing the plotted point for the weld to cross the line, giving a transition in failure location
with service life. This approach is currently being extended to include Type IV failures.
6.1
Temperature estimation
Methods of temperature estimation analogous to those used for hardness measurements have been
proposed, and have met with some success. Methods of time-temperature estimation based on
carbide composition and morphology (Ref.9) are also available.
6.2
Life prediction
A mechanistic model based method of quantitative life prediction has been established on the
following principles.
The presence of carbide precipitates was postulated to result in a 'threshold' stress which must be exceeded to allow dislocations to climb over the particles, so that

σ_0 = α′Gb/λ

where α′ is a constant, G is the shear modulus, b is the Burgers vector, and λ is the mean interparticle spacing. The creep-rate equation under the effective stress can be written as

ε̇ = B(σ − σ_0)^n

where σ is the applied stress and B is the constant containing the temperature dependence, defined as

B = B_0 exp(k_A·T)

The interparticle spacing coarsens with time as

λ_t = λ_0 + C_0 exp(β·T)·t

where λ_t is the instantaneous interparticle spacing at time t, λ_0 is the spacing at t = 0, T is temperature in K, and C_0 and β are constants. Thus:

ε̇ = B_0 exp(k_A·T)·[σ − α′Gb/(λ_0 + C_0 exp(β·T)·t)]^n
By substituting values of B_0, k_A, n, α′, λ_0, C_0 and β, this equation can be integrated between limits of t = 0 and t = t, and the strain accumulated up to that time can be determined. Because the creep rate is known, the failure time t_r at any arbitrarily selected value of failure strain can be calculated.
Using the above model, reasonable agreement has been demonstrated between rupture life predictions from precipitate size and actual rupture lives determined by experiment.
The model is based on the premise that once the kinetics of carbide growth are known, the creep rate and hence rupture life can be calculated. The initial carbide spacing λ_0 is usually unknown. Therefore, monitoring of the carbide spacing λ_t as a function of time, or at different locations of known temperature, is necessary in order to determine the carbide-growth kinetics.
For application of the model to a field component, ideally samples or replicas from three different temperatures should be removed and the carbide spacing λ_t measured. From these values the constants λ_0, C_0 and β in the carbide coarsening kinetics equation can be determined. The service applied stress and the local temperature where remaining life estimates are to be made should be known. Values for B_0, k_A, n and α′ have to be assumed. All these values are substituted into the above equation to compute a creep curve for the material. From the creep curve, the time to reach a given critical strain or the time to rupture can be estimated.
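The mechanics of this integration can be sketched numerically (not the paper's numbers, which are not given). All constants below are placeholders; the sketch assumes a threshold stress α′Gb/λ(t) with the carbide spacing coarsening linearly in time:

```python
import math

def time_to_strain(sigma, T, eps_f, B0, kA, n, alpha_Gb, lam0, C0, beta,
                   dt=1.0, t_max=1e6):
    """Integrate eps_dot = B0*exp(kA*T) * (sigma - alpha_Gb/lam(t))**n with
    lam(t) = lam0 + C0*exp(beta*T)*t; return the time to reach strain eps_f,
    or None if it is not reached within t_max. All constants are illustrative."""
    B = B0 * math.exp(kA * T)
    coarsening = C0 * math.exp(beta * T)
    eps, t = 0.0, 0.0
    while t < t_max:
        lam = lam0 + coarsening * t
        sigma_eff = max(sigma - alpha_Gb / lam, 0.0)  # below threshold: no creep
        eps += B * sigma_eff ** n * dt
        t += dt
        if eps >= eps_f:
            return t
    return None
```

Coarsening lowers the threshold stress α′Gb/λ, so the creep rate accelerates with time, which is what shortens the computed rupture life relative to a constant-microstructure calculation.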
The application of the carbide-coarsening model at present has numerous limitations. Carbide
distributions in steels are non-homogeneous and the starting microstructures for different components
are never the same. Therefore it is inevitable that the carbide coarsening kinetics specific to the
component must be determined by taking samples or replicas from locations of known temperature.
This is difficult to achieve in practice, since local temperature measurements in components are
rarely made. Further, if the temperatures and stresses are known, the expended life fraction can be
calculated directly instead of using the carbide-coarsening model, and the answers are expected to be
at least as accurate, if not more so.
Even after the carbide coarsening kinetics for the particular cast of the component under examination have been determined, the other constants needed, such as B_0, k_A, n and α′, still have to be assumed using bounding values of data obtained on other heats. Further, the failure criterion assumed in terms of a critical strain is arbitrary. The carbide coarsening model thus contains many constants which are difficult to obtain and evaluate.
Further, the practical application of the technique is difficult. For example, where samples cannot be removed from the component, in-situ carbide extraction techniques have to be employed, which are difficult in plant situations. Additionally, measurement of carbide spacing from extraction replicas is extremely subjective, requires a significant time commitment to achieve a representative measurement and generally gives limited reproducibility.
HARMONISATION OF RESULTS
(Ref.5,6). However, at present these modifications seem very limited in their applicability beyond the
particular classes of material and component design on which they were derived. More extensive
schemes (Ref.10), have been developed elsewhere, but these are subject to the same limitations.
It is considered, therefore, that a realistic future expectation is the development of an integrated
metallographic approach which correctly balances the influences of time, temperature and stress on
softening and strain and damage accumulation.
As a preliminary move towards such an integration, the data of Figs.4 and 7 have been combined in
Fig.9 to generate a damage degradation map. This shows the evolution of the three materials considered in terms of damage and degradation class, and contrasts damage-only, degradation-only and mixed behaviour.
On this map, life fraction contours - interpolated from the source data in Refs.2, 10 - have been
superimposed. These show, for typical service conditions, how damage and degradation processes
interact to control creep life.
DEFECTS
No material or structure is free from defects, nor immune to their formation. Ongoing improvements
in non-destructive examination techniques have provided the means to locate, characterize, size and
monitor defects such that it is now realistic to formulate rigorous procedures for their assessment.
Such procedures give a firm basis for run, repair, replace decisions and for defining inspection scope,
frequency and precision. They reflect current standards (Refs.12-16) and ongoing research worldwide
(Refs.17-19).
The procedure described here addresses the assessment of defects - either actual or postulated - in
components operating at elevated temperatures. It includes treatment of crack initiation and growth
under creep, fatigue and creep-fatigue.
The principles of each stage of the assessment process are outlined and detailed calculation
procedures are given. Throughout, the emphasis is on achieving an efficient compromise between
accuracy and simplicity.
The procedure covers the following aspects of defect analysis.

Failure process
- Global deterioration: embrittlement, ageing, creep damage
- Crack initiation: by creep, fatigue and creep-fatigue; from manufacturing/fabrication defects; from accumulated damage
- Crack growth: by creep, fatigue and creep-fatigue; interaction with ligament damage
- Failure criteria: leak-before-break
Materials
The procedure is applicable to ferritic and austenitic steels for which long term creep rupture and
ductility data are available, together with some fatigue data.
Components
The procedure covers components subject to steady mechanical and cyclic thermal or mechanical
loading, at elevated temperatures in or below the creep range.
At present it is restricted to components subject to 'global shakedown', that is, regions experiencing
cyclic plasticity are sufficiently small that the overall instantaneous load-deformation behaviour of the
structure is linear.
9.1 Cause of Cracking
Prior to performing the calculational defect assessment, the most likely cause of cracking should be identified. This will be based upon: the findings of conventional non-destructive examination (NDE), which should indicate the size, form and location of the defect(s); local metallographic examination (especially surface replication and hardness measurement), to characterize the general material condition and any damage local to the cracking; and visual inspection - including dimensional checks - to define the general component condition.
Particular situations that may be discovered include:
Evidence of stress corrosion or environmentally assisted cracking. In this case further advice
should be sought before proceeding.
Evidence of overheating, e.g. distortion plus excessive material degradation. If this is local, then
a repair may be the most cost effective solution. In any case the cause should be rectified.
Evidence of a general end-of-life situation, e.g. general degradation and/or damage in the
component, sometimes with excessive deformation. Care should be taken to use appropriate
materials data if proceeding with an assessment in such cases. Such components should only be
kept in service with cracks for a short time, until repair or replacement can be effected. This
procedure may be used to underwrite such operation.
Evidence of a local end-of-life situation, e.g. degradation and/or damage or fatigue cracking local
to a stress raising feature.
9.2 Operating Conditions
Loading and temperature histories are required for the total assessment period, past and future.
Sensible assumptions regarding future operation should be made.
Normal temperature variation during operation can be accommodated by calculating an effective
temperature for the life limiting process. Cyclic operation and start-up transients are included in the
fatigue analysis. Major changes - of long-term duration - in operating temperature can be dealt with by noting that a general time-temperature equivalence can be established for creep-dominated processes.
All applied stresses should be categorized as either primary (in equilibrium with external loads - e.g.,
mechanical) or secondary (in internal equilibrium - e.g. thermal and residual).
Account should be taken of the results of previous code-based calculations which should generate
estimates of steady state stresses, transient stresses and life fractions consumed to date by creep
and fatigue.
Previous code-based calculations will have divided the service history into periods of steady-state
operation, each characterized by a stress and temperature, and identified distinct categories of
service cycle, each characterized by heating/cooling rates and pressure and thermal stress ranges.
This information can be used directly in the defect assessment.
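The time-temperature equivalence noted above can be illustrated with a short sketch. This is a minimal illustration, assuming a single Arrhenius-activated, creep-dominated process; the default activation energy (300 kJ/mol) is a placeholder, not a value from this procedure.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equivalent_time(periods, t_ref_k, q_c=300e3):
    """Convert service periods at different temperatures into an
    equivalent duration at a reference temperature t_ref_k, assuming a
    single Arrhenius-activated creep process with activation energy
    q_c (J/mol). periods: list of (hours, temperature_K) tuples."""
    total = 0.0
    for hours, temp_k in periods:
        # A period at temp_k consumes life faster (or slower) than at
        # t_ref_k by the ratio of the Arrhenius rate factors.
        rate_ratio = math.exp(-q_c / (R * temp_k)) / math.exp(-q_c / (R * t_ref_k))
        total += hours * rate_ratio
    return total
```

A period hotter than the reference temperature thus counts as more than its clock hours, and a cooler one as fewer.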
9.3 Crack Parameters
Defects may be volumetric, planar or point.
In general, defects found during service are conservatively assumed to have existed from the start of
service at the same size as when discovered.
An accurate measure of crack size - in terms of length and through thickness depth - is required
together with as much information on the position and geometry of the defect as is available.
The generally irregular shape of a defect is idealised to an ellipse of axes 2a, 2c, based upon the
information available from the NDE data. If the defect is not aligned with a plane of principal stress,
then it should be projected onto the three principal planes and the stress intensity factors and
reference stress calculated for each plane. The assessment should be based upon the projection
onto the plane giving the highest values for these parameters. Further advice should be sought if:
the defect is at an angle of more than 20 degrees to this plane
there is less than 20% difference in either of these parameters between two planes
the highest stress intensity and the highest reference stress lie on different planes
one of the principal stresses is significantly compressive (i.e. the second in magnitude)
Interactions between defects should be accounted for. In general, the effective dimensions after
interaction are those of the overall containment rectangle.
If there are multiple defects, interactions may need to be considered iteratively.
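The plane-selection and "further advice" rules above can be sketched as follows. This is an illustrative helper, not part of the procedure: it handles only the parameter-based checks (the 20-degree orientation check needs the defect geometry and is omitted), and the data structure is invented.

```python
def select_assessment_plane(planes):
    """planes: dict of plane name -> (K, sigma_ref), the stress intensity
    factor and reference stress for the defect projected onto each of the
    three principal planes. Returns the governing plane (highest K) and a
    list of conditions under which further advice should be sought."""
    k_plane = max(planes, key=lambda p: planes[p][0])
    s_plane = max(planes, key=lambda p: planes[p][1])
    warnings = []
    if k_plane != s_plane:
        warnings.append("highest K and highest reference stress on different planes")
    # Flag <20% difference between the top two values of either parameter.
    for idx, label in ((0, "stress intensity"), (1, "reference stress")):
        vals = sorted((v[idx] for v in planes.values()), reverse=True)
        if vals[0] > 0 and (vals[0] - vals[1]) / vals[0] < 0.20:
            warnings.append(f"less than 20% difference in {label} between planes")
    return k_plane, warnings
```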
9.4 Stress Analysis
The relevant stresses are those which would exist in the neighbourhood of the defect if the
component were uncracked. Stress intensification factors are calculated within the procedure itself.
Stresses should be classified as:
Primary
Secondary
Peak
Initially, code-derived stresses are used. When greater accuracy is needed, simplified inelastic
methods are used where possible, shakedown analysis being preferred.
Alternatively, elastic analysis may be performed, with the results corrected for plasticity. Neuber's
method is commonly applied.
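Neuber's method equates the product of local stress and strain to its elastically calculated value. The sketch below assumes a Ramberg-Osgood stress-strain curve and solves by bisection; the material constants used in the example are illustrative, not values from this procedure.

```python
def neuber_correction(s_elastic, e_mod, k_ro, n_ro):
    """Correct an elastically calculated stress for plasticity using
    Neuber's rule: sigma * eps = s_elastic**2 / e_mod, with a
    Ramberg-Osgood curve eps = sigma/e_mod + (sigma/k_ro)**(1/n_ro).
    Solved by bisection, since sigma*eps(sigma) increases with sigma."""
    target = s_elastic ** 2 / e_mod
    lo, hi = 0.0, s_elastic  # corrected stress cannot exceed the elastic one
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        eps = mid / e_mod + (mid / k_ro) ** (1.0 / n_ro)
        if mid * eps < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In the elastic regime the correction returns essentially the input stress; well above yield it returns a markedly lower local stress with a correspondingly larger strain.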
Initial elastic and creep redistributed stresses are required - at the critical point(s) for initiation and
through the structure for crack growth. The timescale for redistribution should also be determined,
from creep/relaxation data.
For fatigue and creep-fatigue assessment, typical operational cycles should be analysed and - using
creep/relaxation and (cyclic) stress strain data - hysteresis loops derived. From these the stress and
strain ranges may be obtained.
10
As understanding of materials behaviour improves, unified models of creep and plasticity are being derived. This procedure is formulated to use these approaches where possible, thus allowing
consistent description of flow and creep strengths, rupture lives and ductilities, damage and
hardening. Potentially, complete integration of these with the fatigue models is possible. Such
approaches have great value where raw data are in short supply. In the absence of data appropriate
to such models, most of the information required for defect assessment can be obtained or estimated
from standard data tables. In many cases, simple approximations are also available (Ref.13). These
can be used if no better information is currently available. They are also suitable for a preliminary,
simplified defect assessment.
Unified model

[Equations garbled in reproduction. The unified model expresses the creep rate in a form such as

    de/dt = e0 (s/s0)^n (1 - w)^(-n) exp(-Qc/RT)

with normalizing constants e0 and s0, stress exponent n, damage parameter w and activation energies Qc, QD.]

Standard data

[Equations garbled in reproduction: the minimum creep rate follows a relation of the form

    emin = M s^n exp(-Qc/RT)

and primary creep is approximated by the logarithmic law

    e = e0 ln(t/3600 + 1).]

If the minimum creep rate emin is derived, as above, for the initial stress s0, then the relaxation per time increment is

    Ds = E emin Dt

where E is the tensile modulus.

Unified model

This is a simple inversion of the creep equation, giving the time dependence of strain under load control and of stress under strain control.

[Equations garbled in reproduction: inversions of the unified creep equation, containing the factors (1 - w)^n and exp(Qc/RT).]
Standard data
Tensile data are directly obtainable from standard data, which provide stress-strain data including yield (or 0.2% proof stress) and ultimate tensile strength. These tables give minimum values. A realistic estimate of the ultimate tensile strength may be obtained from the hardness of the material.
The creep crack growth rate is given by

    da/dt = A (C*)^q

where A is a function of creep ductility (approximately 0.003/ef) and

    q = n/(n+1)

Fatigue crack growth is described by the Paris law

    da/dN = C (dK)^m

where (in consistent units)

    m = 3

may be used for ferritic and austenitic steels.
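The two growth laws can be written directly in code. This is a sketch in consistent units: the constant 0.003/ductility and the exponent n/(n+1) follow the simple approximations quoted above, while the Paris constant C is material data the user must supply.

```python
def creep_crack_growth_rate(c_star, n, ductility):
    """Creep crack growth rate da/dt = A * (C*)**q, with q = n/(n+1)
    and A approximated as 0.003/ductility (consistent units assumed)."""
    a_const = 0.003 / ductility
    q = n / (n + 1.0)
    return a_const * c_star ** q

def fatigue_crack_growth_per_cycle(delta_k, c_const, m=3.0):
    """Paris-law fatigue growth da/dN = C * (dK)**m; m = 3 may be used
    for ferritic and austenitic steels."""
    return c_const * delta_k ** m
```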
Typical fracture toughness data are:

Material                    Temperature range (C)   Mean   Lower bound
Si killed C, CMn steels     300-380                 164    99
Al killed C, CMn steels     300-380                 196    146
2 1/4 CrMo steel (low alloy)  100-500               150    100
Wrought AISI 300 series     300-600                 140    105
11

12 LIFE PREDICTION
The time to failure by global deterioration is first calculated, as this may be life limiting. The total life
due to crack initiation and growth, to the fast fracture limit, is then determined. Comparison of these
timescales gives the overall life of the structure (Fig.11).
Consideration of the sensitivity of the defects to overloads is required, as this may impose the
effective limit to operation.
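The comparison of timescales described above can be sketched in a few lines; function and argument names are illustrative, not from the procedure.

```python
def overall_life(t_global, t_initiation, t_growth_to_fast_fracture):
    """Overall structural life is the shorter of (a) the time to failure
    by global deterioration and (b) the total time for crack initiation
    plus growth to the fast-fracture limit (cf. Fig.11)."""
    t_crack = t_initiation + t_growth_to_fast_fracture
    return min(t_global, t_crack)
```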
13 ACKNOWLEDGMENTS
This paper is published with the permission of ERA Technology Ltd. It represents not merely the work
and experience of the authors but also the efforts of many direct collaborators and other workers in
the field - as the references show. Particularly, much of the recent consolidation of the approaches
discussed has taken place within the EU sponsored SPRINT Project SP249. The authors give their
special thanks to all the partners in that project.
14 REFERENCES
Shammas, M.: Metallographic methods for predicting the remanent life of ferritic coarse-grained weld heat affected zones subject to creep cavitation. Proc. Int. Conf. 'Life Assessment and Life Extension', VGB-EPRI-KEMA-CRIEPI, The Hague, 1988.
Neubauer, B.: Bewertung der Restlebensdauer zeitstandbeanspruchter Gefugeuntersuchungen. 3R International 19 (1980), H. 11, pp. 628-633.
VGB Technical Report VGB-TW-507: Guideline for the Assessment of Microstructure and Damage Development of Creep Exposed Materials for Pipes and Boiler Components. VGB, Essen, 1992.
10
11
12
13
14
15
16
17
18
Riedel, H.: Fracture at High Temperatures. Springer-Verlag, Berlin, 1987.
19
[Table: microstructural degradation and damage classes. Degradation: ferrite and pearlite lamellar (stage 1); intermediate stage of spheroidisation, pearlite started to spheroidise but lamellae still evident (stages 2.1-2.3); spheroidisation complete but carbides still grouped in their original pearlitic grains (stages 3.1-3.3); further stages 4.1-4.3 and 5. Damage: A - no damage/cavitation; B - isolated cavitation; C - orientated cavitation; D - micro-cracks. Correspondence of degradation stage to damage class: 2a - 0/1; 2b - 1; 3a - 1/2; 3b - 2; 4 - 2/3, 3, 3/4; 5 - 4.]
Fig.4: Relationship between damage and life fraction (damage class, 0-5, versus life fraction, %, for ductile and brittle behaviour)
[Fig.5: initial hardness (HV20) versus log time for normalised and tempered 2 1/4 Cr-Mo at temperatures from 550 to 750 C]
Fig.6: Normalised stress-rupture plot for 2 1/4 CrMo parent and weld material, compensated for hardness
[Fig.7: damage class versus life fraction, %, for ductile, intermediate and brittle behaviour]
[Fig.8: hardness (HV) of weld or CGHAZ versus parent hardness, indicating the region of weld/HAZ failures]
Fig.9: Damage and degradation interactions (damage class versus degradation class)
[Figure: failure assessment diagram, with the assessment line separating the acceptable region from brittle fracture and plastic collapse; the region outside the line is structurally disallowed]
[Fig.11: schematic of defect behaviour with time, showing crack initiation; crack growth; reduction in critical crack size with global deterioration; remaining life of the ligament ahead of the crack, as a function of crack size and material degradation; and time to fast fracture]
1. Introduction
Following a proposal of 13 European partners - namely AZT (Allianz), Ismaning, FR Germany; EdF, Paris, France; EDP, Lisbon, Portugal; Endesa, Ponferrada, Spain; ERA Technology, Leatherhead, UK; ESB, Dublin, Ireland; GKM, Mannheim, FR Germany; ISQ, Lisbon, Portugal; , Vantaa, Finland; Laborelec, Linkebeek, Belgium; MPA, Stuttgart, FR Germany; Tecnatom, Madrid, Spain; and VTT, Espoo, Finland - under the coordination of MPA Stuttgart, a SPRINT Specific Project (designated SP249) has been approved and is currently running (1993-95).
The overall goal of SP249 has been to enhance the transfer of component life assessment (CLA) technology for high-temperature components of fossil fuel power plants, assuring diffusion of modern state-of-the-art plant CLA technology among power plant utilities and
research organizations in Europe. The project addresses pressure parts operating at elevated
temperature (in creep and creep-fatigue regime) in fossil power plants (Brear, Jovanovic,
1992).
Figure 1: Two basic elements of SP249
The basic idea of the project organization (Figure 1) is that the knowledge coming from the
power plant should be first summarized in the form of guidelines (paper) and then transferred
into the KBS. The CLA technology coming from different sources will thus be "packed" into a
framework similar to the one used in MPA ESR system (Jovanovic, Maile, 1992).
Main recipients (users) of the SP249 guidelines and KBS will be utilities in Belgium, France,
Finland, Germany, Ireland, Portugal and Spain. The "KBS-supported" use of the guidelines
and the corresponding training of end-users personnel are major issues in the project.
guidelines, partly on the experience and heuristic knowledge incorporated into the
CLA guidelines.
b) Recommendation regarding the annual inspection (revision)
c) Damage analysis
Using the system, the user is expected to be supported by an "intelligent environment", helping him to calculate, retrieve data (about material, component, etc.), retrieve the necessary standards, obtain advice and, finally, find an optimized solution for his problem (Figure 2).
5. Strategic goals of SP249 - A European de facto standard
The project has defined the principal levels of the CLA-related problem tree and identified as its main causes the "uneven distribution of CLA technology" and the "uneven distribution of experts/resources" (in Europe for SP249, but probably elsewhere too). This means that there is a lack of use of advanced (existing!) CLA technology, and it is therefore clear that the project must address the issue of how the technology can be brought into use at the recipients of the technology - in other words, the issue of modern and successful inter-European technology transfer.
The KBS technology has been identified as a modern and appropriate one (Brear, Jovanovic, 1992) and, in order to allow the incorporation of the CLA technology into the knowledge-based system, a need to consolidate and adapt this technology was identified. In other words, it has been necessary to bring the technology into "computer-digestible" form. These tasks are recognized to be considerable exercises, needing frequent review to ensure success, but it is also realized that there will be a number of spin-off benefits, particularly in the way of guidelines and procedural documents that will pave the way towards the de facto European standard desired for plant life management (in terms of CLA).
6. Expected benefits
SP249 will facilitate wider exploitation of CLA technology in the Union, leading to environmental and economic benefits. These would include the following:
The technology facilitates life extension of aging plant. There is an estimated 4 billion ECU investment in boiler plant in Europe. Taking into account the significance of critical high-temperature components in retirement decisions, and assuming that 20% of plant may have its life extended by 10 years, a financial benefit to European industry of 200 million ECU per year is estimated. Further financial benefits accrue from optimized replacement and refurbishment planning, thereby maximizing the potential of capital investment, and from reductions in forced outages, increasing plant efficiency and reliability.
Figure 2: "Intelligent" environment for the CLA analysis in SP249
For a single utility company, and in the long term, the expected benefits can be seen from an example (here the Compostilla plant of the Endesa utility company in Spain):

In the long term, if it is possible to extend the operating life of the Compostilla fossil power plant by 5 years, and SP249 provides help in achieving this goal, the extra sales would be

1312 x 10^3 kW x 6000 hours/year x 1.5 PTA/kWh x 5 years = 59,000 x 10^6 PTA
or about 400 x 10^6 ECU. Endesa has 4 fossil power plants in addition to the Compostilla power plant.
In the short term, the expected benefit would be a decrease in maintenance costs. The total annual maintenance cost for the Compostilla power plant would be, considering a maintenance cost of 0.30 PTA/kWh,

0.30 PTA/kWh x 1312 x 10^3 kW x 6000 h = 2,360 x 10^6 PTA

or about 18 x 10^6 ECU. If it is possible to decrease this cost by 2% using SP249, the annual benefit would be 0.02 x 18 x 10^6 ECU = 360,000 ECU, each year at the Compostilla power plant. Again, Endesa has 4 fossil power plants in addition to the Compostilla power plant.
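The benefit arithmetic above can be checked with a short script. The PTA/ECU rate (about 131 PTA per ECU) is inferred from the paper's own conversion of 2,360 x 10^6 PTA to roughly 18 x 10^6 ECU, and is an assumption.

```python
def long_term_benefit_pta(capacity_kw=1312e3, hours_per_year=6000,
                          margin_pta_per_kwh=1.5, years=5):
    # Extra sales over a 5-year life extension, in PTA.
    return capacity_kw * hours_per_year * margin_pta_per_kwh * years

def short_term_saving_ecu(capacity_kw=1312e3, hours_per_year=6000,
                          maint_pta_per_kwh=0.30, saving_fraction=0.02,
                          pta_per_ecu=131.0):
    # Annual maintenance cost, then the assumed 2% saving, converted to
    # ECU. pta_per_ecu is an assumed exchange rate, not from the paper.
    annual_cost_pta = capacity_kw * hours_per_year * maint_pta_per_kwh
    return annual_cost_pta * saving_fraction / pta_per_ecu
```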
Besides, individual utilities expect to benefit from simplified maintenance/inspection planning
resulting from higher precision component life predictions and from an ability to deploy
precious human resources more effectively. Both are highlighted in utility questionnaire
responses (Brear, Jovanovic, 1992). Optimized component life assessment leads to reduced
risks - of both large scale catastrophic failures and of small scale but extended duration
environmental degradation (use of new sites for new plant, higher emissions etc.). Such
factors, though of great importance, are not easy to quantify.
SP249 KBS
The first version of the system has been produced and distributed to partners in
May 1994. Partners' comments and wishes are currently being implemented, and
bugs eliminated.
By the midterm of the project, the following modules had been programmed and implemented into the SP249 KBS system:
(Overall structure of the system)
Object management modules
Advance Assessment Route
Case history management (with about 100 case histories)
Documentation management (with all CLA Generic Guidelines and relevant DIN, TRD, ASME, VGB and NT standards)
Material database (with relevant ISO, DIN, BSI, ASTM and other
materials)
A - Parameter Calculation
Hardness Calculation
TULIP (Tube Life Prediction)
Case History Selection and Management
Crack Dating
SP249 Remanent Life Calculation
Defect Assessment
Furthermore, an 'Observer group' of over 25 European and world experts has been established in order to ensure widespread dissemination of the experience gained within SP249.
Tools used: KAPPA-PC, Guide, MS C++, MS Visual Basic, MS Access. See [3] for details.
MS, Microsoft and Windows are trademarks of Microsoft Corporation.
DDE = dynamic data exchange, OLE = object linking and embedding, DLL = dynamic link library.
specific tasks) linked to the kernel of the system represented by the SP 249 Workbench. This structure is shown in Figure 3, while the main tasks of each module are given in Table 1.
Table 1: Single modules and their tasks in SP 249 KBS

Module                      Task
Workbench                   overview/control of the modules, logging of the session
Advanced Assessment Route   advice for the next action to the user
Documentation Base          background information and on-line documentation
Case Studies                background information for support in decision making
Single Calculations         to calculate single results (as input for AAR)
The modules communicate with the kernel module, called Workbench, mainly via DDE. This
communication contains the main results and other status information.
Data are stored as objects in the SP 249 knowledge base in a hierarchical structure. The hierarchy of objects containing data relevant for the SP 249 analysis (the "plant objects") is stored as a sub-structure having Europe as root. Further levels in the hierarchy are "country" (e.g. Germany, Spain, etc.), "plant" (e.g. Carregado, GKM, etc.), "block" (e.g. Block No. 1, Gruppo No. 1, etc.), "system" (e.g. main steam pipe, superheater header, etc.), "component" (e.g. elbow, T-piece, etc.) and "location" (e.g. location n, weld upper side, etc.). The hierarchy is shown schematically in Figure 4.
Inputs and outputs of the calculations/analyses performed are also handled as objects. These "calculation/analysis objects" are then attached to the various "plant objects". For example, the remaining life calculation based on hardness measurements can be performed on a location, while the TRD calculation can be performed on a component. A list of available analyses/calculations is given in section 10.2.
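The hierarchy and the attachment of calculation/analysis objects can be sketched as a simple tree. This is an illustrative reconstruction, not the actual SP 249 object model; names and the example result are invented.

```python
class PlantObject:
    """Node in the SP 249 plant-object hierarchy: Europe -> country ->
    plant -> block -> system -> component -> location. Calculation and
    analysis results are attached to nodes as 'analysis objects'."""
    def __init__(self, name, level):
        self.name, self.level = name, level
        self.children, self.analyses = [], []

    def add_child(self, name, level):
        node = PlantObject(name, level)
        self.children.append(node)
        return node

europe = PlantObject("Europe", "root")
germany = europe.add_child("Germany", "country")
gkm = germany.add_child("GKM", "plant")
block1 = gkm.add_child("Block No. 1", "block")
pipe = block1.add_child("main steam pipe", "system")
elbow = pipe.add_child("elbow", "component")
weld = elbow.add_child("weld upper side", "location")
# e.g. a hardness-based remaining-life result attaches to a location:
weld.analyses.append({"type": "remaining life (hardness)", "result_h": 120e3})
```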
[Figure 3: structure of the SP 249 KBS - single calculation modules (Module 1, Module 2, Module 3, ...) linked to the system kernel (Workbench), the Advanced Assessment Route module (intelligent flowchart), case study selection, and the hypertext documentation base containing single documents and single cases]
[Figure 4: example of the object hierarchy - Europe; countries (Germany, England, Spain); plants (GKM, ERA, Endesa); blocks (Block 01, Block 10, Block 15); systems (main steam pipe); components (T-piece, Y-piece); locations (Weld 01)]
the user needs to care about saving input and results in a file;
2. started from the SP 249 System as a so-called "bound" calculation (see Figure 12):
the necessary basic data are passed to the calculation from the system kernel,
the user does not need to care about data storing,
the result is returned to the kernel, which will use it for further examination;
3. started from the AAR in the ExpertChart application as a "bound" calculation (Figure 2):
as in the second way,
in addition, the result is also returned to the AAR module, which will use it for further examination and advice.
The modular construction was based on the end-users' requests, to reflect the way they work.
a) On the one hand, they have to deliver single calculation results; they therefore use the modules like pocket calculators.
b) On the other hand, when the system is used as described in the introduction, the calculation modules serve the higher goal of deciding upon "run/repair/replace"; the coupling of all the single modules has therefore been automated.
10.2 Single calculation/analysis modules
Based on the idea of the engineering 'tool box', the object-oriented architecture allows single modules to be introduced. The main task of these modules is integrated use, with full integration of data and program start-up. The calculation/analysis modules can also be used as single applications in the MS Windows environment. The modules developed and integrated into the system are listed in chapter 7. Here, the TRD inverse design calculation and the 'A'-parameter module are described as representative examples. Moreover, the SP 249 material database, as the supporting module for the calculations, is described.
10.2.1 TRD inverse design Calculation
The TRD calculation module calculates the life fraction based on inverse design following the
German TRD design code. Creep and fatigue are taken into account. The stress calculation is
possible for straight tube, elbow, T-piece, Y-piece and header geometry. The module uses the
SP 249 material data base (described in section 10.2.3) as an underlying module. p-T tables
can be imported from an on-line data retrieval system. The calculation of the usage due to cyclic loading is possible in three different ways, as described in the TRD code.
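The combination of creep and fatigue usage into a life fraction can be sketched as a linear damage summation. This is a generic illustration of the kind of life-fraction bookkeeping involved, not the TRD calculation itself; the allowable hours and cycles would come from the material database.

```python
def life_fraction(creep_periods, fatigue_blocks):
    """Combined usage by linear damage summation:
    creep_periods: list of (hours_at_condition, allowable_hours),
    fatigue_blocks: list of (cycles_experienced, allowable_cycles).
    A result approaching 1.0 indicates exhaustion of design life."""
    creep_usage = sum(t / t_allow for t, t_allow in creep_periods)
    fatigue_usage = sum(n / n_allow for n, n_allow in fatigue_blocks)
    return creep_usage + fatigue_usage
```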
10.2.2 'A'-Parameter Calculation
The 'A'-parameter is defined as the fraction of cavitating grain boundaries encountered along a line parallel to the direction of maximum principal stress. After performing the measurements with an optical microscope, the values have to be typed into the software. The software then calculates the remaining life and the necessary statistical data, which show the accuracy of the measurements. In the absence of a crack the life fraction LF is calculated; in the presence of a crack the residual ductility fraction, used in the crack growth rate equation, is calculated. The parameters on which the lifetime calculation is based are included as standard values in the program. Figure 6 shows the appearance of the module on a large screen.
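The core of the 'A'-parameter evaluation can be sketched in a few lines. The binomial standard deviation shown is an illustrative stand-in for the fuller statistics the module reports.

```python
def a_parameter(boundary_flags):
    """'A'-parameter: fraction of cavitating grain boundaries among those
    intersected by a line parallel to the maximum principal stress.
    boundary_flags: sequence of booleans, True = cavitated boundary."""
    n = len(boundary_flags)
    damaged = sum(1 for flag in boundary_flags if flag)
    a = damaged / n
    # Binomial standard deviation as a simple indicator of the scatter
    # of the measurement.
    sd = (a * (1.0 - a) / n) ** 0.5
    return a, sd
```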
[Figure 6: screen of the 'A'-parameter module, with menus File, Edit, View, Data input, Calculation, Results, Window and Help; tabulated boundary counts with statistics (standard deviations, confidence bands); the lifetime calculation; and the classification of damaged boundaries (B, D, F) versus undamaged boundaries (A, C, E)]
temperature components in power plants, given by standards like ISO, DIN, AFNOR, ASTM, BS and others, as well as some data from other sources.
Given that the users of the SP249 material database are subject to different statutory
requirements, the following approach has been adopted. The information from each relevant
summary document has been included into a common format, with blank fields where data are
not provided/available. Since the ISO data provide the closest existing approach to a
consolidated data set, blank fields in the ISO data sheets are filled, where possible, with the
best available data from elsewhere. For convenience and to help comparison, the materials are
grouped into families, classes and subclasses forming a hierarchical structure. All data are
stored in twelve data tables. The contents of the data tables are:
Title and description of the material, source specification, range for which the data are
expected to apply, tensile data, rupture parameters, stress dependency of rupture life
(parameters), stress dependency of rupture life (explicit), rupture strength - creep strength
relationship, average rupture strengths, allowable rupture strengths, creep strengths, physical
property data
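The blank-field filling policy described above can be sketched as a simple merge, with the ISO record as the base and the other sources consulted in preference order. Field names are invented for illustration.

```python
def consolidate(iso_record, other_records):
    """Fill blank (None) fields in the ISO data sheet with the best
    available value from other sources, consulted in preference order."""
    merged = dict(iso_record)
    for field, value in iso_record.items():
        if value is None:
            for rec in other_records:  # ordered by preference
                if rec.get(field) is not None:
                    merged[field] = rec[field]
                    break
    return merged
```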
Case studies
The SP249 KBS contains a series (currently 102) of case studies (histories) describing
interesting cases of high temperature component damage and/or life analysis. These case
histories are stored in a format agreed by the project partners. They are managed by the
corresponding case study selection module. The matrix contains typical combinations of
component types and materials. The search of case studies is carried out within the dialog box
shown in Figure 9. A second way of searching is a selection by keywords.
The two "dimensions" (materials/components) are hierarchically structured. A search is started by selecting an item in the list of materials and another in the list of components. All cases within the selected classes and their substructure will be found. The number of entries found is shown in the upper right corner, while the case names are listed below it. The user selects a single case out of the range of listed cases, which is then displayed in the hypertext environment.
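The hierarchical search over the two "dimensions" can be sketched as follows; encoding the hierarchy levels as '/'-separated paths is an assumption made for illustration.

```python
def find_cases(cases, material_class, component_class):
    """Return all case studies whose material and component fall within
    the selected classes or their sub-structure. Hierarchy levels are
    encoded as '/'-separated paths, e.g.
    'Ferritic Steels/Low Alloy Steels/1CrMo'."""
    def within(path, selected):
        return path == selected or path.startswith(selected + "/")
    return [c for c in cases
            if within(c["material"], material_class)
            and within(c["component"], component_class)]
```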
Documentation base
[Figure 7: the documentation base tree - TRD, DIN, VGB Guidelines, with documents such as 17155, R 509 L, 508, 17175 and 54150; technical agreements and TS reports; Nordtest; ASME; and SP 249 specific documents (CLA GG 000 to CLA GG 009, the CLA Advanced Assessment Route and the VTT inspection criteria of hot pipework)]
The usefulness of the system was further enhanced by linking the case studies with the AAR. This is realised by means of keywords attached to single steps of the AAR and keywords attached to the case studies. The case studies are in this way used as additional explanation of how single steps in the AAR are to be performed.
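The keyword linking can be sketched as a simple overlap ranking; the data shapes and the ranking rule are invented for illustration.

```python
def cases_for_step(step_keywords, case_studies):
    """Rank case studies by keyword overlap with an AAR step, so that
    matching cases can be offered as worked explanations of the step."""
    scored = []
    for case in case_studies:
        overlap = set(step_keywords) & set(case["keywords"])
        if overlap:
            scored.append((len(overlap), case["name"]))
    # Best-matching cases first.
    return [name for _, name in sorted(scored, reverse=True)]
```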
[Figure 8: hypertext view of the Generic Guidelines for Component Life Assessment GG001, 'Metallographic Methods: Surface Replication, Hardness Measurement, Carbide Extraction Replicas', with contents: Summary; 1 Introduction; 2 A Standard Procedure for Surface Replication; 3 Interpretation of Surface Replicas; 4 Hardness Measurement; 5 Interpretation of Hardness Measurements; 6 A Standard Procedure for Taking Carbide Extraction Replicas; 7 The Interpretation of Extraction Replicas; with navigation buttons Main Menu, Back, Contents and Quit]
[Figure 9: Case study matrix dialog - a materials tree (ferritic steels: C/CMn steels, low alloy steels, CrMoV and Cr steels; intermediate alloy steels; high alloy steels; austenitic steels; other materials) and a components tree (yield range; creep range outside heating: superheater tubing, reheater tubing, other components; creep range inside heating: superheater inlet header, superheater outlet header, reheater outlet header, main steam piping (bend, T-/Y-piece, valve/cooler), reheat steam piping, other components)]

Figure 9: Dialog box of Case study matrix
[Workbench window: menus Object, View, Calculations and Help, with buttons for Documentation Base, Case Studies, Material Database, Open, Edit Objects and Select Object]
shows the dialog for editing the object tree. The user can add, rename, delete and edit objects.
If the user wants to analyse an object in the object base, he needs to select it first. He would
therefore use the 'Object Selection' dialog.
12.3 Use of single calculations from the SP 249 Workbench
As described previously, the execution of a single calculation in the SP 249 KBS is also possible. The 'Execute' menu as shown in Figure 12 lists the available calculations. If an object is selected (displayed in the title bar of the Workbench window, here 'Header Body'), the user will be asked whether to start a calculation for the selected object ("bound") or an independent ("unbound") calculation. If no object is selected, the user can only start an "unbound" calculation. After confirming the "unbound" mode startup, the system launches the corresponding module.
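The bound/unbound startup logic described above can be sketched as follows; the function and parameter names are illustrative, not the actual SP 249 interface.

```python
# Hypothetical sketch of the Workbench launch logic: bound to a selected
# object, or unbound/independent. Names are illustrative only.
def launch_calculation(module, selected_object=None, ask_user=None):
    """Start a calculation either bound to the selected object or unbound."""
    if selected_object is not None:
        # An object is selected: the user chooses bound or unbound mode.
        mode = ask_user("Start calculation for '%s' (bound) or independent (unbound)?"
                        % selected_object)
    else:
        # No object selected: only an unbound calculation is possible.
        mode = "unbound"
    return {"module": module, "mode": mode,
            "object": selected_object if mode == "bound" else None}
```

For example, `launch_calculation("AAR")` with no selection can only start in "unbound" mode.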
12.4 Module description
The Advanced Assessment Route (AAR) module is the key module in the SP 249 System. It combines all the calculation modules into an overall advanced assessment route. The AAR itself supports the user in deciding upon the basic goals of the SP 249 System application (run/repair/replace).
[Screenshot of the 'Object Selection' dialog: a tree of countries (Europe: Germany, Belgium, Spain, Portugal, Great Britain, Ireland, France, Finland), utilities (e.g. ISQ, EdP) and plants (e.g. Carregado Units 5-7, with components such as Main Steam Pipe), plus the Workbench menu bar (Documentation, Case Base Studies, AAR, Report, Save, Help, Calculations).]
[Flowchart of the assessment route, Phases 1-6: Phase 1 leads either to 'Continue Monitoring' or to Phase 2, Inspection-based assessment (access for NDE & metallography); then Phase 3, Repair / Replace / Refine assessment. If the inspection results are OK or no replacement is necessary: Phase 4, Implement Monitoring & Inspection Strategy. If replacement is necessary: Phase 5, Replacement / retirement strategy.]
14. Conclusions
The SP249 Project and the development of the SP249 knowledge-based system and its future deployment in power plants should help in achieving a series of economic and technical benefits: e.g. improved availability of systems and plants, shorter and better utilized maintenance periods, reduced costs of scheduled inspections due to the optimized inspection strategy, reduced costs of daily operation (specialists called only when necessary), reduced unplanned costs, and improved possibilities for the life extension of the plants.
The joint effort of the CEC and European industries (utility companies and consulting and research organizations), based on the large-scale European application of KBS technology (the total value of the SP249 project is about 2.5 MECU), marks a milestone for KBS technology applications in the area of power plant operation and management in Europe. It opens the way for further applications in the area and establishes KBS technology both as a part of modern CLA technology and as a powerful vehicle for technology transfer.
From the point of view of the applied KBS solutions, the SP249 system is a modern, integrated, object-oriented system. As indicated by Jovanovic and Bogaerts (1991), conventional (production rule based) expert systems alone can deal successfully only with a very limited range of practical problems in the domain of CLA technology. They often tend to "block" the dialog at the moment when the user is not sure what to answer the system, either because he needs an explanation of the question or because he is asked to provide (to the system) some additional information which is currently not known/available. A possible answer to this "blocking" of the dialog and similar related problems characterizing conventional expert systems is to integrate tightly all additional system modules (e.g. numerics including finite elements, databases, etc.), so that the user is hardly aware that he/she is dealing with different kinds of software. This idea is the baseline of the MPA's approach called "KISS": Knowledge-based Integrated Software Systems, or Knowledge-based Intelligent Integrated
Software Systems, which has been applied in SP249. It follows the same line of thinking which led to (e.g.) Intelligent Databases, Intelligent Hypermedia Systems and KBSs with Hypermedia Support (Parsaye et al., 1989).
By combining the efforts of the CEC and European industries (utility companies and consulting and research organizations), SP249 will become a milestone for large-scale applications of KBS technology, both as a part of modern CLA technology and as a powerful vehicle for technology transfer.
[Screenshot of an ExpertChart flowchart (IBA_UT2.CHT): Phase (21.1.2) (G013.12), indicate microstructural zones containing creep damage; Phase (21.1.4.3) (G0133.4), determine the inspection interval; if damage assessment is not possible or the inspection interval is not adequate, continue with Phase (21.2); otherwise continue with Phase (21.1).]
15. Acknowledgments
The author wishes to acknowledge the precious help and collaboration of the partners in SP249 (in alphabetical order): ATT (Allianz), Ismaning, FR Germany; EdF, Paris, France; EdP, Lisbon, Portugal; Endesa, Ponferrada, Spain; ERA Technology, Leatherhead, UK; ESB, Dublin, Ireland; GKM, Mannheim, FR Germany; ISQ, Lisbon, Portugal; IVO, Vantaa, Finland; Laborelec, Linkebeek, Belgium; Tecnatom, Madrid, Spain; and VTT, Espoo, Finland. The support of the Commission of the European Communities and the staff of SPRINT-TAU, CEC, Luxembourg is highly appreciated. Special thanks go to the persons involved in the project on behalf of their companies, in particular to Dr. L. Hagn, Allianz; to Messrs. G. Thoraval and P. Rivron, EDF; to Mr. A. Batista, EdP; to Mr. E. Santos, Endesa; to Messrs. J. M. Brear and J. Jones and A. Bissell, ERA; to Prof. H. R. Kautz, GKM; to Dr. C. de Araújo, ISQ; to Mrs. U. McNiven and Mr. J. Rantala, IVO; to Mrs. Verelst, Laborelec; to Mrs. M. Aguado, Tecnatom; to Mr. P. Auerkari, VTT; to Mr. P. Löwe, SPRINT-TAU; to Messrs. Friemann and
Kluttig of MPA Stuttgart, and to all others who have in one form or another contributed to the realization of this large European project.
16. References
ACT (1992). Advanced Computer Technology Conference 1992, held in Phoenix, Arizona, December 9-11, 1992, Proceedings, Vols 1 and 2, published by EPRI, Palo Alto, US, December 1992
Brear, J. M., Jones, G. (1994). A consolidated approach to component life assessment in SP249, Proceedings of the 20th MPA Seminar, vol. 3, MPA Stuttgart
Brear, J. M., Jovanovic, A. (1992). SPRINT Specific Project SP249 "Implementation of Power Plant Component Life Assessment Technology Using a Knowledge-Based System", Phase I Definition, Final report, May 1992, ERA Technology, Leatherhead, UK, and MPA Stuttgart, FR Germany
Jovanovic, A., Bogaerts, W. (1991). Hybrid knowledge-based and hypermedia systems for engineering applications, Avignon '91 Conference Expert Systems and their Applications (vol. Tutorial Nr. 13), Avignon, May 27-31, 1991
Jovanovic, A., Friemann, M. (1994). Overall structure and use of SP249 knowledge based system, Proceedings of the 20th MPA Seminar, vol. 3, MPA Stuttgart
Jovanovic, A., Friemann, M., Kautz, H. R. (1992). Practical realization of intelligent interprocess communication in integrated expert systems in materials and structural engineering. Proc. of the Avignon '92 Conference Expert Systems and their Applications (Vol. 2, Specialized Conferences), Avignon, pp. 707-718
Jovanovic, A., Gehl, S. (1991). Some expert systems for power plant components in Europe and USA. Proc. of the SMiRT 11 Post Conference Seminar Nr. 13 "Expert Systems and AI Applications in the Power Generation Industry", Hakone (Japan), Aug. 26-28, 1991
Jovanovic, A., Maile, K. (1992). ESRA: Large Knowledge Based System Project of European Power Generation Industry. Expert Systems With Applications, Vol. 5: 465-477
Parsaye, K., Chignell, M., Khoshafian, S. and Wong, H. (1989). Intelligent databases: Object-oriented, deductive hypermedia technologies. John Wiley & Sons Inc., New York, Chichester, Brisbane, Toronto, Singapore, 479 pp.
TRD Technical Rules for Steam Boilers; Deutscher Dampfkessel-Ausschuß (DDA), Vereinigung der Technischen Überwachungs-Vereine e.V. (VdTÜV), Essen
ABSTRACT
Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. The non-mandatory approach is becoming overwhelmingly dominant and provides routes for
improved overall economy in the inspection policies. However, the trend cannot override
fundamental component life and safety related requirements. This creates both a need and an
opportunity for systematic methodologies to manage the process of inspection planning. For
certain aspects of the process such tools already exist and are widely used, because they have
been available and useful even for the mandatory inspections. These tools include eg project
type planning and execution timing for the actual off-line work, as well as data management
and mapping of the inspection results. Until recently, however, many of the decisions related to the actual content and timing of non-mandatory inspections were not subject to such
systematic tools or methodologies. This is about to change with the increasing integration of
inspection data management, inspection planning tools, and decision making methodologies.
1. INTRODUCTION
Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. The rules
typically specify or suggest some aspects of
- selection of the targets and methods as well as timing of inspections;
- extent of inspections and management of inspection results; and
- approach towards inspection results in terms of consequences.
The relatively stiff mandatory rules have generally best served their purpose in cases where
multiple failure mechanisms and relatively fast damage accumulation are not unreasonable
(eg for boilers) or where specific additional safety concerns apply (eg for pressure vessels of
nuclear plants). However, although the mandatory rules often reflect some industry
experience, they tend to be the same for all possible cases and therefore do not generally
provide optimal inspection policies which can be expected to depend very much on particular
cases and plants.
Instead, the non-mandatory approach of condition based maintenance is becoming the overwhelmingly dominant route towards improved overall economy of inspections and life management. This implies that, within certain limits, only plant and case specific data are used to define the inspection strategies for the specified components. Since this cannot override fundamental component life and safety related requirements, the background data should exist in a form that can be used for such decision making, and the decision making process should extend beyond the simple ways of the mandatory rules. At the same time, ever increasing amounts of on-line and off-line measurement data are available throughout the service life of a power plant. This creates both a need and an opportunity to extend the use of systematic methodologies to manage the process of inspection planning (Jovanovic et al., 1992).
For certain aspects of the process such tools already exist and are widely used, because they
have been available and useful even for the mandatory inspections. These tools include eg
project type planning and execution timing for the actual off-line work, as well as data
management and mapping of the inspection results. Until recently, however, many of these items have not been combined together. More importantly, the decisions related to
actual content and timing of non-mandatory inspections have not been subject to such
systematic tools or methodologies. This is about to change with the increasing integration of
inspection data management, inspection planning tools, and decision making methodologies.
Advanced inspection planning makes full use of such tools and appears to carry considerable
promise for avoiding unnecessary outages, inspections and repairs, and for focusing the
inspections towards controlled life management.
For the present purpose, such tools must make use of
- the rules to decide and quantify the order of merit between plants, if the analysis is extended to account for inspections involving several plants;
- the engineering factors that define the present and foreseeable future condition of components;
- the non-engineering factors that affect the final decision-making on inspection planning; and
- the decision-making methodologies that create the logical flow of inspection planning using the above rules and factors as framework.
Below, the examples are mainly confined to the hot pipework of fossil fired power plants,
looking at the creep dominated regime of operating conditions. Also, the main domain of
consideration is limited to cases where predictive rather than corrective maintenance is likely.
2. FACTORS
The inspection planning process may initially involve decisions on timing between several
plants according to the plant characteristics and availability needs. However, here the view is
basically limited to the narrower view of planning and timing of inspections for one plant.
Then, of the engineering factors to be considered, some are related to service loading, ie
- stresses, temperatures and time in service, and their distribution;
- number and character of startup cycles and other major thermomechanical cycles; and
- environmental effects on components (oxidation, corrosion etc).
The environmental factors are generally not very significant for the pipework except for
indirect use in oxide thickness based temperature/time assessment and oxide dating of cracks.
The other service loading factors can be initially tackled by using stress analysis (to indicate
locations of interest if nothing else) and life consumption assessment methods analogous to
TRD 508, ASME CC N-47 or equivalent approaches. Even such a nominal type of
assessment is not possible without knowledge of another major group of significant
engineering factors, related to the material and component response to the service loading. In
its elementary form the required information includes the nominal materials data for the
given materials type, the geometry of the piping and the boundary conditions for the support
system. For actual life assessment type of evaluation, much more information is needed, such
as
material, component and location characteristics in detail; and
existing service-induced, manufacturing and assembly-related damage, ie results from
recent and earlier inspections as well as details on how these were carried out; such
measurements of damage indications can also include displacements, strains, hardness
values etc.
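The life consumption assessment mentioned above reduces, in its simplest linear form, to summing creep and fatigue life fractions. The sketch below shows only this principle; the function name and the numbers in the example are illustrative, not values from TRD 508 or ASME CC N-47.

```python
# Minimal sketch of linear creep-fatigue life fraction summation, the
# principle behind TRD 508 / ASME CC N-47 type assessments. The operating
# history and allowable values below are illustrative, not code data.
def life_fraction(creep_history, cycle_history):
    """creep_history: list of (hours_at_condition, allowable_rupture_hours);
    cycle_history: list of (cycles_of_type, allowable_cycles)."""
    creep = sum(t / t_rupture for t, t_rupture in creep_history)
    fatigue = sum(n / n_allow for n, n_allow in cycle_history)
    return creep + fatigue  # exhaustion as the sum approaches 1.0

# Example: 80 000 h at one nominal condition plus 120 cold starts.
d = life_fraction([(80_000, 200_000)], [(120, 1_000)])  # 0.4 + 0.12 = 0.52
```

Even this nominal calculation already requires the materials data, geometry and boundary conditions listed above.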
These essential items are mostly not available without (possibly repeated) measurements or
inspections in plant, for which guidelines exist (eg VGB R509L,1984; VGB TR 507,1992;
Auerkari et al, 1992; Auerkari, 1995). A further set of important engineering factors is related
to the inspections or measurements themselves. Such factors involve
- component and location-specific accessibility for inspections;
- component and location-specific sensitivity and resolution of measurement; and
- quality, coverage and representativeness provided by the techniques that are used.
It is also important to realise that much of the available information on the engineering
factors and the state of the structures is patchy at best, and almost never as complete as eg the
life assessment theories would ideally require. However, there are often ways to overcome
such difficulties because
- not all factors are equal in value for actual inspection planning; for example, usually the latest measurements provide more important information on the component condition than earlier measurements or nominal (design) data;
- missing data can often be replaced by parallel information or from other experience; and
- inspection strategies can be designed to improve thin and patchy databases with minimum effort in additional inspections.
Classical examples of rules on inspection timing can be seen in the applications of replica inspections. In this case the typical extracted rules for planning of the next inspections are based both on the latest measurements in the inspections and on more general experience (Table 1).
Table 1. Example rules for timing of the next inspections, based on the most recent observed class of creep damage; t = time in service. The numbers in parentheses for the Neubauer/Nordtest case refer to recommendations after the service time exceeds 100 000 h.

Recommended maximum service time to next inspection:

| Damage class | Neubauer/Nordtest 010 | Linear fraction (Shammas direct) | Linear fraction (evened lower bound) |
|---|---|---|---|
| 1 (no cavitation) | No specified limits | 7.33 t | 4 t |
| 2 (isolated cavities) | 20 000 h (40 000 h) | 1.17 t | 1.5 t |
| 3 (orientated cavitation) | 15 000 h (30 000 h) | [illegible] | 2t/3 |
| 4 (microcracks) | 10 000 h (20 000 h) | 0.19 t | 0.25 t |
| 5 (macroscopic cracks) | 0 | | |
As is seen from Table 1, one may need to select between alternative rules. This is not merely a task to appease personal preferences but should reflect other available information or any hints from the service or maintenance experience that could weigh in favour of a certain approach. For example, if it is known that the location of current interest has not experienced any significant additional loading excursions during its service time, it may be appropriate to use the life fraction rules of damage (linear fractions in Table 1). These are supported by limited experimental evidence for a low-alloy steel (Shammas 1988; Tolksdorf & Kautz 1994). However, if we know that significant additional loading has occurred, eg because supports of the pipework have not functioned, then it is likely that the damage process has accelerated towards the end of the expired service time. In such cases it may be safer to use the Neubauer/Nordtest type of fixed time rules (Neubauer & Wedel, 1984; Nordtest NT NDT 010, 1991), because these are based on results from plant inspections, including a considerable number of cases with non-functioning pipework supports.
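The fixed-interval branch of such rules can be expressed as a small lookup. The sketch below is illustrative only (the function name is invented, the class 5 entry is read here as "immediate action", and class 1 as "no specified limit"); the doubling after 100 000 h of service follows the parenthesised values of the Neubauer/Nordtest column in Table 1.

```python
# Sketch of the Neubauer/Nordtest fixed-interval timing rule as a lookup.
# Intervals for damage classes 2-4 are doubled after 100 000 h of service.
NEUBAUER_HOURS = {2: 20_000, 3: 15_000, 4: 10_000}

def next_inspection_hours(damage_class, service_hours):
    """Maximum service time to the next inspection, in hours (None = no limit)."""
    if damage_class == 1:
        return None                      # no cavitation: no specified limit
    if damage_class == 5:
        return 0                         # macroscopic cracks: immediate action
    base = NEUBAUER_HOURS[damage_class]
    return 2 * base if service_hours > 100_000 else base
```

For instance, class 3 damage found after 150 000 h of service would allow at most 30 000 h to the next inspection.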
In the future, the new 9 to 12 % Cr steels and other newer steels will be used in an increasing proportion, and for many of these materials the experience-based evaluation rules are yet to be created. This also means that caution is needed in utilising the present rules for new situations.
It is seen that the number of potentially influential engineering factors can be quite large, and the available information variable in character. To assess all the necessary engineering factors separately, one at a time, would impose a serious burden on any person trying to deduce optimised inspection plans, and hence there is a fairly obvious opportunity for computerised decision-making tools to help in creating such plans (Jovanovic et al., 1992). Naturally, no such tool is any better than the rules on which it is working. In addition, it is necessary to consider non-engineering factors that are essential for proper inspection planning.
2.2 NON-ENGINEERING FACTORS
The underlying criteria for optimisation of inspections include the overall economy of plant operations, including the value of availability, the economy of inspections, the economy of the analysis itself, as well as safety and environmental requirements. Since these are not all hard-core engineering factors, the engineering factors alone do not determine optimal inspection plans. The non-engineering factors may be required as boundary conditions to the inspections, or enter directly as optimising variables, of which money consumption must be a major one.
Consequently, some of the most important non-engineering factors include eg the price of replacement power, availability requirements and the local price of any action needed, such as inspections, repairs or replacements. The background economic factors, such as the cost of potential consequences to be avoided, are at least partly measurable as insurance premiums, but there are local variations. Many of the variations can be seen in local, national or regional mandatory rules and traditions, at least in their extreme forms. Furthermore, in spite of their
engineering background, there are borderline and non-engineering features in the differences
between the local, company-related, national or regional traditions in design, inspections and
life management. For example, design rules based on ASME / BS or similar codes, and TRD
and equivalent codes produce somewhat different results because of some tradition-based
compromises that attempt to balance between engineering simplicity and rigorous analysis.
Some of the differences in tradition can be seen from Table 2.
Some additional inherent differences are revealed by looking at the failure statistics of these regions. In the case of the ASME/BS tradition, the literature citing failure cases refers fairly often to the problem of ligament cracking of headers. Such cracking, which is rare in the TRD regions, appears to be caused by the thermomechanical cycles of plant operation, combined with the relatively thick (compared also to the ligament width) header material in the Anglo-Saxon design. This is exacerbated by using low alloy materials such as 2.25Cr1Mo steel, which requires much thicker walls than higher alloyed steels like X20CrMoV12-1 with the same steam values.
The TRD type of tradition appears to have its specific Achilles' heel as well. In the literature of the past 20 years or so, it is nearly exclusively from the germanic design origin that problems with steam line bends are reported. This frequency is by no means high: it is at least two orders of magnitude less than for creep damage observed in major circumferential welds. However, since the creep damage in bends is more severe from the safety point of view, ie unlike damage in welds it can lead to catastrophic failures in the base metal, it has occasionally received much attention in inspection programmes.
Comparable traditions can be seen in local, national or regional mandatory, eg safety-related, rules.
Table 2. Typical regional features (until about 1990) in large coal-fired power plants.

| | ASME/BS etc. | TRD & equivalent |
|---|---|---|
| Typical steam temperature (°C) | 565 | 545 |
| Typical header material | 2.25Cr1Mo or ½CrMoV | 12 Cr |
| Specific pressure vessel authority | none | exists |
| [row label illegible] | 1000 | 600 |
| Rotor construction | discs on shaft | monobloc/welded |
Also, the non-engineering factors include the local experience and possible training needs of the employees or inspectors involved. For example, when very experienced operations and maintenance personnel retire or move to another company, it may even become optimal to extend some inspections or other measurements a little, to provide the new personnel with the additional feel of the plant condition that was perhaps lost with the experience.
Of the regimes where some mandatory boundary conditions will remain in the future, safety and environmental issues are probably the most important. In the recent past much of the safety issue has been tackled, so that more or less standardised engineering, organisational and regulatory solutions exist everywhere. Meanwhile the environmental issue has gained more and more weight, becoming a very significant cost issue. Therefore, any component
dysfunction that deteriorates the plant performance in these terms also becomes a cost item
and must be included in inspection planning somehow. However, for our example case of hot
pipework this is hardly an issue, whereas the safety aspects are.
4. SUMMARY
Inspection planning for pressurised power plant components is traditionally directly or
indirectly subject to mandatory and non-mandatory rules or guidelines in Europe. Until
recently, many of the decisions related to the actual content and timing of non-mandatory
inspections were not subject to systematic tools or methodologies. This is about to change
with the increasing integration of inspection data management, inspection planning tools, and
decision making.
The optimisation process for inspection planning in practice translates into balancing the
necessary information for such planning with the economy of obtaining it. The economic
aspects include the economy of inspections, analysis, and consequences of not meeting the
desired condition for the specified time. Such consequences are measured in cost of
replacement power, required repairs, insurance etc, but also in more fuzzy terms such as
possible impact on the environment, or company image in public relations or towards regulatory bodies. It is generally not optimal to extend the quest for information and analysis beyond the point where the additional information no longer pays off. This leaves a factor of uncertainty, which however is reduced by using a number of different quantities for indicating the condition and cost items. Such quantities can be made comparable even when they are initially of totally different types, such as crisp numbers, probability distributions, and fuzzy expert opinions.
A significant though apparently hidden cost factor within the inspection planning analysis can be the inconvenience of obtaining the required information. If the system that is used for such analysis is internally very "stiff", ie accepts only complete sets of extensive data on each location of interest, it is eventually likely to fall into disuse, because at least initially any data on the components are necessarily scarce. To minimise such problems, a good inspection planning optimiser would accept patchy initial data and cover the missing pieces with default values from nominal data or parallel experience.
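The idea of covering missing pieces with defaults can be sketched as a simple merge, with measured values taking precedence over nominal (design) data. The field names and nominal values here are purely illustrative.

```python
# Sketch of "accept patchy data": sparse measurements override nominal
# defaults, anything missing or unknown falls back to design data.
# Field names and values are illustrative only.
NOMINAL = {"wall_thickness_mm": 32.0, "temperature_C": 540.0, "hardness_HV": 170}

def component_data(measured):
    """Merge sparse measurements over nominal defaults (None = not measured)."""
    data = dict(NOMINAL)
    data.update({k: v for k, v in measured.items() if v is not None})
    return data

# Only hardness was actually measured at this location:
site = component_data({"hardness_HV": 152, "temperature_C": None})
```

The measured hardness replaces the nominal value, while the unmeasured fields retain their design defaults.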
Because of the time-dependent changes in plant, optimisation for inspection planning is necessarily a dynamic process. However, the future boundary conditions for inspection planning are also likely to change. For example, a new mix of materials and processes for power production will slowly emerge. For many of the newer materials the experience-based evaluation rules are vague or non-existent, and in this sense the optimisation is also internally a moving target.
5. REFERENCES
Auerkari, P., 1995. NDT for high temperature installations - a review. IIW Commission IX WG Creep, VTT Report VALB96, Espoo. 22 p.
Auerkari, P., Borggreen, K. & Salonen, J., 1992. Reference micrographs for evaluation of creep damage in replica inspections. NORDTEST NT Technical Report 170. 41 p.
EBSOM - European Benchmark Study on Maintenance. EUREKA Project EU.724. MAINE / EBSOM, Kunnossapitoyhdistys (FI), Föreningen Underhållsteknik (S), Norsk Forening for Vedlikehold & Den Danske Vedligeholdsforening (DK), December 1993.
Jovanovic, A., Maile, K., Friemann, M., Auerkari, P., Vrhovac, M., Rantala, J., Gehl, S. & Viswanathan, R., 1992. Knowledge-based system aided evaluation of replica results in terms of remaining life assessment of power plant components. Paper 48, 18th MPA Seminar, Stuttgart. 24 p.
Neubauer, B. & Wedel, U., 1984. NDT: Replication avoids unnecessary replacement of power plant components. Power Engineering, May, p. 44.
NORDTEST NT NDT 010, 1991. Remanent lifetime assessment of high temperature components in power plants by means of replica inspection. 6 p. + app.
Shammas, M.S., 1988. Metallographic methods for predicting the remanent life of ferritic coarse grained weld heat affected zones subjected to creep cavitation. Int. Conf. on Life Assessment and Extension, Den Haag. Vol. III, p. 238-244.
Tolksdorf, E. & Hald, J., 1994. Experimental methods for determination of the creep and fatigue damage conditions of power plant components. Int. VGB Conf. on Measures for Assessment and Extension of the Residual Lifetime of Fossil Fired Power Plants, Moscow, May 16-21. 11 p.
Tolksdorf, E. & Kautz, H.R., 1994. Assessment of theoretical models for determination of remaining life. Int. VGB Conf. on Measures for Assessment and Extension of the Residual Lifetime of Fossil Fired Power Plants, Moscow, May 16-21. 17 p.
VGB-TW 507, 1992. Guideline for the Assessment of Microstructure and Damage Development of Creep Exposed Materials for Pipes and Boiler Components. VGB, Essen. 83 p.
VGB-Richtlinie R 509 L, 1984. Wiederkehrende Prüfung an Rohrleitungsanlagen in fossilbefeuerten Wärmekraftwerken. VGB, Essen. 28 p.
1. Introduction
The power plant components operating at high temperatures are important targets of in-service inspections and measurements. Apart from being large and expensive and subjected to complex mechanical and thermal (creep-fatigue) loading in service, these components can limit the availability of the whole plant. Due to ageing, these components need additional monitoring, repairs and replacements.
Such components typically include
- boiler tubing, superheaters and reheaters;
- headers, valves, T- and Y-pieces and the rest of the hot pipelines;
- hot parts of the steam and gas turbines.
The safety aspects of design impose that the nominal (design) life, e.g. 200,000 service hours and 1000 cold starts, is considerably shorter than the true average life of these components at the nominal (design) service loading level. Reasons for this include e.g. the use of lower bound values for material strength in design and upper bound dimensions in manufacturing. The extent (or occurrence) of excess life potential is not certain. Furthermore, overloading, overheating or other disturbances not accounted for in design can, on the other hand, considerably shorten component life.
Whenever feasible, extension of life or inspection periods is to be recommended, not only because of the direct cost impact but also because any unnecessary maintenance adds a significant additional risk of damage and failures. For example, excessive residual stresses, embrittlement or cracking after unnecessary repair welding and local heat treatments will occur at a non-zero (and, in the case of susceptible materials in stiff structures, high) probability. Nevertheless, the timing of maintenance is always an optimisation problem, since too lax maintenance or too long maintenance periods will also lead to costly unexpected shutdowns.
The amount of relevant background information, the extent of data on the service, inspection and maintenance history, the number of locations of potential interest in a large system, as well as the need for relatively long term systematics and expert experience, all support the view that much of the work would ideally be handled by an application-oriented decision support system. Following the initial concept [7], such a system for computer-aided planning of forthcoming inspections of high temperature piping in fossil-fired power plants has been developed in the European Union research project BE5935 [6]. The initial concept for the part regarding the interpretation of inspection results has been given by Auerkari [1], in connection with the recent guidelines of Nordtest [2] and VGB [11].
Inputs to the system are the results of previous inspections (if available), data about the piping component, and strategic constraints resulting from the importance of the component, the desired level of confidence, etc. Based on the integration of several elements the system produces a final output in the form of a "component vs. year" matrix showing
- what inspection technique (replica, ultrasonic, etc.) and
- to what extent (e.g. what percentage of a welded joint is examined)
it should be applied at a given location/component during the next inspection (overhaul).
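Such a "component vs. year" matrix can be pictured as a simple mapping from (component, year) to (technique, extent). The entries below are illustrative, not actual system output.

```python
# Sketch of the "component vs. year" output matrix: for each component and
# overhaul year, the planned inspection technique and its extent.
# Component names, techniques and extents below are illustrative.
plan = {}  # (component, year) -> (technique, extent)

def schedule(component, year, technique, extent):
    """Record one cell of the inspection plan matrix."""
    plan[(component, year)] = (technique, extent)

schedule("Main steam pipe weld W12", 1998, "replica", "100% of weld")
schedule("Header H1", 1999, "ultrasonic", "nozzle zone")
```

Printing the mapping grouped by year would reproduce the matrix form described in the text.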
[Table residue (caption lost): inspection locations, methods and extent for piping components. Recoverable entries include: endoscopy, MT/PT + RT, UT; check of operation (UT of internal surfaces or endoscopy); component internals: check of operation, MT/PT + RT (welds), check of inside wear/cracks; MT/PT ca. 100 mm wide of welds, RT according to findings; bends: MT/PT + RT, minimum wall (UT) + ovality; MT/PT + RT, UT at welds, wall thickness and strain; MT/PT of welds 100 mm wide, RT according to indications, UT for body; deaeration/dewatering nozzles (spot test or by experience); MT/PT + UT where water may be trapped; MT/PT + RT at welds; MT/PT externally.]
On each of these two levels, the critical decision node is placed where a multi-criteria decision making (ranking) problem with crisp, fuzzy and/or random inputs has to be solved. Crisp inputs are certain crisp numbers (e.g. the number of operating hours). Fuzzy inputs are e.g. those involving linguistic variables (e.g. "high risk") represented in terms of membership functions. Random inputs are e.g. stochastic input variables (e.g. temperature) represented in terms of probability distributions.
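One simple way to make such mixed inputs comparable (an illustrative sketch only, not the actual COLOR/ISTRA method) is to reduce each input to a representative scalar: a crisp number passes through, a sampled random variable contributes its mean, and a discretised fuzzy membership function its centroid.

```python
# Illustrative reduction of crisp, random and fuzzy inputs to a single
# representative scalar, so a decision node can compare them. Not the
# actual COLOR/ISTRA algorithm; encodings are assumptions.
def representative(value):
    if isinstance(value, (int, float)):       # crisp number
        return float(value)
    if isinstance(value, list):               # random: samples -> mean
        return sum(value) / len(value)
    if isinstance(value, dict):               # fuzzy: {x: membership} -> centroid
        total = sum(value.values())
        return sum(x * m for x, m in value.items()) / total
    raise TypeError("unsupported input type")

r1 = representative(100_000)                      # crisp operating hours
r2 = representative([530.0, 545.0, 550.0])        # sampled temperature
r3 = representative({0.6: 0.2, 0.8: 1.0, 1.0: 0.4})  # "high risk" membership
```

The centroid in the last case is (0.6*0.2 + 0.8*1.0 + 1.0*0.4) / (0.2 + 1.0 + 0.4) = 0.825.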
The goal of the complete system represented in Figure 1 is to provide a new inspection plan for all selected components/locations.
[Figure 1 (flowchart): On the all-component level, the user makes a preliminary selection of inspection items (systems, components, locations); a decision node then ranks the inspection items according to ranking criteria (importance, previous results, cost, safety, environment, past history), producing a component/location ranking (COLOR) list (e.g. rank 1: header #2873; rank 2: T-piece #3987; rank 3: Y-piece). On the one-component level, an inspection item is selected according to its position in the ranking list, and a second decision node determines the inspection strategy for that item (performing of inspections; determination of next inspection time and scope), with the inspection results fed back.]
In order to cope with all of them, the developed decision support system consists of the
following elements:
1. a flowcharting part enabling the inspection/evaluation procedure to be modelled graphically;
2. a knowledge-based system part controlling the user's movement through the procedure;
3. multi-criteria decision analysis modules (COLOR, ISTRA) optimising the selection of possible alternatives in each decision node;
4. a hypermedia part providing the explanation facility;
5. a numerical calculations part providing additional input (e.g. calculation of consumed life according to standards, etc.).
4.2 Intelligent flowcharting module
The modelling of the problem domain is done with an "intelligent" flowcharting program. The
"intelligence" of the program refers to its interaction with a knowledge-based system
controlling all movements in the flowchart. In that way, the resulting integrated module acts as
a user-advisor, assisting the user in approaching the problem in a recommended way and allowing
him not only to obtain information and recommended actions from the other modules (MCDA,
Hypermedia) but also to input his personal thinking and/or experience where uncertainty
exists. Finally, using the system the user avoids overlooking significant aspects of the
procedure.
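The interplay between the flowchart and the knowledge-based control layer can be illustrated with a minimal sketch. The class and box names below are assumptions for illustration, not taken from the actual system: each activity box carries pre- and post-conditions, and the controlling layer only admits the user to a box whose preconditions are satisfied by the facts gathered so far.

```python
# Minimal sketch (assumed structure, not the actual system's code) of
# knowledge-based control of movement through a flowchart: each activity
# box has pre- and post-conditions, and a box may only be entered once
# its preconditions hold.

class Box:
    def __init__(self, name, pre=(), post=()):
        self.name, self.pre, self.post = name, pre, post

    def enabled(self, facts):
        # precondition check: every required fact must already be known
        return all(p in facts for p in self.pre)

    def run(self, facts):
        # in the real system the user performs the activity; here we
        # simply record the box's postconditions as new facts
        return facts | set(self.post)

chart = [
    Box("select inspection items", post=["items selected"]),
    Box("rank items (COLOR)", pre=["items selected"], post=["ranking done"]),
    Box("choose strategy (ISTRA)", pre=["ranking done"], post=["strategy chosen"]),
]

facts = set()
for box in chart:
    assert box.enabled(facts), f"precondition not met for {box.name}"
    facts = box.run(facts)
```

In this scheme the user cannot, for example, jump to the strategy selection before the ranking step has produced its result, which is exactly the "guided movement" behaviour described above.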
4.3 Multi criteria decision analysis (MCDA) modules
The two modules, namely COLOR (COmponent/LOcation Ranking) and ISTRA (Inspection
STRategy Advisor), developed for the analysis of the two decision nodes shown in Fig. 1, are
both application-oriented and will be described later on in detail. However, the underlying
methodology is very general and applicable also in other fields where the modelling of
uncertainties is mainly based on experience.
The applied methodology [8] is an extension of Saaty's [9], as amended by Buckley [3,4]
in order to incorporate fuzzy comparison ratios. In this way, it is much easier to model
uncertainties regarding comparisons of the criteria on which the decision has to be based, as well
as the ranking of alternatives with respect to each criterion.
Both modules can also handle crisp and stochastic inputs. In this way, they also model
situations where no uncertainty, or only stochastic uncertainty, exists.
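The Buckley-style weight derivation can be sketched as follows, assuming triangular fuzzy numbers (l, m, u) for the pairwise comparison ratios; the function names and the two-criterion example are illustrative and are not taken from COLOR or ISTRA.

```python
# Sketch of Buckley-style fuzzy weight derivation: row geometric means of
# a matrix of triangular fuzzy comparison ratios, normalised and then
# defuzzified by the centroid. Illustrative only.

def tfn_mul(a, b):
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_pow(a, p):
    return (a[0] ** p, a[1] ** p, a[2] ** p)

def fuzzy_weights(matrix):
    n = len(matrix)
    gmeans = []
    for row in matrix:                      # geometric mean of each row
        prod = (1.0, 1.0, 1.0)
        for a in row:
            prod = tfn_mul(prod, a)
        gmeans.append(tfn_pow(prod, 1.0 / n))
    total = (sum(g[0] for g in gmeans),
             sum(g[1] for g in gmeans),
             sum(g[2] for g in gmeans))
    # fuzzy division (l/Σu, m/Σm, u/Σl) preserves the fuzzy ordering
    return [(g[0] / total[2], g[1] / total[1], g[2] / total[0]) for g in gmeans]

def centroid(a):
    return sum(a) / 3.0

# two criteria; criterion 1 judged "about twice as important" as criterion 2
m = [[(1, 1, 1), (1.5, 2.0, 2.5)],
     [(1 / 2.5, 1 / 2.0, 1 / 1.5), (1, 1, 1)]]
w = [centroid(x) for x in fuzzy_weights(m)]
```

The fuzzy comparison ratio (1.5, 2.0, 2.5) captures the linguistic judgement "about twice as important" together with its uncertainty, which is the point of the extension over crisp pairwise comparisons.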
4.4 Hypermedia and numerical calculations modules
Both modules are integrated in the decision support system in order to provide related
background information in each step of the overall process. The related information, consisting
mainly of experience-based recommendations and/or guidelines, may be retrieved
automatically. Furthermore, numerical calculations based on them can provide additional
input.
5. COLOR MODULE
5.1 Alternatives
The alternatives for the ranking procedure are the different components of a power plant. For
each type of component there might be different locations. In general the list of the alternatives
may look like the following one:
5.2 Criteria
To model the multi criteria decision of component/location ranking the following criteria were
defined:
1. Fundamental importance of the component for the present plant (seriousness of failure/downtime)
2. Results of previous inspections
3. Cost of replacement of the component
4. Safety aspects (including regulatory safety aspects)
5. Environmental aspects (including regulatory environmental aspects)
6. Qualitative past service history
7. Quantified past service history
8. Expected change in the operating conditions used for the analysis so far
9. Alternative supply patterns (i.e. relative importance of the component in comparison with existing alternatives).
Table 2 gives the types of input values and an example of the relative weights of the different
criteria calculated by a pairwise comparison.
Table 2: Types of input and weights for criteria of component/location ranking

Name of criterion                        Input type      Relative weight
Fundamental importance of component      Fuzzy           0.122 (70)
Results of previous inspections          Crisp           0.175 (100)
Cost of replacement                      Crisp           0.122 (70)
Safety aspects                           Fuzzy           0.070 (40)
Environmental aspects                    Fuzzy           0.070 (40)
Qualitative past service history         Fuzzy           0.105 (60)
Quantified past service history          Crisp, Stoch.   0.053 (30)
Expected change in the op. conditions    Fuzzy           0.140 (80)
Alternative supply patterns              Fuzzy           0.140 (80)
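To rounding, the relative weights in Table 2 are the bracketed scores normalised by their sum (570), which can be checked directly:

```python
# The relative weights of Table 2 correspond (to rounding) to the
# bracketed scores divided by their total.
scores = {
    "Fundamental importance": 70,
    "Previous inspections":  100,
    "Cost of replacement":    70,
    "Safety aspects":         40,
    "Environmental aspects":  40,
    "Qualitative history":    60,
    "Quantified history":     30,
    "Op. condition change":   80,
    "Alternative supply":     80,
}
total = sum(scores.values())                    # 570
weights = {k: v / total for k, v in scores.items()}
```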
6. ISTRA MODULE
6.1 Alternatives
The selection of the inspection strategy is done after the selection of the inspection locations.
For the selected locations on the different components there are five types of inspection
strategies or patterns possible:
"zero-attention" program
"low-profile" program
"standard" program
"extended" program
"extensive" program
In addition, the detailed description of the methods and extents of inspection is given in Table 3.

Table 3: Overview of the extent, costs and reliability of the alternatives

Name of inspection strategy     Inspection time, max.
"zero-attention" program        0 days
"low-profile" program           3 days
"standard" program              5 days
"extended" program              7 days
"extensive" program             2 weeks
6.2 Criteria
To model the multi-criteria decision on inspection strategies the following criteria were
defined:
1. Inspection and other directly related maintenance cost (e.g. preparation cost)
2. Additional difficulties due to access
3. Implicit risk due to safety aspects
4. Component priority (result from COLOR)
Table 4 gives the types of input values. The relative weights of the different criteria should be
calculated by an expert.

Table 4: Types of input for criteria of inspection strategy selection

Name of criterion              Type of input values   Optimisation goal
Inspection and other related   Fuzzy                  Minimise (for higher level inspection
maintenance cost                                      patterns the costs increase)
Additional difficulties        Fuzzy                  Minimise (more difficult access to the
due to access                                         inspection region forces a lower level
                                                      inspection pattern)
Implicit risk due to           Fuzzy                  Minimise (higher safety needs force
safety aspects                                        higher level inspection patterns)
Component priority             Crisp                  Maximise (higher component priority
(result from COLOR)                                   forces higher level inspection patterns)
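The optimisation goals of Table 4 can be folded into a single ranking score. The sketch below uses an ordinary weighted sum with invented weights and strategy values rather than the fuzzy machinery of ISTRA, flipping the "minimise" criteria so that all contributions point the same way:

```python
# Illustrative weighted scoring over the Table 4 criteria. Weights and
# strategy values are invented for the example; a "min" criterion is
# inverted so that a high score always means "prefer this strategy".

CRITERIA = [
    # (name, weight, goal)
    ("inspection cost",      0.3, "min"),
    ("access difficulties",  0.2, "min"),
    ("implicit safety risk", 0.2, "min"),
    ("component priority",   0.3, "max"),
]

def score(strategy_values):
    """strategy_values: dict criterion -> value normalised to [0, 1]."""
    s = 0.0
    for name, w, goal in CRITERIA:
        v = strategy_values[name]
        s += w * (v if goal == "max" else 1.0 - v)
    return s

low_profile = {"inspection cost": 0.2, "access difficulties": 0.3,
               "implicit safety risk": 0.7, "component priority": 0.8}
extensive   = {"inspection cost": 0.9, "access difficulties": 0.6,
               "implicit safety risk": 0.2, "component priority": 0.9}
```

With these illustrative numbers the cheaper "low-profile" program outranks the "extensive" one; changing the weights (as an expert would, per the text above) can reverse the ordering.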
7. PRACTICAL APPLICATIONS
7.1 General
So far, within the BE 5935 project, the methodology for inspection planning has been deployed
in two power plants, namely in GKM, Germany and in IVO, Finland (Table 5).

Table 5: Overview of industrial problems considered for preliminary and detailed analysis in
Task 5.1.3 (Decision Making for Inspection Planning)

Partner        Preliminary analysis   Detailed analysis
MPA/IVO/VTT    Yes                    Yes
MPA/GKM        partly                 No
MPA/GKM        Yes                    Yes
They illustrate how the methodology developed in the previous tasks of BE 5935 ("the BE
5935 FBMCDM methodology") can be practically applied on an industrial level. In both
applications it was necessary to provide a practical and usable engineering answer to the
following main questions:
a) WHAT (i.e. which components/locations and with which priorities) to inspect,
b) HOW (i.e. using which inspection methods and in which scope) to inspect, and
c) WHEN (i.e. after how many operating hours) to inspect.
The methodology developed in BE 5935 makes it possible to substantially improve current
engineering practice in answering each of these three questions.
7.2 IVO Example
[Figure: schematic of the analysed IVO steam line, with the numbered components and locations (welds, bends, T-pieces) considered in the ranking.]
The material is 13CrMo44 with a nominal temperature of 545 °C, and the situation is given in
1993, after 110,000 service hours.
To support the complex decision making process a modelling tool, namely ExpertChart, was
developed and integrated into the decision support system. ExpertChart is used to:
1. Model the problem domain
2. Lead the user through the problem
3. Provide background information
4. Perform analysis and calculations
The problem domain is modelled through flowcharts. Activities are represented by boxes and
their interconnections by lines and arrows. Each activity may be detailed on a sublevel, which
can be a complete chart of its own (activated by the small checked rectangle in the upper left
corner of the box). Traditional if-boxes are translated into pre- and post-conditions. These
conditions are also needed for leading the user through the flowchart. The flowchart modelling
the inspection planning for the IVO steam line is shown in the following figure:
[Figure: ExpertChart flowchart "Inspection Scheduling" for the IVO steam line, spanning the "All Component Level" (decision node "A": COLOR ranking; further components in the ranking list; end of analysis) and the "One Component Level" (decision node "B", box 5.5: run ISTRA in order to choose the inspection strategy). Condition branches test, among others, for significant condition changes (box 5.2: recalculation necessary?), strain (e < 0.1% or unknown vs. e > 0.1%), unexpected macrostructure, hardness or oxidation (box 5.9.2), temperature deviations > 10 °C, Nordtest cavity classes 2.1 to 3.3, and micro/macrocracks without cavities. The recommendations range from repair/removal with re-inspection of any repairs within 10 kh, through re-inspection within 13 to 20 kh, to box 5.10: show the recommended time for the next inspection and add the results to the inspection plan (matrix).]
The inspection planning procedure for this example follows the steps below:
Step 1
[Box(l)]
Is this the first inspection for this system? (Yes/No). User selects
"No" since there are previous results from 1986-87.
Step 2   [Box (2)]
Step 3   [Box (3.1)]
Step 4   [Box (3.2)]
Step 5   [Box (4)]
Step 6   [Box (5.1)]
Step 7   [Box (5.2)]
Step 8   [Box (5.3)]
Step 9   [Box (5.4)]
Step 10  [Box (5.5)]
Step 11  [Box (5.5)]
Step 12  [Box (5.6)]
Step 13  [Box (5.6)]
Step 14  [Box (5.8.1)]
Step 15  [Box (5.8.5)]
Step 16  [Box (5.9.1)]
Step 17  [Box (5.9.2)]
Step 18  [Box (5.9.6)]
Step 19  [Box (5.10)]
Step 20  [Box (6)]
Step 21  [Box (7)]
[Table: input values for the ISTRA criteria per inspection strategy. Cost increases from "none" for the "zero-attention" program through values of 0.1 to 0.5 towards the "extensive" program; the entries for additional difficulties due to accessibility range from easy to difficult, implicit risk due to safety aspects from high to low, and component priority (importance) from low to very high.]
Table 7: Final output of the system (recommendation 1993 for the example case)

No.  Comp.  Component type                          Next inspection                    Method
8    #507   Steam mixer                             Monitoring + internal inspection
                                                    next year
     #813   Butt weld                               within next 15,000 h               MT/PT, RT, UT at welds
     #803   T-piece weld                            within next 20,000 h
     #815   Terminal weld to reduction valve body   within next 20,000 h
Table 8: Input data for the component/location ranking (IVO example)

No.  Comp.  Type of component          Cost of  Typical   Situation acc.  Safety    Qualitative   Quantified past   Future      Alternative
                                       repl.    downtime  to previous     priority  past service  service history   service     supply
                                                cost [ECU] inspections              history       [equiv. hours]    conditions  availability
1    #802   T-piece weld               Medium   25k       Medium          Low       Mild          110000            No changes  Average
2    #803   T-piece weld               Medium   25k       Medium          Low       Average       110000            No changes  Average
3    #202                              High     10k       High            Low       Mild          110000            No changes  Average
4    #204   Pipe bend, horizontal      High     10k       High            Low       Mild          110000            No changes  Average
5    #205                              High     10k       High            Low       Severe        110000            No changes  Average
6    #813   Straight pipe / bend weld  Low      8k        Low             Low       Severe        110000            No changes  Average
7    #815   Reduction valve + welds    Medium   45k       Medium          Low       Mild          110000            No changes  Relatively low
8    #507   Mixer                      Medium   30k       Medium          Low       Average       110000            No changes  Relatively low
9    #801   Straight pipe weld         Low      8k        Medium          Low       Average       110000            No changes  Average
10   #301   T-piece nozzle + weld      Medium   25k       Medium          Low       Average       110000            No changes  Average
11   #201   Horizontal bend            High     10k       High            Low       Average       110000            No changes  Average
12   #804   Straight pipe / bend weld  Low      8k        Medium          Low       Average       110000            No changes  Average
13   #805   Straight pipe weld         Low      8k        Medium          Low       Average       110000            No changes  Average
14   #806   Straight pipe / bend weld  Low      8k        Medium          Low       Average       110000            No changes  Average
7.3 GKM Example
In this application the whole piping from the boiler to the turbine inlet is considered. The
material is 10 CrMo 9 10 with a nominal temperature of 530 °C and a nominal pressure of 250 bar,
and the situation is given after 200,000 service hours.
Criteria and input data for the GKM components: the criteria, the kind of input data (crisp/fuzzy) and the relative weights (70, 100, 70, 40, 40, 60, 30, 80 and 80) correspond to those of Table 2; the optimisation goal is "max" for all criteria.

No.  Component                              Downtime  Cost of repl.  Safety    Environ.  Qualitative  Quantified     Future       Alternative
                                            cost      [1000 ECU]     priority  priority  history      history [h]    conditions   supply pattern
1    200 - Montagenaht am Kesselaustritt    medium    15             high      high      average      111000         no changes   relatively low
     (site weld at boiler outlet)
2    101 - Montagenaht (site weld)          low       15             medium    high      average      200000         no changes   relatively low
3    B1 - Bogen (bend)                      medium    75             high      high      average      200000         no changes   relatively low
4    4 - Werkstattnaht (shop weld)          low       15             medium    high      average      200000         no changes   relatively low
5    201 - Montagenaht (site weld)          low       15             medium    high      average      200000         no changes   relatively low
6    B2 - Bogen (bend)                      medium    75             high      high      average      170000         no changes   relatively low
7    202 - Montagenaht (site weld)          low       15             medium    high      average      200000         no changes   relatively low
8    B3 - Bogen (bend)                      medium    75             high      high      average      200000         no changes   relatively low
9    109 - Werkstattnaht (shop weld)        low       15             medium    high      average      200000         no changes   relatively low
10   54 - Bogen (bend)                      medium    75             high      high      average      200000         no changes   relatively low
11   114 - Montagenaht (site weld)          low       15             medium    high      average      200000         no changes   relatively low
12   203 - Montagenaht am Kesselaustritt    medium    15             medium    high      average                     no changes   relatively low
     (site weld at boiler outlet)
[Screenshot: Microsoft Access form of the component database used for inspection planning, showing the data record of one of the GKM components.]
Figure 10: Graphic illustration of the result of the ranking tool COLOR
As already shown in the previous example, the next step when using the decision support system is to
establish the appropriate inspection strategy using the ISTRA module, and to perform the
recommended tests. The developed database interacts with the whole decision support
system, enabling feedback of all the information gathered with the various techniques
applied.
8. Conclusions
The applications of the decision support system in the IVO and GKM power plants confirmed
the capability of the system to efficiently use the experience of local domain experts and the
service history to quickly make a first draft of the inspection plan. The overall system
represents a helpful tool for the maintenance of power plant structures.
In the future, the system will be coupled with a NDT database and used primarily for
preliminary screening and "drafting" of the annual inspection plans. Experts' revision of these
drafts will remain a mandatory part of the overall procedure.
9. Acknowledgements
Some of the work presented in the paper has been accomplished within the European Union
research projects SPRINT SP249 and BRITE-EURAM BE 5935. In addition, some of the
results have been achieved under the Brite-Euram Fellowship Contract No. BRE-CT93-3039
(fellowship for the stay and research of Mr. S. Psomas at MPA Stuttgart). This support is
gratefully acknowledged here.
10. References
1. Auerkari, P., 1993, "Guidelines for Inspection Criteria of Hot Pipework", SPRINT SP249 Technical Report, VTT Metals Laboratory, Espoo
2. Auerkari, P., Borggreen, K., Salonen, J., 1992, "Reference Micrographs for Evaluation of Creep Damage in Replica Inspections", NT Technical Report 170, Nordtest, Espoo
3. Buckley, J. J., 1985a, "Ranking Alternatives Using Fuzzy Numbers", Fuzzy Sets and Systems 15, North-Holland, pp. 21-31
4. Buckley, J. J., 1985b, "Fuzzy Hierarchical Analysis", Fuzzy Sets and Systems 17, North-Holland, pp. 233-247
5. Jovanovic, A., Psomas, S., Ellingsen, H. P., Kautz, H. R., McNiven, U., Rönnberg, J., Auerkari, P., 1995, "Decision support system for planning of inspections in power plants. Part II: Application in GKM and IVO power plants", to be presented at the Baltica Conference, June 8, 1995
6. Jovanovic, A., Psomas, S., Schwarzkopf, Ch., Auerkari, P., Bath, U., Weber, R., Kautz, H. R., Verelst, L., 1994, "Decision Making for Power Plant Component Inspection Scheduling", Report on Task 4.2/4.3 of BE 5935 Project RESTRUCT (Decision-Making for Requalification of Structures), Document TEC-T401, MPA Stuttgart
7. Jovanovic, A., Zimmermann, H.-J., 1990, "Decision Making and Uncertainty in Life Assessment and Management of Power Plant Components", Document BE3088/89, MPA Stuttgart
8. Lieven, Weber, R., Bath, U., Jovanovic, A., Psomas, S., De Witte, M., Verelst, L., 1993, "Multi-Criteria Decision Making Modelling Technology", Report on Task 3.1 of BE 5935 Project RESTRUCT (Decision-Making for Requalification of Structures), Document TEC-T3101, MPA Stuttgart
9. Saaty, R. W., 1987, "The Analytic Hierarchy Process: What It Is and How It Is Used", Math. Modelling, Vol. 9, No. 3-5, pp. 161-176
10. VGB-R 509 L, 1984, "Wiederkehrende Prüfungen an Rohrleitungsanlagen in fossilbefeuerten Wärmekraftwerken" (Recurrent inspections of piping systems in fossil-fired thermal power plants), VGB, Essen
11. VGB-TW 507e, 1992, "Guideline for the Assessment of Microstructure and Damage Development of Creep Exposed Materials for Pipes and Boiler Components", VGB, Essen
Introduction
There is a growing awareness among power plant operators of the benefits to be gained by applying
on-line plant condition monitoring techniques. Market forces are now demanding that plant
originally designed for base-load operation operate more flexibly. Experience has shown that even plant that
has been designed for cyclic operation can fail by creep-fatigue mechanisms induced by operational
transients not allowed for in design.
Operation is also becoming more demanding, with increased run times between outages and reduced maintenance schedules. All these
factors make once-off investment in all forms of condition monitoring increasingly attractive.
This paper describes a system designed to monitor thick section, high temperature components
on-line for creep-fatigue degradation.
SG/BC61 ADMI/GTJ/doc-572
The system was originally designed to address the problem of ligament cracking of steam headers prevalent in Europe
and the US. It is, however, equally applicable to other power plant components such as main steam
pipework, chests and casings and, for process plant, thick section reactor vessels operating in the
creep range. There is also considerable interest in applying it to Heat Recovery Steam Generators
(HRSGs), which are notoriously susceptible to creep-fatigue failures.
The basic function of PLUS is to convert signals, obtained from sensors (usually just thermocouples
and pressure transducers) strategically connected to critical locations on plant, into local stress and
strain values in real-time.
Using built-in advanced algorithms these values are converted and summed to give a realistic
measure of damage accumulation in real-time, or at convenient periodic intervals. It therefore serves
as both a life usage monitor and an operations adviser (or alarm system), and thereby may be utilised as a
damage controller and a maintenance planning tool. It may also be used as a simulator to assess the
likely effects of changes in the operating mode.
PLUS is a fully integrated on-line system, with real time data monitoring and processing, providing
periodic on-line analysis with facilities for integration of off-line inspection/interrogation data. It is fully
customised for each application and enables plant specific geometry, design and history to be
accommodated together with the specific local operational behaviour.
The data capture module connects the PLUS system with the site sensor data collection system. Its
precise structure is therefore dependent upon the nature of the existing facilities.
The data validation module interrogates the sensor signals, applies a number of consistency
checks and marks data deemed to be faulty. This module is also responsible for filing the data
in time order, such that any file can be retrieved by means of a time and date identity.
The database module holds relevant aspects of the component geometry, the component
specific stress functions and selected inspection data pertinent to the assessment.
The display module allows the operator to select a location on a component and to display
the stress at that location in real-time.
The life analyses modules in PLUS are determined by the components and degradation
mechanisms being monitored. The analysis modules are periodically activated by the operator
to assess the consumed life based on newly available on-line plant data and the last stored
life estimate.
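The validation and time-ordered filing behaviour described above can be sketched as follows; the class name, the range limits and the single range check are illustrative assumptions (the real module applies a number of consistency checks):

```python
# Illustrative sketch of a data validation/filing store: readings are
# flagged if they fail a plausibility check, and records are kept in
# time order so any record can be retrieved by its time/date identity.
import bisect
from datetime import datetime

class ValidatedStore:
    def __init__(self, lo=-50.0, hi=700.0):
        self.lo, self.hi = lo, hi          # assumed plausible range [°C]
        self.times, self.records = [], []

    def add(self, t, value):
        ok = self.lo <= value <= self.hi   # one of several possible checks
        i = bisect.bisect(self.times, t)   # insertion keeps time order
        self.times.insert(i, t)
        self.records.insert(i, (value, ok))

    def at(self, t):
        """Retrieve a record by its time and date identity."""
        return self.records[self.times.index(t)]

store = ValidatedStore()
t1, t2, t3 = (datetime(1996, 1, 1, h) for h in (0, 1, 2))
store.add(t2, 541.0)     # out-of-order arrival is filed correctly
store.add(t1, 539.5)
store.add(t3, 1200.0)    # implausible reading is marked faulty
```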
Ligament cracking develops from the inside crotch corner of the header and tube intersection and
propagates along both the header and the tube internal walls. The classical form of ligament cracking
occurs where the adjacent tube penetrations are closely spaced such that crotch corner cracks
propagate across the ligament, shown in Fig. 2. This classic form is particularly severe since it can
result in catastrophic failure. Ligament cracking along the circumferential direction may result in fast
fracture with the header breaking in two. Where adjacent element rows are closely aligned along the
length of the header, ligament cracks may also develop in the axial direction where the crack surfaces
are normal to the dominant hoop pressure stress. This again raises the possibility of catastrophic
failure. Localised cracking with a star burst distribution, illustrated in Fig. 3, where the cracks radiate
out from the perimeter of the tube hole is associated with isolated or more widely spaced penetrations.
Starburst cracking is unlikely to cause catastrophic failure but may cause steam leaks.
Ligament cracks in headers removed from service (both in the US and Europe) have been found, without
exception, to be consistent with high strain thermal fatigue generated by severe thermal transients.
That is, the cracks are straight, transgranular, gaping and oxide-filled, with no associated creep damage.
All studies support the pattern of development in which multiple cracks first initiate from the inside
corners of the tube penetrations, but growth is dominated by the primary cracks which propagate
across the ligament and towards the outer wall. European experience indicates that crack initiation
occurs relatively early (10,000 to 20,000 hours of operation), with a relatively long propagation period
generally exceeding 50,000 hours. This is contrary to the US experience where crack initiation occurs
much later, i.e. over 100,000 hours, and crack propagation is generally much more rapid. The
difference in behaviour may be attributed to differing operating practices. Oxide notching has been
proposed as a crack initiation mechanism; however, this is not supported by European investigations.
Although no creep damage is observed for headers operating in the creep regime, creep relaxation of
high transient stresses contributes to the crack initiation and propagation mechanisms. Assessment
methods are therefore based on creep-fatigue analysis.
3.1
Analysis of the available data on the incidence of ligament cracking in Europe (Table 1) and US (Table
2) reveals that the European and US experience exhibits common factors as summarised below:
Header Type: Superheater outlet headers were found to be susceptible to ligament cracking.
The US and European experience indicates that secondary and final superheater outlet headers
were the most susceptible. There is also European experience, but little US experience, of
cracking in primary and interstage superheaters.
Header Geometry: The susceptibility to cracking increases with decreasing ligament width and
with increasing wall thickness, with all headers exceeding a certain thickness exhibiting
cracking.
Boiler Maker and Unit Size: The European experience indicates that all makes and sizes of
unit are susceptible to ligament cracking, with some manufacturers' components being
inexplicably more vulnerable. Larger units also show increased incidences of cracking, which
is supported by US experience.
Operating Hours and Starts: Figures 4 and 5 show graphs comparing observed cracking data
in Europe with plant operating hours and number of starts respectively. No correlation is
evident between incidences of cracking and operating hours or number of operating cycles.
Cracking has been observed after comparatively few cycles, less than 500, contradicting the
view that ligament cracking is a two-shifting problem.
3.2
Concern about the incidences of ligament cracking led to quantitative assessments being carried out
on ex-service headers in Europe.
Finite element analyses were carried out for typical service start-up and shut-down cycles, and crack
initiation times and propagation rates were determined using high strain fatigue cyclic endurance and
creep ductility exhaustion models.
The analyses predicted considerably longer crack initiation endurances and much slower crack growth
rates than those determined by oxide dating techniques applied to the removed samples. The
conclusion was that additional cycles were present that were not considered in the analysis.
Temperature monitoring to investigate the cause of thermal cycles responsible for ligament cracking
has confirmed that temperature ramp rates associated with normal two shifting operating cycles are
insufficient to generate the plastic strain ranges required to account for observed cracking. However
much more severe local transients were identified under certain operating conditions. Analysis of
monitored data indicated that major contributors to ligament cracking are:
Temperature cycles during hot starts (for example problems with coal flow and mills)
The fact that thermocoupling and continuous monitoring of vulnerable headers confirmed the
occurrence of previously undetected transients, demonstrates the importance of on-line monitoring for
accurate life prediction for headers.
4. Case Study
The PLUS System addressed here was commissioned to monitor the creep-fatigue damage
accumulation, predict crack initiation in the platen, final and reheater outlet stub headers and
manifolds, as well as to monitor creep-corrosion damage in the associated boiler tubing in 8 boiler
units of 350 and 650 MW. The significance of ligament cracking on plant integrity and the benefits of
life monitoring are demonstrated by consideration of the background to the problem.
4.1
Inlet steam temperatures are normally obtained from thermocouples on inlet tubes. The accuracy of
the estimate and definition of critical locations are improved as the number of tubes being monitored
increases. Additional thermocouples are also required to provide back-up in the event of thermocouple
failure.
Thermocouples were installed on the two lead units of the case study plant. The selection of the
thermocouple locations was based on the identification of critical components using previous
analysis and operational experience.
4.2
Stress Calculation
For the purposes of PLUS it is assumed that the ligament stresses can be uniquely determined from a
number of temperature differences and rates of temperature change within the header. The process
undertaken to generate the stress functions used by PLUS is illustrated in Fig. 6.
1. Surveillance data from the thermocouples and pressure transducers are processed and
   analysed to identify the thermal boundary conditions and the typical and exceptional
   transients experienced by the components. Some of the thermocouple surveillance data
   are also used to validate the FE heat transfer analyses output.
2. A wide range of thermal transients using realistic heat transfer coefficients, steam ramp
   rates and temperature changes are analysed for each geometry. The results of these
   analyses are compared with plant data, and the thermal boundary and heat transfer
   conditions are refined until an optimum correlation is achieved.
3. Each geometry under consideration is modelled using 3-D finite element analysis
   techniques. For the case study, analysis of 32 geometries was required.
4. Finite element stress analysis is then performed for the thermal transients. The outputs
   of these analyses provide inputs to, and validation of, the stress functions generated in
   step 5. The hoop stress at a crotch corner location under a hot start condition is shown
   in Fig. 8.
5. Multiple parameter linear regression analysis is then carried out to relate the ligament
   stresses from step 4 to the temperature differences and rates of temperature change.
   This analysis produces relationships (stress functions) of the type

       σ = F(ΔT_i, Ṫ_i)

   where ΔT_i is a temperature difference and Ṫ_i a rate of temperature change.
   An example of the stress functions so developed is given in Fig. 9.
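The regression of step 5 can be sketched with an ordinary least-squares fit. The synthetic data below stand in for the FE results, and the coefficients and ranges are invented for illustration; the real stress functions come from the plant-specific FE analyses.

```python
# Sketch of a multiple-parameter linear regression relating ligament
# stress to a temperature difference and a rate of temperature change.
# The "FE results" here are synthetic: sigma = 2.1*dT + 14.0*Tdot + noise.
import numpy as np

rng = np.random.default_rng(0)
n = 200
dT   = rng.uniform(-40, 40, n)     # through-wall temperature difference [K]
Tdot = rng.uniform(-5, 5, n)       # rate of temperature change [K/min]
sigma = 2.1 * dT + 14.0 * Tdot + rng.normal(0, 1.0, n)   # pretend FE stresses [MPa]

X = np.column_stack([dT, Tdot, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, sigma, rcond=None)

def stress(dT_now, Tdot_now):
    """Stress function sigma = F(dT, dT/dt) for on-line evaluation."""
    return coef[0] * dT_now + coef[1] * Tdot_now + coef[2]
```

Once fitted, such a function is cheap enough to evaluate at every monitoring time step, which is what makes the real-time stress displays possible.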
These functions enable direct calculation of stress and therefore strain from measured
temperature values obtained during PLUS operation, providing input to the creep-fatigue
calculation.
6. The stress functions are implemented in the PLUS system to provide real-time variations
   of ligament stresses as shown in Fig. 10. These real-time displays provide the operator
   with an instantaneous output of the ligament stress, which can be compared with a stress
   limit set to prevent crack initiation in a specified number of starts.
4.3
The methodology adopted for PLUS assumes that any arbitrary cycle can be separated into an
elastic-plastic cyclic component and stress relaxation dwell component. The elastic-plastic cycling
causes low cycle fatigue damage and stress relaxation causes creep damage by a process of ductility
exhaustion. Both of these components contribute to the overall damage which is calculated using the
'linear damage summation rule'.
In carrying out the analysis each transient is resolved into discrete cycles. The hysteresis loop for
each cycle is constructed from the strain-time data generated by the stress functions by means of the
offset zero form of the Ramberg-Osgood equation

    Δε = Δσ/E + (Δσ/A)^(1/β)

where A and β are temperature and strain rate dependent materials parameters.
The fatigue damage component, Df, is obtained from the relationship

    Df = Σ (1/Nf)

where Nf is the fatigue endurance as a function of the total strain range of the cycle,
described by means of a suitable parametric equation.
The creep damage component, Dc, is calculated by means of the ductility exhaustion approach
using

    Dc = ∫₀^td (ε̇(t)/εf) dt

where td is the dwell time, εf is the creep ductility and ε̇(t) is the instantaneous strain rate
obtained from the stress relaxation relationship

    ε̇(t) = -(1/E) dσ/dt,    σ(t) = σ₀ [1 + (n-1) B E σ₀^(n-1) t]^(-1/(n-1))

with B and n the constants of the creep law ε̇c = B σ^n.
The total creep-fatigue damage for each cycle, Dt, is calculated using the linear damage rule

    Dt = Df + Dc
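A hedged numerical sketch of this summation is given below; the strain-life constants and the creep ductility are invented for illustration, whereas the real system uses temperature and strain-rate dependent material data.

```python
# Illustrative linear damage summation: fatigue damage from resolved
# cycles via an assumed parametric strain-life curve, creep damage from
# dwell relaxation strain via ductility exhaustion. Constants invented.

def fatigue_endurance(strain_range, c=0.1, b=-0.5):
    # assumed strain-life curve: strain_range = c * Nf**b
    return (strain_range / c) ** (1.0 / b)

def fatigue_damage(cycles):
    # Df = sum over cycles of 1/Nf(total strain range)
    return sum(1.0 / fatigue_endurance(de) for de in cycles)

def creep_damage(dwell_strains, ductility=0.05):
    # ductility exhaustion: Dc = sum of relaxation strain / creep ductility
    return sum(e / ductility for e in dwell_strains)

cycles = [0.004, 0.003, 0.004]    # total strain ranges per resolved cycle
dwells = [2e-4, 1.5e-4, 2e-4]     # creep strain accumulated in each dwell
Dt = fatigue_damage(cycles) + creep_damage(dwells)
```

With these illustrative inputs the creep (dwell) contribution dominates the fatigue one, which mirrors why the dwell periods must be resolved separately from the cycles.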
4.4
Built into the PLUS system are algorithms for the evaluation of the above two types of damage,
shown in Fig. 11. The modules read in the time and temperature data collected by the
monitoring system and calculate the associated stress and strain using the stress functions. After
resolving the data into cycles and dwells, the two algorithms establish the LCF damage and creep
damage components for each identified cycle and dwell period respectively.
The total damage for each location is established and stored in the PLUS database. Cyclic life usage
calculated from monitored data is shown in Fig. 12.
The creep-fatigue life analyses are performed periodically by PLUS using past life estimates and newly available plant data. PLUS is set up to update the life estimate automatically on a monthly basis, and the life analysis may also be performed at any time on user instruction. The operator may initiate the life analysis in two ways. A commit may be initiated, in which new off-line data are supplied, the life estimates are updated based on these and the latest available on-line data, and the results are stored in the database. Alternatively, life calculations may be performed at any time using the latest available data; the results are displayed to the operator but not stored.
4.5
For periods of steady operation the accumulation of steady-state creep damage is determined in PLUS using the time-fraction rule

    D_c = D_c,init + Σ t/t_r

where t_r is the allowable rupture time at the current operating temperature and stress, t is the time for which the operating temperature and stress remain constant, D_c,init is the initial creep damage, and σ_ref is the reference stress, calculated for each critical location on the monitored components using inverse design procedures.
4.6
The relationship between the calculational assessment route resident in the monitoring system and off-line inspection results should be reciprocal. Where inspection results are available for the monitored locations and times, it is possible to use inspection data to refine the system analyses.
The case study PLUS system is set up to enable quantitative microstructural damage assessments
made during an inspection to refine creep-fatigue damage assessment by PLUS. In setting up the
monitoring system assumptions were made regarding the position of a material in its property scatter
band and the evaluation of the reference stress, i.e. system loads acting to increase or decrease the
stress.
Since the creep life prediction algorithm can predict damage or strain evolution as well as final failure,
the predictions can be compared with observed damage or strain measurements.
Any difference between the life fraction consumed determined by the reference stress technique used by PLUS and that determined by off-line quantitative damage assessment will be due to materials properties and/or system stress uncertainty. Since both effects enter the calculation through the stress/strength ratio, it is not necessary to know whether the difference is due to stress or materials effects. A simple stress correction factor calculated from the observed differences can be applied to future PLUS calculations as a modification to the stress/strength ratio, thus scaling calculated lives to the observed damage accumulation rate.
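The scaling idea can be sketched as follows; this is a simple proportional form chosen for illustration, since the paper does not detail how the PLUS correction acts through the stress/strength ratio:

```python
def life_scaling_factor(damage_pred, damage_obs):
    # Ratio of observed to predicted damage accumulated over the same
    # operating period; > 1 means the plant is damaging faster than
    # the system prediction.
    return damage_obs / damage_pred

def corrected_remaining_life_h(remaining_life_pred_h, factor):
    # Scale the calculated remaining life to the observed rate.
    return remaining_life_pred_h / factor

f = life_scaling_factor(0.010, 0.015)        # inspection found 1.5x the damage
life = corrected_remaining_life_h(120_000.0, f)
```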
4.7
Where cracks are detected in a component or predicted by PLUS, crack propagation may be
monitored by PLUS.
Crack monitoring utilises creep-fatigue crack propagation algorithms based on linear summation of cyclic fatigue and creep damage. The cyclic component is obtained from fatigue crack growth laws utilising stress intensity factor (K) solutions for the defect geometry:

    (da/dN)_f = A·(ΔK)^m

where A and m are materials properties, and the creep component is obtained from creep crack growth laws utilising C*:

    da/dt = B·(C*)^q

integrated over the dwell period to give the creep contribution per cycle,

    (da/dN)_c = ∫ (da/dt) dt
In cases where a leak before break situation is predicted, monitoring for steam leaks using acoustic
emission (AE) provides a practical safe alternative to the above algorithm based approach. In this
case the acoustic sensors are interfaced with PLUS allowing the system to raise alarms in the event of
a leak. Leaks may be detected by AE significantly sooner than the effects can be observed by normal
plant operating systems.
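A sketch of the linear-summation crack-growth update per cycle; the constants A, m, B and q are placeholders (not material data from the paper), and consistent units are assumed throughout (e.g. crack length in m, ΔK in MPa·√m, dwell in hours):

```python
def crack_growth(a0, cycles, A=1.0e-11, m=3.0, B=5.0e-3, q=0.8):
    # Per cycle: (da/dN)_f = A * dK**m          (fatigue, Paris-type law)
    #            (da/dN)_c = B * Cstar**q * dwell  (creep crack growth law,
    #            integrated over the dwell at constant C*)
    a = a0
    for dK, Cstar, dwell in cycles:
        a += A * dK ** m + B * Cstar ** q * dwell
    return a

# 1000 identical start cycles: dK = 20, C* = 1e-6, 10 h dwell each
a_final = crack_growth(1.0e-3, [(20.0, 1.0e-6, 10.0)] * 1000)
```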
Conclusion
Temperature surveillance and life monitors have been demonstrated to be a very effective means of
monitoring high temperature components subject to thermal cycling. Wide experience of potentially
catastrophic ligament cracking in headers has shown the damage to be attributable to normally
undetected thermal transients.
ERA has developed a PLUS system which provides real-time temperature monitoring and processing,
and periodic damage and life assessment. The system also enables off-line inspection results to be
used to refine the analysis.
The customisation process and life monitoring functions of PLUS have been illustrated by a case study
application. The PLUS system addressed in this paper is currently being delivered to the clients.
Acknowledgements
References
R. Viswanathan, "Life Assessment of High Temperature Components - Current Concerns and Research in the US", ERA Report 93-0690, Conf. Proc. 'Life Assessment of Industrial Components and Structures', Cambridge, Sept. 1993.
EPRI Report, 'An Integrated Approach to Life Assessment of Boiler Pressure Parts', EPRI Project RP 2253-10.
Table 1
Summary of Inspections for Ligament Cracking in a European Utility
(Columns: header type, number inspected, number cracked, % cracked, maximum crack depth as % of wall thickness. Rows cover primary, interstage and reheater headers; for primary headers 33 of 80 inspected (41%) were cracked, and over all headers 35 of 130 inspected (27%) were cracked, with maximum depths up to 100% of the wall thickness.)
Table 2
Summary of Ligament Cracking Experience in the US
(Columns: header type, number inspected, number cracked, % cracked. For secondary superheater outlet headers, 44 of 157 inspected (28%) were cracked — 26 of 73 (36%) in 1¼Cr and 17 of 76 (22%) in 2¼Cr material; further rows cover headers operating above 1050 °F, reheater outlet headers (118 inspected) and all others (101 inspected).)
Fig. 1: Scope of PLUS (block diagram: data logger/server and shared memory linking the monitored locations to function modules — ODM, RLA, TLA, CDU, TSU, PDU, DDD, SSC, GDD — and display modules — HSD, TLD, OSD, SFW).
Fig. 2:
Fig. 3:
Fig. 5: Number of starts (thousands), secondary and final.
(Flow diagram: plant surveillance data → filtering & plotting of data → generation of FE models → thermal boundary conditions and data for FE comparison → thermal transient analyses → optimum correlation? yes → stress analysis of thermal transients.)
(Plot: temperature, 300-550, against time, 4000-15,000 s, comparing the FE manifold prediction, the manifold thermocouple reading and the estimated steam temperature.)
(FE stress contour plot of the header corner; legend values from 0.282E+08 to 0.177E+09.)
(Thermocouple layout for Element 9, Units B1, B3 & B4, Tubes 3-34: thermocouples T3, T4, T5, T66 and T67 on the tubes and T7-T10 along the header, at spacings of 270 mm and 275 mm; header and tube outer diameter, bore and thickness dimensions tabulated for sections AA and BB.)

TUBE INTERSECTION:
    T_steam = (T3 + T4 + T66 + T67)/4
    σ = C1·T10 − C2·T_steam + C3 − C4
    σ = C1·T9 − C2·T_steam + C3 − C4

OUTLET STUB INTERSECTION:
    T_steam = (T3 + T4 + T66 + T67)/4
    σ = C1·T9 − C2·T_steam + C3 − C4
    σ = C1·T8 − C2·T_steam + C3 − C4

Fig. 9: Example stress functions generated for a critical header ligament
(The numbers have been changed for confidentiality purposes)
Fig. 11: Creep-fatigue damage calculation (flow diagram): read in temperature, time and elastic stresses; fatigue branch — pair compressive and tensile peaks in descending order of size, determine elastic-plastic cyclic loops using the Neuber method, calculate the elastic-plastic strain range, and calculate fatigue damage for each set of peaks; creep branch — determine the relaxed stress during the peaks taking dwells into account, and calculate creep damage using a ductility exhaustion method; finally, sum the damage.
CHAPTER 4
SPECIFIC APPLICATIONS
ABSTRACT
The release of toxic gases into the atmosphere, and the acid rain it causes, has been the subject of discussion all over the world, resulting in international research programmes for the development of efficient flue gas removal techniques, mainly for SO2 and NOx, and in ever stricter emission limits. Among the flue gas treatment methods, the process of electron beam irradiation has shown itself to be promising. Under irradiation, these gases are simultaneously removed from the combustion gases. In the presence of ammonia, the byproduct of the process is ammonium sulfate and ammonium nitrate, which after filtration can be used as a fertilizer. The process has been investigated in Japan, Germany, the USA and Poland. Data concerning the present state of the process are presented, along with the design and implementation of a laboratory pilot plant for the electron beam flue gas removal process located at CNEN/SP.
1. INTRODUCTION
Sulfur oxides are created and exhausted into the air when fossil fuels that contain sulfur (coal, oil and natural gas) are burned. Nitrogen oxides are formed when nitrogen and oxygen react during the combustion of fossil fuels at high temperature. These oxides form acids in the atmosphere, which fall to earth as acid rain or snow. As a result, lakes and forests are being damaged in parts of Central Europe, China, the Northeastern United States and Eastern Canada. Some acid can be transported far from industrialized zones and cross international borders to damage the environment in non-urban areas. Trees, crops and plants may be hurt. Acid rain also affects buildings and monuments, as can be seen in many European cities. These are the reasons why stricter control of SO2 and NOx emissions has become internationally recognized as a global problem, and many countries have set limits for the discharge of pollutants, SO2 and NOx among them (5).
In the past years, the use of fossil fuels with high sulfur content in Brazilian industrial installations has grown, and estimates indicate that this growth will continue. Due to the environmental regulations enacted, the development of a technique able to remove these toxic gases has become essential.
The air pollution in Europe is particularly severe, and there is consequently a strong need for air pollution control technology to improve the situation. Poland, which produces energy mainly from pit and brown coal, is a big producer of these pollutants. Numbers regarding the NOx emission should be multiplied by a factor of 2.9, since nitrogen dioxide forms a much stronger acid and is thus more harmful to the environment (2).
1.1. CONVENTIONAL METHODS FOR SO2 AND NOx REMOVAL
Several FGD (Flue Gas Desulphurization) methods have been developed up to now. The methods can be divided into several categories: dry, wet and with sulfur recovery system.

Dry Scrubbers
LSD - Lime Spray Dryer
CFB - Lurgi Circulating Fluid Bed
FSI - Furnace Sorbent Injection
EI - Economizer Injection
DSI - Duct Sorbent Injection
DSD - Duct Spray Drying
ADV - Moist Dust Injection
LSFO - Limestone with Forced Oxidation

Wet Scrubbers
LSFO - Limestone with Forced Oxidation
LSWB - Limestone with Wallboard Gypsum
LSINH - Limestone with Inhibited Oxidation
LSDBA - Limestone with Dibasic Acids
PURE - Pure Air/Mitsubishi
MGL - Magnesium Enhanced Lime
LDA - Lime Dual Alkali
LSDA - Limestone Dual Alkali

Sulfur Recovery Systems
WLWN - Wellman Lord
ISPRA - ISPRA Bromine
MgOx - Magnesium Oxide
LSFO - Limestone with Forced Oxidation
Dry and wet methods can also be applied for the reduction of NOx pollutants. SCR (selective catalytic reduction), precipitation on solids, catalytic decomposition on a solid electrolyte and reduction to N2 by NH3 are examples of dry methods. Absorption in liquid with reduction to NH4+, and absorption in liquid with oxidation to NO2-, NO3-, are used in the wet methods.

The stricter control of NOx and SO2 pollutants being enforced in many countries drives the development of low cost NOx/SOx control technologies as alternatives to the existing ones: SCR (Selective Catalytic Reduction) for NOx and FGD (Flue Gas Desulphurization) for SO2 control. Nearly 70 processes have been evaluated under an EPRI project to select the most promising technology (9).
condition based on:
commercial use);
be developed);
where the numbers in parentheses represent the G values of the species; the G value is the number of molecules produced per 100 eV of energy absorbed in the system. This is the first stage of the process.
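As a rough worked example of what a G value implies for radical production, a short calculation can be made; the G value of 3 used here is illustrative, not a figure from the paper:

```python
EV_PER_J = 1.0 / 1.602176634e-19   # electron-volts per joule

def radical_yield_per_kg(g_value, dose_Gy):
    # G = molecules produced per 100 eV absorbed; 1 Gy = 1 J/kg,
    # so the yield is G/100 times the absorbed energy in eV per kg.
    return g_value / 100.0 * dose_Gy * EV_PER_J

n = radical_yield_per_kg(3.0, 5000.0)  # e.g. G = 3 at a 5 kGy dose
mol = n / 6.02214076e23                # about 1.6 mmol of radicals per kg of gas
```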
During the second stage, radicals and atoms containing oxygen react with SO2 and NOx to form, in the presence of water, sulphuric and nitric acids. There is also an ion-molecule reaction mechanism for the decay of the primary species. Low concentration components have to compete with the primary radical decay processes. More than 760 reactions were listed in the AGATE code to describe the processes involved. Some reactions from the secondary stage in which SO2 and NOx are involved are listed below (4):
SO2 + HO2 → SO3 + OH
SO2 + OH → HSO3
SO2 + O → SO3
SO3 + H2O → H2SO4
NO + OH → HNO2
NO2 + O3 → NO3 + O2
NO2 + HO2 → HNO2 + O2
NO2 + OH → HNO3
According to JAERI and KFK tests, more than 20% of the NO is converted into free N2 released in the EB process in the presence of ammonia. The last stage is product formation. Finally, the gas conversion process is completed by the reaction of the sulphuric and nitric acids with a stoichiometric amount of ammonia in the presence of water. These acids are converted into ammonium sulphate and ammonium nitrate, which are collected by a filtering system (4).
The efficiency of the EB process has been determined in many experimental facilities in order to optimize process conditions. Recent data show that 95% SO2 removal efficiency can be obtained at a 5 kGy dose, provided the water content and the thermal reaction conditions are properly optimized. Multistage irradiation can significantly improve NOx removal: a 7 kGy dose with two-stage irradiation, or a 6 kGy dose with three-stage irradiation, is required for 80% removal efficiency (5).
2.2. ELECTRON BEAM FACILITY FOR FLUE GAS TREATMENT
The first experimental facility for the EB process applied to flue gas treatment was built by the Ebara Corp. in Japan. Batch tests were carried out in the 1970-71 period. The experiments proved that SO2 and NOx can be removed from irradiated flue gas as a result of radiation-induced chemical reactions. Subsequent development of the process has been continued by Ebara, JAERI, the University of Tokyo and NKK in Japan; Ebara, Research Cottrell, the Department of Energy and the Electric Power Research Institute in the USA; the University of Karlsruhe, KFK and Badenwerk in Germany; and the Institute of Nuclear Chemistry and Technology and the Warsaw Power Station in Poland (5).
The EB process is now being applied to other kinds of gas pollutants. Results from experimental work already performed have proved the capability of the process for municipal waste incinerator gas, traffic tunnel ventilation gas and various VOC pollutants in the gas phase (3,7).
In order to demonstrate the capability of the EB process, four pilot plant demonstration facilities are now in use in Poland and Japan. They are based on the Ebara process, in which ammonia is injected before the process vessel wherein the flue gas is irradiated (5).

Table 1 shows the parameters of the pilot plants for flue gas treatment which have been installed since 1991 and are now being used to demonstrate the capability of the EB technology for commercial use (5).
In 1991, a 3-year, 14.3 million USD project was initiated in Japan by the Ebara Corp. together with the Japan Atomic Energy Research Institute (JAERI, Takasaki) and the Chubu Electric Power Company (Nagoya). The main objectives of the research carried out at this pilot plant are as follows:
TABLE 1. Parameters of the pilot plants for flue gas treatment.

INSTITUTION               | YEAR OF INSTAL. | FLOW RATE (Nm3/h) | SO2/NOx (ppm)         | TEMP (°C) | ACCELERATOR
INCT/KAWENCZYN, POLAND    | 1991            | 20,000            | 200/600               | 60 to 80  | 500-700 keV, 2 x 50 kW
EBARA/JAERI, JAPAN        | 1992            | 12,000            | 800 to 1000 / 150-300 | 65        | 800 keV, 3 x 36 kW
EBARA/TOKYO, JAPAN        | 1992            | 50,000            | - / 0-5               | 20        | 500 keV, 2 x 12.5 kW
NKK/JAERI, MATSUDO, JAPAN | 1992            | 1000              | 100/100, HCl = 1000   | 150       | 400 keV, 15 kW
To confirm the capability of the EB method for gas with low NOx content, the Tokyo plant was built by the Ebara Corp. and the Tokyo Metropolitan Government to treat ventilation exhaust gases from a highway at the Tokyo Bay Tunnel. The facility was finished in June 1992, and its main parameters are shown in Table 1. 50,000 Nm3/h of gas from the ventilation exhauster is introduced into the irradiation vessel for EB treatment in the presence of ammonia. As a result, NOx is converted into powdery ammonium nitrate products. Activated carbon is used to remove the ozone formed by the irradiation. The 80% target removal efficiency is being obtained at an inlet NOx level of 3 ppm.
To evaluate the EB process applied to the flue gas from municipal waste incinerators, a pilot plant was built by NKK, JAERI and the Matsudo City Government Clean Center. The plant was completed in June 1992, and its main parameters are shown in Table 1. The targets for the removal efficiencies are as follows:

NOx: 100 ppm → < 50 ppm
SO2: 100 ppm → < 10 ppm
HCl: 1000 ppm
The irradiation is carried out while a slurry of calcium hydroxide is sprayed, at a temperature higher than 150°C. A bag filter is used to collect the powdery products (a mixture of calcium nitrate, sulfate and chloride) formed by the irradiation. During the process, HCl and SO2 are removed by spraying the slurry of Ca(OH)2, while NOx is effectively removed by the EB irradiation (6).
The Polish pilot plant, with a 20,000 Nm3/h capacity, has been built at EPS Kawenczyn in Warsaw. The installation was constructed on a bypass of the main flue gas stream (total net flow 260,000 Nm3/h) from the WP-120 boiler (nominal heat output 120 Gcal/h, efficiency 84%, coal consumption 26-32 t/h). The black coal used contains 1.2% sulphur and 18% ash, with a calorific value of 4700 kcal/kg.

The Polish pilot plant is the first installation in which two-stage irradiation by electron beam has been applied, resulting in a significant decrease in energy consumption. The other novelties of this construction concern the process vessel, where the irradiation zones are located along the flue gas flow, and a double window construction with perpendicular streams of air cooling the output windows of the accelerators and the inlet windows of the process vessel.
The main objectives of the research carried out at the pilot plant are (2):
- Testing of all parts of the installation under industrial conditions;
- Optimization of the process parameters, leading to reduced energy consumption with high SO2 and NOx removal efficiency;
- Selection and testing of filter devices and the filtration process;
- Development of the monitoring and control systems for an industrial flue gas cleaning plant;
- Preparation of the design for an industrial scale facility.
2.3. PRESENT STATUS OF ELECTRON BEAM PROCESS
The EB process applied to flue gas treatment is suitable for full scale commercial application, as determined by basic experiments and the operation of pilot plant facilities. It is a dry process with a usable byproduct which can offset the operating and investment costs. The EB technology is recognized as flexible and adaptable, with excellent turndown ratios. The process can be easily controlled for different removal efficiencies and adjusted for the utilization of different fuels. The major conclusions regarding the EB process for flue gas treatment are as follows:
- More than 95% of SO2 and 85% of NOx can be simultaneously removed from the flue gas under optimal operating conditions;
- Ammonia should be injected into the process in a near-stoichiometric amount; upstream injection was found to be more efficient;
- SO2 removal efficiency depends on the injection temperature, the filter condition and the EB dose;
- The quantity of SO2 removed by EB is relatively independent of the inlet SO2 concentration;
- NOx removal occurs almost entirely under EB application and depends strongly on the dose; gas temperature and ammonia stoichiometry are second-order effects;
- NOx removal efficiency increases as the inlet SO2 concentration increases, as a result of the formation of nitrosulphuric compounds;
- 5 kGy is required for 95% SO2 removal efficiency, and 7 kGy for 80% NOx removal efficiency in a two-stage irradiation facility under optimal conditions;
- Good reliability in long-term operation was demonstrated in pilot plant facilities;
- The byproduct collected during the process consists of ammonium sulfate and ammonium nitrate, which can be effectively used as a fertilizer; the small amount of contaminants does not affect the quality of the product;
- No waste water is produced in the process;
- The relatively low capital investment and operating cost of an EB process facility make this method equivalent or preferable to FGD/SCR methods;
- Low space requirements are a significant advantage in retrofit installations.
To complete the present data on the EB process, intensive experiments are being carried out in Japan, Poland and Germany. A number of the most interesting subjects are listed below:
- plant level;
- radiation;
- Wet and dry ESP, baghouse and gravel bed filter experimental studies to optimize the byproduct collecting system;
- Optimization of the spray cooler construction to obtain a dry bottom and reduce power consumption;
- Optimization of the systems preventing or removing byproduct duct clogging;
- Duct configuration (rectangular, cylindrical) and gas velocity in the duct and process vessel;
- Multistage irradiation (two and three zones);
- Ammonia slip and ammonia injection (location, quantity);
- Byproduct handling studies (granulation, liquid, storage, fertilizer tests).
The electron beam process for flue gas treatment could be used beneficially in the future. The experimental studies described above improve the technology and promote it for future applications (2).
3. EQUIPMENT SPECIFICATION
BOILER - Oil or coal fired, producing thermal or electrical energy.
ESP - Electrostatic precipitator, to reduce the fly ash content downstream of the boiler.
HEAT EXCHANGER - To reduce the inlet or increase the outlet gas temperature by an additional stream of air or water.
SPRAY COOLER - Installed vertically downstream of the boiler and ESP; used to increase the water content of the flue gas and decrease its temperature by complete evaporation of the injected water.
AMMONIA INJECTION - To keep a stoichiometric quantity of NH3 in the flue gas stream.
PROCESS VESSEL - Horizontally mounted, with multistage irradiation capability.
ACCELERATOR - To initiate the radiation chemical process of flue gas treatment.
ANALYTICAL AND CONTROL SYSTEM - To keep automatic control over the process.
COLLECTOR - Baghouse, ESP or gravel bed filter, to collect the byproduct.
BYPRODUCT HANDLING SYSTEM - To prepare powder, granules or a wet form of the byproduct.
INDUCED DRAFT FAN - To overcome the pressure drop in the ducts and byproduct collector.
3.1. GENERAL ARRANGEMENT OF THE TECHNOLOGICAL PROCESS
Flue gas generated by coal-fired boilers enters the EB process after the ESP, where the ash content is reduced in order to improve the quality of the fertilizer byproduct. No such filter is foreseen after an oil-fired boiler. The initial concentration of SO2 depends on the sulphur content of the fuel; the NOx concentration depends on the combustion temperature and differs between burner and boiler constructions.

A heat exchanger is usually used to reduce the gas temperature to the 150-250°C level in the initial cooling stage. The flue gas then enters the spray cooler, where the temperature is reduced to 65-80°C by atomized water injection. A dry bottom principle is usually applied in operating the spray cooler, to eliminate a residual wastewater stream: the water is totally evaporated by heat exchange with the hot flue gas, since the dew point of the gas is approximately 50°C. The water content of the flue gas should be increased to 8-12% in this stage.

Ammonia in stoichiometric quantity is injected before the flue gas enters the process vessel, where it is irradiated by the electron beam to promote the reaction of the ammonia and flue gas. The beam interacts with nitrogen, oxygen, water and other substances in the flue gas to produce active free radicals such as OH, O and HO2. As a result, SO2 and NOx are converted to sulphuric and nitric acids, which finally form a byproduct consisting of ammonium sulfate and ammonium nitrate (6).

The ammonium sulfate and ammonium nitrate are collected by an electrostatic precipitator or bag filters, and the cleaned flue gas is released through the fan into the stack.
3.2. MAJOR EQUIPMENT
3.2.1. ACCELERATORS
The present estimate of the required dose level for efficient NOx removal (80%) shows that the radiation dose should be in the range of 10 kGy for low sulphur content coals. Multistage irradiation can reduce this figure to 7 kGy. It should be remembered that 95% SO2 removal can be obtained with a 5 kGy dose, and that a significant improvement in NOx removal can be achieved when high sulphur coal is used. If it is assumed that the gas absorbs 85% of the total beam energy, then a 1 MW accelerator facility will be sufficient for a 100 MW generator at the dose range described above.
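This sizing can be checked with a short calculation; the flue gas density of about 1.3 kg/Nm3 is an assumption, and the 300,000 Nm3/h flow rate is the figure quoted elsewhere in the paper for a 100 MW plant:

```python
def beam_power_W(flow_Nm3_h, dose_kGy, gas_density_kg_Nm3=1.3,
                 absorbed_fraction=0.85):
    # Required beam power = mass flow * dose / absorbed fraction.
    # 1 kGy corresponds to 1 kJ of absorbed energy per kg of gas.
    mass_flow_kg_s = flow_Nm3_h * gas_density_kg_Nm3 / 3600.0
    return mass_flow_kg_s * dose_kGy * 1.0e3 / absorbed_fraction

p = beam_power_W(300_000.0, 7.0)  # 7 kGy multistage dose
# p comes out just under 1 MW, consistent with the figure above
```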
The required beam power level is significantly higher than in the accelerators utilized for industrial beam processing, but there are technical prospects for building accelerators with 200-500 kW unit power, which would sharply reduce the number of accelerators in industrial facilities and their cost.
According to accelerator producers, the cost of high power 800 keV machines is at present in the range of 5 US$/W. New developments under way in the USA (induction linacs) give some prospect of reducing this cost level by a factor of 2.
Many factors should be considered when specifying the location of the accelerator/scanner relative to the process vessel. The most important are dose uniformity, cost and ease of access for maintenance. The best position for the scanner was found to be at the top of the process vessel, with the irradiation zones along the gas stream flow. Multistage irradiation is recommended to increase the process efficiency (10).

Locating the process vessel horizontally and at underground level can reduce shielding costs and allows easy access to, and replacement of, components of the scanner/process vessel systems.

Table 2 shows the basic electron beam parameters which have been applied in laboratory and pilot plant facilities for flue gas treatment.
Table 3 shows producers and accelerators which are suitable for flue gas treatment in the 10,000-20,000 Nm3/h capacity range (10).
3.2.2. FILTERS, BYPRODUCT HANDLING
The process of particle formation and filtration has been intensively investigated in recent years. The mass median aerodynamic diameter of the product aerosol is around 1 μm, depending on the dose and the flue gas parameters (4).
A baghouse was initially selected as the byproduct collector. A precoating system was used to protect the bag surfaces from direct contact with the hygroscopic byproduct. Diatomaceous earth can be used as a neutral precoating material to avoid degrading the byproduct.
TABLE 2. The basic parameters of the electron accelerators applied in facilities for flue gas treatment.
(Columns: type of facility, energy in MeV, beam power in kW, type of accelerator, institution. Facility classes: laboratory, < 1000 Nm3/h; pilot/demonstration, 1000-20,000 Nm3/h; industrial plant, 300,000 Nm3/h. Energies range from 0.22 to 12 MeV, and beam powers from about 1.2 kW in laboratory machines to multi-unit installations such as 2 x 12.5, 2 x 50, 3 x 36 and 4 x 400 kW. Accelerator types include linear, Cockcroft-Walton, Dynamitron, transformer, resonance, Electrocurtain and induction linear machines, at Ebara, JAERI, Tokyo University and NKK in Japan; Karlsruhe, KFK and Badenwerk in Germany; INCT in Poland; and Research Cottrell and Ebara in the USA.)
Several methods can be applied to remove byproduct deposits from the bag filter and reduce baghouse pressure drops:
- Pulse jet cleaning
- Reverse flow cleaning
- Mechanical shaking
Acrylic and Teflon-coated bags are the best in this application. It was found that other methods can also be used effectively in the collection process: wet and dry ESPs and gravel bed filters are being used to optimize the byproduct collecting system.
An ESP and baghouse can be installed in series to increase the efficiency of byproduct collection, but at a significantly higher installation cost.
The usable byproduct is one of the major features of the EB process for flue gas treatment. The concentration of ammonium sulfate and ammonium nitrate depends on the fuel composition, but the byproduct quality has been estimated at 75% of that of the regular product. Sales of this byproduct can be used to offset the cost of the ammonia applied in the process, and can significantly decrease operating costs.
TABLE 3. The basic parameters of the electron accelerators offered by different producers for flue gas treatment in the 10,000-20,000 Nm3/h capacity range.
(Columns: type of accelerator, producer, electron energy in keV, beam current in mA, output window in mm. Machines listed include the Dynamitron (Radiation Dynamics, USA/Japan), the ESI 0.3/90 Electrocurtain (Energy Sciences Corp., USA/Japan), the ELW-3A transformer (Institute of Nuclear Physics, Russia/Japan, 500/700 keV), the UW-075-2-2-W transformer (NIIEFA, Russia, 750 keV), a Nissin High Voltage machine (Japan), the EPS-500 cascade (Polimer Physics, Germany, 500 keV) and an ESH transformer.)
Ammonium nitrate is a basic fertilizer for many plants. Ammonium sulfate is applied directly to certain sulphur-depleting agricultural crops such as corn and cotton. The combination of these two compounds provides a suitable quality material for direct application.

Ammonium sulfate is required by sulphur-deficient lands, generally located in the more arid regions of the world. Existing ammonium sulfate sources do not meet market needs, and this shortfall translates into an excellent opportunity to sell the EB process byproduct at an attractive price. Ammonium sulfate is also usually a component of the final commercial NPK fertilizer product.

An alternative application of the EB process byproduct is under consideration. Enriching organic materials such as sludge or municipal waste compost with a byproduct addition may improve the nitrogen content, may adjust the precipitation of the mixture, and may effectively and economically replace chemical fertilizer.
Depending on the coal sulphur content and the level of nitrogen oxides in the flue gas, the nitrogen content of the byproduct mixture will be between 20 and 30%. For facilities using 2.5% sulphur coal, byproduct production can be estimated at 800 kg/day/MWe. With a nitrogen content of approximately 25%, fly ash is one of the significant components of the byproduct. It is usually removed efficiently by the ESP located before the process vessel. At present fly ash is not recognized as a hazardous waste material, but a high fly ash content in the byproduct decreases the nitrogen content and increases the distribution and application costs per nitrogen unit (4).

Some traces of heavy metals are present in the fly ash. Table 4 shows the analyses of two different byproduct samples collected at the installation operated by Badenwerk, Karlsruhe, Germany. Product A was a mixed byproduct with filtration, while product B is a pure EB process byproduct having the characteristics of a nitrogenous fertilizer, with properties and fertilizing utility similar to those of ammonium sulfate. The amounts of trace metals in the byproduct can usually be controlled at levels equal to or less than those found in commercial fertilizers. Typically no more than 10% by weight of fly ash in the byproduct is accepted; this level of pre-removal can easily be obtained with relatively low-efficiency collectors.
TABLE 4. Composition and chemical properties of tested products.
(Rows: N-NH4, N total, P2O5, CaO, MgO, Na2O, Cl, S total, S-SO4, pH, Fe2O3, SiO2 and ash, followed by heavy metal contents in ppm (Mn, Zn, Cu, Pb, Cd, Cr), each given for product "A" and product "B".)
2-5 US$/W of beam power, depending on the accelerator construction and its producer. Table 5 shows a capital cost estimate as a function of the cost of the accelerator. Up to 25% of the capital cost goes to the accelerators, which is slightly less than the typical cost of the construction work (buildings, ducts) (4).
According to an Ebara estimate to a 100 MW plant burning 2% sulfur coal and SO2
removal rate 92% and the NO x removal rate 60% listed below, performance and economic
parameter can be achieved:
- Power consumption: 2.6 MW/h
- Ammonia requirements: 1500 kg/h
- Inert earth: 100 kg/h
- Fertilizer byproduct: 600 kg/h
- SO2 reduction: 1400 → 112 ppm
- NOx reduction: 400 → 160 ppm
- Flue gas flow rate: 300,000 Nm3/h
- Total capital cost: 19,300,000 US$
- Process cost: 193 US$/kW
- Operating personnel: 3 per 24 h
- Annual maintenance cost: 200,000 US$
- Annual operating cost: 580,000 US$
It was recognized that the byproduct has 75% of the value of a commercial fertilizer,
which meant 51 US$/t in 1990.
TABLE 5. Estimate of the capital cost of an EB facility for flue gas treatment,
depending on the cost of the accelerator.
[Table flattened in the scan; columns: accelerator cost (USD/W of beam power), investment cost (USD/kWe), and multistage-irradiation investment cost (USD/kWe). Recoverable entries include investment costs of 225 and 350 USD/kWe against multistage-irradiation costs of 169 and 262 USD/kWe.]
DEVICE depends mainly on financial conditions or on the possibility of adapting existing
facilities. The highest flow rate can be obtained in a system equipped with a boiler.
ANALYTICAL EQUIPMENT should allow the measurement of a number of process
parameters:
- Inlet and outlet SO2, NOx, O3, H2O and NH3 concentrations;
- Dose rate;
- NH3, SO2 and NOx injection flow rates;
- Flue gas flow rate;
- Temperature at selected points of the facility;
- Aerosol parameters.
ACCELERATOR is used to provide the stream of electrons applied in the
process. Electron beam parameters are not critical in laboratory installations, being dictated
by the experimental requirements. In the laboratory installations that have been used to
investigate the EB process, the electron energy ranges from 0.22 to 12 MeV and the beam
power from 1.2 to 30 kW.
PROCESS VESSEL should withstand long-term irradiation at the appropriate temperature,
according to the nature of the experimental conditions. Stainless steel and other corrosion-resistant
materials are preferable. Thermal insulation and an additional heating system can be
used to stabilize experimental conditions.
HEATING EQUIPMENT is required to provide proper temperature conditions in the
process vessel and the analyzed-gas paths. A process vessel temperature of 60-100°C is used for
various experiments. The temperature of the gas paths is recommended by the analytical
instrument producers and is usually around 150°C.
RETENTION CHAMBER located downstream of the process vessel is sometimes
used to stimulate the product formation.
PREFILTER is sometimes used after the burner to stop particles coming from the
combustion process.
HEAT EXCHANGER is sometimes used before the process vessel to control the
temperature of the EB process.
SPRAY COOLER is used for water injection from an air-assisted manifold of
spray nozzles. The quantity of injected water is controlled so as to lower the temperature of
the flue gas by evaporation and increase its relative humidity.
AMMONIA INJECTION is supplied from the pressure tank after conversion from
liquid to gas phase. The amount of injected ammonia should be carefully controlled according
to experimental requirements. The injection point is usually located before the process vessel.
COLLECTOR of the product is used to collect the final product. Bag filters
and/or ESPs may be applied.
FAN located before the stack is necessary to maintain the proper flow rate of the gas
through the process vessel and the collector of the product.
STACK and duct line are used to extract the flue gas out of the building. Corrosion
effects and deposition of the byproduct may occur when filter collector units are not
applied.
A laboratory pilot plant has been built at IPEN-CNEN/SP, using an electron beam
accelerator from Radiation Dynamics Inc., having the following parameters (8):
- Electron energy
- Beam current
- Scan length
- Scan frequency
[parameter values lost in the scan]
The irradiation device allows a four-turn irradiation and was already used for
dosimetric studies (1). The gas flow rate will be 25 l/min and a synthetic mixture of SO2 and
NOx will be used in preliminary studies. The carrier gas will be normal cooking gas, burned
in a proper burner. NH3 will also be injected and the fertilizer will be collected in a bag
filter. Several points will allow the measurement and control of gas flow rate, temperature and
humidity, and also the analysis of the gases to calculate the efficiency of their removal.
4. REFERENCES
(1) CAMPOS, C.A.; PEREZ, H.E.B.; VIEIRA, J.M.; POLI, D.C.R.; SOMESSARI, S.L.;
ALBANO, G.D.C. Desenvolvimento de um sistema calorimétrico para dosimetria de
gases, em fluxo contínuo, irradiados com feixe de elétrons. Anais do V Congresso Geral de
Energia Nuclear, 2, 659-661, Rio de Janeiro, Brazil, 1994.
(2) CHMIELEWSKI, A.G.; ILLER, E.; ZIMEK, Z.; LICKI, J. Laboratory and
industrial research installations for electron beam flue gas treatment. Proceedings of an
International Symposium on Isotopes and Radiation in Conservation of the Environment,
Karlsruhe, 9-13 March 1992, IAEA-SM-325/124, p. 81, 1992.
(3) DOI, T.; OSADA, Y.; MORISHIGE, A.; TOKUNAGA, O.; MIYATA, T.; HIROTA, K.;
NAKAYAMA, M.; MIYAJIMA, K.; BABA, S. Pilot plant for NOx, SO2 and HCl removal
from flue gas of municipal waste incinerator by electron beam irradiation. Radiat. Phys.
Chem., 42, 679-682, 1993.
(4) EBARA ENVIRONMENTAL CORPORATION. Final report for testing conducted on
the Ebara flue gas treatment system process demonstration unit at Indianapolis, Indiana.
Greensburg, PA, 1988.
(5) INTERNATIONAL ATOMIC ENERGY AGENCY. Electron beam processing of
combustion flue gases. Vienna, IAEA-TECDOC-428, 1987.
(6) NAMBA, H.; TOKUNAGA, O.; SATO, S.; KATO, Y.; TANAKA, T.; OGURA, Y.;
AOKI, S.; SUZUKI, R. Electron beam treatment of coal-fired flue gas. Proceedings of the
Third International Symposium on Advanced Nuclear Energy, JAERI, Tokyo, Japan, 118-122,
INIS-JP-005, 1991.
(7) PAUR, H.-R.; MAETZING, H. Electron beam induced purification of dilute off gases
from industrial processes and automobile tunnels. Radiat. Phys. Chem., 42, 719-722, 1993.
(8) POLI, D.C.R.; VIEIRA, J.M.; RIVELLI, V.; LAROCA, M.A.M. Estudo sobre o
tratamento de gases tóxicos SO2 e NOx provenientes de combustão de óleo ou carvão
Generalization: a neural net is able to respond to patterns on which it has not been
trained. In order to achieve this, an adequate choice of training examples covering the
range of interest is required;
Fast response: once trained, the response of an ANN is very fast, and it can be used even
for real-time applications;
Universal approximator: an ANN is able to approximate any continuous
function;
Natural noise elimination: this capability is due to the constructive characteristics of the net
itself;
Minimal knowledge of the process: neural controllers require minimal knowledge about
the mathematical model of the process. Intrinsic characteristics of the process are
automatically considered.
Seen from this perspective, neural nets facilitate the controller design task, since it is not
based upon the classical ways of developing controllers but, instead, on a training process,
which is started before the realization of the controller itself. For this purpose we only need to
know the input / output relations of the process to be controlled; that is, we need a previous
knowledge of some values that define a set of inputs and their respective outputs (this is necessary
for the so-called supervised learning). Each input / output pair corresponds to a training pair
or test vector, and the set of test vectors corresponds to a test pattern. The number of test vectors
needed in each pattern is proportional to the complexity of the process to be controlled. Practical
experience has shown that this number should be around 20 to 60 vectors.
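The terminology above can be sketched in code: a test pattern is just a list of training pairs (all values below are made up for illustration, not taken from the paper):

```python
# Each test vector is an (inputs, desired outputs) training pair;
# the whole list forms a test pattern (in practice ~20-60 vectors).
test_pattern = [
    ([0.10, 0.00], [0.12]),
    ([0.20, 0.05], [0.25]),
    ([0.40, 0.10], [0.48]),
]
```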
The inspiration for the creation of the so-called Artificial Neural Networks (ANNs)
comes from biological models and goes back to the 1940s. In spite of that, only in the last
decades has the interest in these connectionist models grown on a solid basis, due to a relatively
better understanding of real neural systems and to improvements in computer technology.
The development of new neuronal models and new training algorithms, besides the availability of
faster processors, contributed to popularizing the use of ANNs in the most diverse applications,
including control, signal processing, pattern recognition (along with voice, image and text
recognition), event prediction, and fault detection and diagnosis, among many others.
ANNs can be seen as estimators without a model, because they are universal
approximators of general functions, which permits the mapping of input vectors onto output
vectors without the need of a mathematical model.
2 - FUNDAMENTAL CONCEPTS OF NEURAL NETS
The ANNs are composed of elements that perform some of the elementary functions of the
biological neuron. Besides their superficial similarity to the brain's structure, these nets exhibit some
characteristics of the human brain, like, for instance, the capability of learning by experience.
ANNs can modify their behavior in response to their environment. This fact, more than any
other, is responsible for the interest they are receiving. It is said then that a neural net can learn,
requiring for that a variety of training algorithms, each one with its strengths and weaknesses. For
each particular application, an appropriate model and learning algorithm should be chosen.
The ability to learn by training brings to the net a certain degree of unpredictability;
that is, the result will be as close to the expected one as the quality of the training process
allows. This depends fundamentally on a good choice of test patterns, which must represent the
problem to be solved in a satisfactory way, and on the choice of an efficient learning algorithm.
As a consequence, there is always a certain error and a certain probability of correctness
associated with a net output.
A net is trained so that the application of a set of inputs produces a desirable (or at least
consistent) set of outputs. The training is done through the sequential application of input vectors
while the weights of the net are adjusted according to a pre-defined procedure. The training can be
supervised or unsupervised. Supervised training requires an input vector associated with a
desired output vector (the training pair). In unsupervised learning, the training set consists
only of input vectors. In this case, the net weights are adjusted in order to produce a consistent
output vector; that is, the application of any input vector sufficiently similar to one of the training
vectors will produce the same output.
In applications related to control, the multilayered neural networks are attracting the
greatest interest of the researchers.
In these nets, the neurons are totally interconnected with the neurons of an adjacent layer. In this
kind of topology, the nets are composed of an input layer, one or more hidden layers and one
output layer. The neurons of the hidden layers perform the modeling of nonlinear functions and
serve also as noise and drift suppressors. The training of the net consists of adjusting the weights
of the various layers.
Neural nets can be used to control complex and nonlinear systems. They have high noise
immunity and can be used to implement adaptive controllers.
Adaptive control techniques have been developed basically for processes that work under
unexpected or hardly predictable conditions, which are difficult to include in the models.
The classical adaptive control techniques fail every time we don't have a complete
knowledge of the mathematical model of the process or when we don't take into consideration some
uncertainties or complexities of the system (which is the case in most practical applications). The
use of neural controllers is interesting precisely in these cases.
2.1 - THE NEURON MODEL
McCulloch and Pitts [4] proposed a simplified model for the biological neuron. Their model
is based on the fact that, at a certain moment in time, the neuron is either firing or inactive, which
gives it a discrete and binary behavior.
There are excitatory and inhibitory connections in these neurons, represented through a
signed weight, which reinforces or hampers the generation of an output impulse. A
neuron i produces one impulse, that is, one output o_i = 1, if and only if the weighted sum of its
inputs is greater than or equal to a certain threshold. Equation 1 defines the output function of the
McCulloch & Pitts neuron:

o_i = 1 if Σ_j w_ij · i_j ≥ θ_i, and o_i = 0 otherwise    (1)

where w_ij is the weight of the connection associated with the input i_j, and θ_i is the
threshold of the neuron i.
Starting from the model proposed by McCulloch & Pitts, many other models that permit
the production of any output, not necessarily 0 or 1, have been derived. Many different
definitions of the activation function have also appeared. Figure 1 shows four of these activation
functions, namely: the linear function, the ramp function, the step function and the sigmoidal function.
Figure 1. Activation functions: (a) linear, (b) ramp, (c) step, (d) sigmoidal.
The sigmoidal function, also known as the S-shape function, illustrated in figure 1.d, is a
semilinear, bounded and monotonic function. It is possible to define many sigmoidal functions. One
of the most important is the logistic function, defined in equation 2:

f(x) = 1 / (1 + e^(-x))    (2)
Figure 2. Model of an artificial neuron: the inputs (x1, x2, x3, ...) are weighted, summed, and passed through the activation function.
Basically, the neuron corresponds to a weighted sum of the inputs, over which the
activation function is applied. In this work, the sigmoidal activation function presented in equation
2 has been used.
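A minimal sketch of the neuron just described, combining the McCulloch & Pitts output function (equation 1) with the logistic activation (equation 2); the function names are ours:

```python
import math

def mcculloch_pitts(inputs, weights, threshold):
    # Equation 1: fire (output 1) iff the weighted sum of the inputs
    # reaches the threshold, otherwise output 0
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0

def logistic(x):
    # Equation 2: f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights):
    # Weighted sum of the inputs followed by the sigmoidal activation
    return logistic(sum(w * x for w, x in zip(weights, inputs)))
```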
3 - IMPLEMENTATION OF THE NEURAL CONTROL ENVIRONMENT
The environment we describe here works with multilayered neural nets to perform adaptive
control of complex processes, which can contain nonlinearities.
3.1 - TRAINING ALGORITHMS
In this work, two distinct training algorithms have been used: genetic algorithms and backpropagation.
3.1.1 - Genetic Algorithms
Genetic algorithms can be seen as generic, general-purpose algorithms for optimal solution
search. Their working mechanisms are similar to the mechanisms that rule the evolution of
populations of living beings.
In this algorithm we initially generate "n" sets of weights, and each set is called a
chromosome. The set of test vectors used for training is then applied to the net using each
chromosome. For each chromosome, the resulting average quadratic error is stored. The first
operation over the chromosomes is the so-called elitization, in which the 25% worst chromosomes
are eliminated and the 25% best chromosomes are duplicated. The total number of chromosomes
remains the same, because the remaining 50% are kept unchanged.
In this work, the phase called crossover has been eliminated, for it does not enlarge the
search space in a significant way and has a high associated computational cost. This phase would
correspond to the exchange of weight values between positions in the same chromosome. The
choice of positions is made randomly. There is no rule to define the number of exchanges to be
executed.
The next phase is called mutation and consists of randomly substituting some
of the values inside the matrix composed of all chromosomes. The mutation rate is an arbitrary
parameter. This process inserts new information into the population, which is desirable, because there
is no guarantee that the solution lies inside the universe of weights being considered. Because it is
random, the mutation can also destroy a good chromosome before it can be duplicated. The
practical work has shown that a too-high mutation rate causes oscillations in the error values.
Once the mutation phase is completed, the elitization / mutation process is started over
again and again, until the expected error value or the specified number of iterations is reached.
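The elitization / mutation cycle described above can be sketched as follows (a simplified illustration, not the environment's actual code; the function names and the mutation weight range are our assumptions):

```python
import random

def evolve(population, error_fn, mutation_rate=0.05, weight_range=1.0):
    """One elitization + mutation cycle over a population of chromosomes
    (each chromosome is a list of net weights)."""
    n = len(population)
    quarter = max(1, n // 4)
    ranked = sorted(population, key=error_fn)   # best (lowest error) first
    # Elitization: eliminate the 25% worst, duplicate the 25% best,
    # keep the remaining 50% unchanged
    new_pop = [list(c) for c in ranked[:quarter]] + ranked[:n - quarter]
    # Mutation: randomly substitute some weights, inserting new
    # information into the population (crossover is skipped, as in the text)
    for chrom in new_pop:
        for j in range(len(chrom)):
            if random.random() < mutation_rate:
                chrom[j] = random.uniform(-weight_range, weight_range)
    return new_pop
```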
In the environment we describe here, the alteration of the weights can be done in a
"controlled" or in an "elitized" way. This permits us to by-pass in an effective way the biggest
difficulty associated with this algorithm, which is the divergence of the error when it reaches
relatively small values, mainly when the mutation rate is high. In the "controlled" form, the
alteration of weights is only executed if the new value presents a smaller quadratic average error
than the former value. In the "elitized" form, the alteration of weights happens only over the "bad"
chromosomes. In both situations the work of the user is facilitated, since a way to control the non-convergence
problem is given. The main advantage of using the convergence control process is the
automatic search for a solution without the need of continuous supervision by the user.
3.1.2 - Backpropagation Algorithm
This method follows an iterative model whose goal is the reduction of the average quadratic
error between the desired and obtained output values for each training pair (supervised learning).
The error found is then back-propagated from the output to the input and the weights of each
network layer are readjusted according to a well-defined rule.
During the training process, a factor called the learning rate is adjusted, which determines the
speed and stability of the convergence. The environment described here executes this adjustment
automatically.
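The rule described above can be sketched for a net with one hidden layer and logistic activations (a minimal illustration under our own naming, with a fixed learning rate rather than the environment's automatic adjustment):

```python
import math

def backprop_step(W1, W2, x, target, rate=0.5):
    """One backpropagation update for a one-hidden-layer sigmoidal net.
    W1: hidden-layer weights (rows = hidden neurons),
    W2: output-layer weights (rows = output neurons). Updated in place."""
    sig = lambda s: 1.0 / (1.0 + math.exp(-s))
    # Forward pass
    h = [sig(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    o = [sig(sum(w * hi for w, hi in zip(row, h))) for row in W2]
    # Output-layer deltas: error times derivative of the sigmoid
    d_out = [(t - oi) * oi * (1 - oi) for t, oi in zip(target, o)]
    # Back-propagate the error from the output to the hidden layer
    d_hid = [hi * (1 - hi) * sum(d_out[k] * W2[k][j] for k in range(len(W2)))
             for j, hi in enumerate(h)]
    # Readjust the weights of each layer (gradient descent)
    for k, row in enumerate(W2):
        for j in range(len(row)):
            row[j] += rate * d_out[k] * h[j]
    for j, row in enumerate(W1):
        for i in range(len(row)):
            row[i] += rate * d_hid[j] * x[i]
    return o
```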
3.1.3 - Coexistence of the algorithms
The backpropagation training algorithm, although widely used, presents certain
drawbacks, since the solution may not converge to the desired minimum if the solution space is too
convoluted. This comes from the fact that this algorithm is highly affected by local minima, which
can delay or even stop the process of reaching a globally optimal solution. This problem can be by-passed
by using the genetic algorithm, especially in the initial phase of the training. The genetic
algorithm is not affected by local minima because it does not rely on gradient minimization.
However, when the error becomes relatively small, the convergence of the genetic algorithm
becomes critical and slow. There is no error evolution granularity defined in the genetic algorithm.
What is meant is that the search for an optimal solution in this algorithm is done in a non-continuous
form; that is, there can be a period in which the error value practically doesn't improve
and then, in the next iteration, the error can "sink" abruptly to the desired value, ending the
training phase. For all those reasons, it is interesting to make a composition of algorithms, where
the genetic algorithm is used at the beginning in order to initialize the weights of the net and then
backpropagation is used to get the desired, more generic, solution.
One way to partially by-pass the problem of local minima in the backpropagation algorithm
is to induce a controlled randomization of the weight values every time they meet a local minimum
and the convergence stops. This solution is, in most cases, enough to solve non-complex
problems and has the advantage of being simpler to implement than genetic algorithms. However,
this solution offers no guarantee that we are not going to fall into another local minimum, in which
case we need to start the whole randomization process again. In the environment we describe
here, a maximal number of automatic randomizations has been defined and the training process can
be interrupted after some steps if an optimal solution has not been reached.
The number of iterations needed to reach the best solution is appreciably smaller for the
genetic algorithm. However, the computational effort required by this algorithm is
larger. Thus, a compromise between the two algorithms must be established. The
environment permits the user to define the number of iterations after which a migration from one
algorithm to the other will occur.
3.2 - PRE-TRAINING
A practical problem one meets in the implementation of a neural controller which is going
to control a real process is how to initialize it before connecting it to the process. In order to avoid
unexpected behavior of the system (controller + process), the neural controller must first learn the
real dynamics of the process (observe that the net starts with randomized weights, so the input
/ output relation is unknown and random).
The solution we adopted was to generate initially a set of test vectors obtained
through an assay of the process in open loop, considering that the controller is in a rest
condition or steady state. Knowing the process input values and their relation to
the output values, we can create a set of vectors, which we call "true-vectors", and use them in a
"pre-training" phase of the neural controller. Through these vectors we can automatically make
inferences about the system's response time to an external disturbance, obtaining in this way a
parameter that is similar in behavior to the derivative action of a classical PID controller.
The "true-vectors" are also used in the adaptive training phase, working as a kind of
"anchor" that hinders undesirable behavior caused by the introduction of bad test vectors. This can
occur due to an inadequate choice of the method for the dynamic acquisition of new test vectors.
Due to the form in which the "true-vectors" are acquired, all the imperfections of the
process are automatically considered, including, for instance, transducer inaccuracies,
nonlinearities in the actuators or in the process itself.
If it's not possible to execute the assay in open loop, the true-vectors will need to be edited
directly by an expert. Once the preliminary controller is implemented, its performance can be
improved subsequently, through an on-line acquisition of new test vectors.
3.3 - OFF-LINE TRAINING AND ON-LINE ACQUISITION
The integrated neural control environment has all the necessary tools for automatic on-line
test vector acquisition. This enables the automatic consideration of all the dynamic
characteristics of the process and the implementation of adaptive controllers.
A quite significant point in this environment is the possibility of performing off-line
training and on-line test vector acquisition. This ability of the environment makes it possible to give the
controller an adaptive capability. For instance, if a transducer coupled to the system starts to
present an error in its output after a certain time of good operation and after the implementation
of the neural controller, it would be desirable that the controller recognize this change and adjust
itself in order to compensate for the error. In order to accomplish this, the environment should read
new vectors at regular time intervals and use them to re-train the net. However, we can't
increment the number of test vectors indiscriminately. One of the reasons is the limitation of the
computer's memory, and another is the consequent increase of the training time. Neither can we
detect which vector should be altered, since it belonged to a former set of vectors, defined before the
transducer changed its behavior. The solution of changing all the vectors should also not be used
indiscriminately, for it would be as if the net had lost its memory.
The environment offers a set of options that, when correctly combined, make it possible to
by-pass all these problems.
In the environment, it is possible to set and vary the immediate reference value in order to
facilitate the on-line vector acquisition. The reference value of the neural controller can be varied
manually, with pre- and post-triggering, or automatically, with pre-defined boundaries and a
controlled number of repetitions. The reference can also be programmed to follow a set of pre-defined
values from a vector with a controlled number of repetitions. These repetitions make it possible to
establish a flat stretch of values, which is very significant due to the inherent delays associated with
the process dynamics.
The alteration of the test vectors can be executed in blocks or in total, making it possible to
establish a smoothness criterion for the adaptation.
It is not desirable to have equal or very similar test vectors. During the process of
acquisition of new test vectors, the environment can be programmed to filter out test vectors that
are similar to the ones we already have. The similarity criterion can be adjusted by the user.
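The filtering just described might look like this (a sketch; the tolerance parameter stands in for the user-adjustable similarity criterion, and the names are ours):

```python
def is_similar(a, b, tol=0.05):
    # Two vectors are "similar" when every component differs by less than tol
    return all(abs(x - y) < tol for x, y in zip(a, b))

def acquire_vector(new_vector, test_vectors, tol=0.05):
    # Keep a newly acquired test vector only if no stored vector is similar
    if any(is_similar(new_vector, v, tol) for v in test_vectors):
        return False              # filtered out as redundant
    test_vectors.append(new_vector)
    return True
```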
If the controller stays a long time at the same operation point or region, it could happen
that it "overlearns" this point or region and "forgets" the expected behavior in the others, due to
the continuous acquisition of new test vectors in this place. In order to solve this problem, an
option has been implemented in the environment that only enables the substitution of a test vector
if the present reference value is in the neighborhood of the reference value of some test vector.
Another option implemented in the environment is concerned with the condition for starting
a new training cycle based on the newly acquired test vectors. This process can be activated by the
user or through an external trigger. The external trigger is concerned with a determined variation of
a previously defined input. If the change in the value of this input overshoots a certain
percentage, the acquisition of new test vectors and the corresponding training process are
automatically started.
When the process under study is very complex or presents many nonlinearities, the neural
net must be proportionally more complex, in order to execute the control effectively. In the
integrated neural control environment, it is possible to decompose the main problem into many
sub-problems, with a different neural net responsible for each one. That means that we can define a
different neural controller dedicated to each operation region of the process. The advantage of this
strategy is that the dedicated nets are significantly smaller and simpler (and consequently faster)
than the net we would need for the whole range of operation of the process. The switching time
between nets is very short in comparison to the response time of the net itself and is completely
transparent to the rest of the system. In order to accomplish a smooth transition during switching,
it is possible to execute a superposition of the nets.
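The decomposition into dedicated controllers per operation region can be sketched as a simple dispatch (illustrative only; region bounds and net objects are placeholders of our own):

```python
def select_net(reference, regions):
    """regions: list of ((low, high), net) pairs, where 'net' is the
    dedicated controller for that operation region of the process."""
    for (low, high), net in regions:
        if low <= reference <= high:
            return net
    raise ValueError("reference outside every defined operation region")
```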
The environment also enables on-screen visualization of all the acquired data, in
order to supervise the training and control tasks.
3.4 - ADOPTED NEURAL CONTROL MODELS
In the specialized literature we can find alternative forms of implementing neural
controllers. Some of these make use of the mathematical model of the process, and the neural nets
are used only for their adaptive capabilities.
In the present work we adopted the strategy of implementing a pure neural controller,
where a previous knowledge of the mathematical model of the plant is not required. We assumed
it is enough to have an approximate idea of the order of the involved mathematical model, which
is also not imperative. The knowledge of the model's order helps only to reduce the training time
until a satisfactory solution is found.
Two neural control models have been adopted: the so called direct control and the indirect
control, described hereinafter.
3.4.1 - Direct Control
In this model, only the neural controller directly controls the process.
The main difficulty in the implementation of this method appears in the neural controller
training phase. The problem is how to know what output value the controller should have for each
variation of its input. The error in the process output, relative to the reference value, should be
compensated by the neural controller in a similar way as if we were using a classical PID
controller. The system's inner dynamics (delays) should be respected.
The desired controller output value for a given input depends not only on the value of the
input itself but also on the preceding state of the system. This means that the input / output
mapping is not a trivial task.
In order to avoid having to know beforehand the mathematical model of the plant
and still be able to extrapolate the controller's instant output value for a given input condition, we
opted for using a factor that changes the present controller output value in the direction to which
the system's error (between reference and process output) points. This factor, which we call
gain, can be linear or exponential and is basically added to the present output value of the net,
obtaining in this way the goal value for the training.
A linear gain basically adds to the present controller's output value a factor that is given by
the output value itself multiplied by the system's instant error and by an adjustment's speed factor.
target = actual_output + actual_output * instant_error * linear_gain_factor    (3)
An exponential gain multiplies or divides the present controller output value by a factor
given by a pre-defined number raised to the system's instant error:

target = actual_output * exponential_gain_factor ^ instant_error    (4)
The exponential gain has the characteristic of speeding up the adjustment when the
system's error is large.
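Equations 3 and 4 translate directly into code (a sketch with our own argument names):

```python
def linear_target(actual_output, instant_error, linear_gain_factor):
    # Equation 3: shift the present output in the direction of the error
    return actual_output + actual_output * instant_error * linear_gain_factor

def exponential_target(actual_output, instant_error, exponential_gain_factor):
    # Equation 4: multiply (or, for negative errors, effectively divide)
    # the output by the gain factor raised to the instant error
    return actual_output * exponential_gain_factor ** instant_error
```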
3.4.2 - Indirect Control
Another model supported by the environment is the so-called indirect control. In this
model, two neural nets are used: one is the identifier net, whose task is to represent the behavior
of the system and which is used in the training of the second net; the other is the controller net,
which plays the role of the process controller itself. The environment provides all the necessary
conditions to implement both nets.
Initially the identifier net is trained, using the environment tools, in order to obtain the
training vectors needed to map the input / output behavior of the process under study. After the
training is finished, the behavior of the identifier net can be compared in real time to the behavior
of the process, checking in this way the results of the training.
The next step is the training of the controller net.
At this point we can train the controller net off-line, that is, without a connection to the
process, or we can accomplish an on-line training, in connection with the process. In the second
case, we have to start the training using the already mentioned "true-vectors".
The first option (off-line) enables a preliminary training without influence over the process
itself and is recommendable when the closed-loop behavior of the system is not known. In this
option, a set of additional vectors, used to complete the matrix of values used during training, is
obtained from these true-vectors through a list of reference values. This can be seen as an off-line
test vector acquisition.
In the majority of cases, we can directly start using the second option (on-line), due
mainly to the safety given by the true-vectors. In this case, it is suggested to use enough
true-vectors to cover the dynamic band of the reference. Practical experience has shown
that, in most cases, 20 true-vectors are enough. In this way it is assured that, at the
beginning of the dynamic vector acquisition process, the controller will respond to the commands
in a way that can't damage the process under control and that variations in the
reference are accepted by the controller. The idea is to start the training using only the true-vectors and afterwards improve it through the dynamic acquisition of new vectors.
One important point here concerns the adaptive behavior. During normal
operation, the weight values of the identifier net are not changed, only the weights of the
controller net. It is usually not necessary to have two adaptive structures in series. The reason for
the existence of the identifier net is to back-propagate the system's global error (reference - process output) in order to keep it available at the output of the controller net during the training.
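On a toy linear plant, the role of the frozen identifier net in carrying the global error back to the controller can be sketched as follows (illustrative only; the identifier is reduced to a scalar gain and the controller to a single weight, which is not how the actual nets are built):

```python
def indirect_step(w_c, identifier_gain, reference, rate=0.1):
    """One training step of the controller in the indirect scheme.
    Only the controller weight w_c adapts; the identifier stays frozen
    and only serves to propagate the global error back to the controller."""
    u = w_c * reference                   # controller output (control action)
    y = identifier_gain * u               # identifier's prediction of the process
    global_error = reference - y          # reference - process output
    # Chain rule through the frozen identifier: error at the controller output
    error_at_u = global_error * identifier_gain
    w_c += rate * error_at_u * reference  # gradient step on the controller only
    return w_c, global_error

w = 0.0
for _ in range(100):
    w, err = indirect_step(w, 2.0, 1.0)
# w converges toward 0.5, so the predicted output matches the reference
```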
4 - TEST OF THE NEURAL CONTROL ENVIRONMENT
In order to test the functionality of the integrated neural control environment, it has been
used to control a pilot plant described hereinafter.
The tests were done using a 127 V AC, 1800 VA generator driven by a single-phase
motor.
The generator field was controlled through a hexaphase bridge made of SCRs. It is
important to point out that, besides the nonlinear behavior of the process itself, the hexaphase
bridge also has a nonlinear behavior, for it responds to the sine of the applied signal.
The tests were conducted by comparing the performance of the neural controller to a classical PID controller previously dimensioned for this process (generator + motor).
It has been observed that, after an initial training time, it is possible to achieve a more continuous control at the extremities of the controlled band with the neural controller. This is due to the strong nonlinearity of this process, mainly for low voltage values.
Another observation is that, during the training phase, the indirect model converges more promptly than the direct control model. On the other hand, the direct control model requires a smaller computational effort.
The success or failure in the practical implementation of a neural controller is intimately connected to the quality of the test vectors used in the net training. In the preliminary training we obtained a satisfactory result after 20,000 iterations using only the backpropagation algorithm, and this number was strongly reduced when genetic algorithms were combined with it.
The environment was implemented on an IBM PC 386DX with a 40 MHz clock frequency. For a neural controller with 5 neurons in the hidden layer, a training time of less than 10 minutes was required. In the subsequent trainings, since the net was already pre-trained, this time was reduced to a few seconds, depending on the number of vectors used.
At any moment we have one neural net controlling the process in real time and another being trained in parallel. Once the training of the parallel net is finished (and assuming it was successful), the weights of the controller net are quickly updated, without any interference in the process under control. After the updating of the weights, the training process can be finished or another training cycle can be automatically started.
5 - CONCLUSION
Surely the use of neural controllers is not restricted to generator voltage and speed control. It is important to point out that for each application there will be a more adequate structure, which will be identified after tests. We suggest starting with a very simple net, with a hidden layer of at least 3 neurons. Starting from the results obtained with it, we can increase the complexity of the net's inner structure. It is recommendable to use different nets specialized for each operating point or region of the process, in order to work only with small nets and also assure fast training and adequate real-time behavior.
One of the main motivations for the use of neural control resides in the fact that we can implement very complex controllers without deeper knowledge of specific control techniques. Before anything else, neural nets are universal approximators. We have to consider that, when a classical controller is implemented in the field, it is usually done in an environment where the received information (from transducers) already presents a certain amount of embedded error. In the case of neural controllers, as the information used to "design" them is obtained directly from the process, all errors are automatically considered, including those implicit in the process itself.
We can assert, in view of what has been said here, that neural control represents a good option for most control problems. In each situation it is important to analyze the convenience of using this technique or not. The main difficulty usually resides in the fact that, during the training phase, the user should have an adequate methodology to obtain the test vectors. It is precisely in this respect that the environment described here can reduce the user effort, providing integrated support for data acquisition and subsequent training.
6 - BIBLIOGRAPHY
[1] SILVA, L. E. Borges da; TORRES, G. Lambert; SATURNO, E. C.; SILVA, A. P. Alves da; OLIVER, G. "Neural Net Adaptive Schemes for DC Motor Drives". IEEE Industry Applications Society Conference, Toronto, October 1993.
[2] PAO, Yoh-Han. "Adaptive Pattern Recognition and Neural Networks". Addison-Wesley, Reading, MA, 1989.
[3] MASTERS, Timothy. "Practical Neural Network Recipes in C++". Academic Press, San Diego, CA, 1993.
[4] McCULLOCH, W. S.; PITTS, W. H. "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics, 5:115-133, 1943.
[5] KANATA, Yakichi; MAEDA, Yutaka. "Learning rule of neural networks for control". SICE, 777-789, 1994.
[6] BOSE, Bimal K. "Expert system, fuzzy logic, and neural network applications in power electronics and motion control". Proceedings of the IEEE, vol. 82, no. 8, 1303-1323, 1994.
[7] TAKAHASHI, Hiroki; AGUI, Takeshi; NAGAHASHI, Hiroshi. "Designing adaptive neural network architectures and their learning". Science of Artificial Neural Networks, SPIE vol. 1966, 208-215, 1993.
[8] SHEBLÉ, Gerald B.; MAIFELD, Timothy T. "Unit commitment by genetic algorithm and expert system". Electric Power Systems Research, 30, 115-121, 1994.
[9] WU, Q. H.; HOGG, B. W.; IRWIN, G. W. "A neural network regulator for turbogenerators". IEEE Transactions on Neural Networks, vol. 3, no. 1, 95-100, Jan. 1992.
[10] TORRES, Germano L. "Notas do curso de introdução às redes neuronais". EFEI, 1992.
[11] SEPEDA FILHO, Idmilson H.; STEMMER, Marcelo R. "Redes Neurais". Notas Internas LCMI/UFSC, Sept. 1993.
[12] RUMELHART, David E.; LEHR, Michael A.; WIDROW, Bernard. "Neural networks: applications in industry, business and science". Communications of the ACM, vol. 37, no. 3, 93-105, March 1994.
[13] DJUKANOVIC, M.; SOBAJIC, D. J.; PAO, Y. H. "Neural-net based determination of generator-shedding requirements in electric power systems". IEE Proceedings C, vol. 139, no. 5, 427-436, Sep. 1992.
[14] CAMPAGNA, David P.; KRAFT, L. Gordon. "A comparison between CMAC neural network control and two traditional adaptive control systems". IEEE Control Systems Magazine, 36-43, April 1990.
[15] ROY, Serge. "Near-optimal dynamic learning rate for training backpropagation neural networks". Science of Artificial Neural Networks, SPIE vol. 1966, 277-283, 1993.
[16] JANAKIRAMAN, J.; HONAVAR, V. "Adaptive learning rate for increasing learning speed in backpropagation networks". Science of Artificial Neural Networks II, SPIE vol. 1966, 225-235, 1993.
[17] CHANG, C. S.; SRINIVASAN, D.; LIEW, A. C. "A hybrid model for transient stability evaluation of interconnected longitudinal power systems using neural network/pattern recognition approach". IEEE Transactions on Power Systems, vol. 9, no. 1, 85-92, Feb. 1994.
[18] MISTRY, Sanjay I.; NAIR, Satish S. "Identification and control experiments using neural designs". IEEE Control Systems, 48-56, June 1994.
[19] VILLALOBOS, Leda; MERAT, Francis L. "Optimal learning capability assessment of multicategory neural nets". Science of Artificial Neural Networks II, SPIE vol. 1966, 384-395, 1993.
[20] ZHANG, Y.; CHEN, G. P.; MALIK, O. P.; HOPE, G. S. "An artificial neural network based adaptive power system stabilizer". IEEE Transactions on Energy Conversion, vol. 8, no. 1, 71-77, March 1993.
[21] WEERASOORIYA, S.; EL-SHARKAWI, M. A. "Laboratory implementation of a neural network trajectory controller for a dc motor". IEEE Transactions on Energy Conversion, vol. 8, no. 1, 107-113, March 1993.
[22] DJUKANOVIC, M.; SOBAJIC, D. J.; PAO, Y. H. "Preliminary results on neural net based simulation of synchronous machine dynamic response". Electric Power Systems Research, 25, 159-168, 1992.
[23] YANG, H. T.; HUANG, K. Y.; HUANG, C. L. "An artificial neural network based identification and control approach for the field-oriented induction motor". Electric Power Systems Research, 30, 35-45, 1994.
In data analysis, objects are considered which are described by some attributes. Objects can be, for example, persons, things (machines, products, ...), time series, sensor signals, process states, and so on. The specific values of the attributes are the data to be analysed. The overall goal is to find structure (information) in these data. This can be achieved by classifying the huge amount of data into relatively few classes of similar objects. This leads to a complexity reduction in the considered application, which allows for improved decisions based on the gained information. Figure 1 shows the process of data analysis described so far, which can be separated into feature analysis, classifier design, and classification.
[Figure 1: The process of data analysis — feature determination (numerical object data and pair-relation data, obtained from sensors or humans), feature analysis (pre-processing, extraction, 2-D display), classifier design, and classification (identification, estimation, prediction, assessment, control).]
The process of data analysis described so far is not necessarily connected with fuzzy concepts. If, however, either features or classes are fuzzy, the use of fuzzy approaches is desirable. In Figure 1, for example, objects, features, and classes are considered. Both features and classes can be represented in crisp or fuzzy terms. An object is said to be fuzzy if at least one of its features is fuzzy. This leads to the following four cases [13]:
crisp objects and crisp classes
crisp objects and fuzzy classes
fuzzy objects and crisp classes
fuzzy objects and fuzzy classes
In chapter 3 methods and a tool are described which can be used to solve data analysis
problems falling into the latter three cases. Chapter 4 contains two industrial applications
where crisp objects and fuzzy classes are considered.
In the literature, many different algorithmic methods for data analysis have been suggested [5], [10]. One of the most frequently used cluster algorithms, which has been applied very extensively so far, is the Fuzzy c-means (FCM) [2]. This algorithm assigns objects, which are described by several features, to fuzzy classes. Objects belong to these classes with different degrees of membership. Here no explicitly formulated expert knowledge is required for the task of data analysis.
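As a concrete illustration of the FCM idea, here is a minimal sketch of the standard Bezdek update equations (centers as membership-weighted means, memberships from inverse relative distances). It is illustrative only, not DataEngine's implementation:

```python
import math
import random

def fuzzy_c_means(points, c, m=2.0, iters=50, seed=0):
    """Minimal fuzzy c-means: alternate center and membership updates."""
    rng = random.Random(seed)
    n, dim = len(points), len(points[0])
    # random initial memberships, each row normalised to sum to 1
    U = []
    for _ in range(n):
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    for _ in range(iters):
        # centers: means weighted by u_ij ** m
        centers = []
        for j in range(c):
            w = [U[i][j] ** m for i in range(n)]
            tot = sum(w)
            centers.append([sum(w[i] * points[i][d] for i in range(n)) / tot
                            for d in range(dim)])
        # memberships: inverse relative distances to the centers
        for i in range(n):
            dists = [max(math.dist(points[i], v), 1e-12) for v in centers]
            for j in range(c):
                U[i][j] = 1.0 / sum((dists[j] / dists[k]) ** (2.0 / (m - 1.0))
                                    for k in range(c))
    return centers, U
```

Each object then belongs to every class with a degree of membership, and the memberships of one object sum to 1 over the classes.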
If an expert has some knowledge about the analysis of data (as, for example, in the area of diagnosis), this knowledge should be used for the evaluation. Then knowledge-based methods for fuzzy data analysis are suitable [14]. This class is similar to the approach taken in fuzzy control systems, where fuzzy If-Then rules are formulated and a process of fuzzification, inference, and defuzzification leads to the final decision [15]. The automatic construction of such systems can be supported by fuzzy techniques from the area of machine learning; see e.g. [11].
If an expert cannot describe his knowledge explicitly but is able to deliver some examples of "correct decisions" which contain the expert knowledge implicitly, a neural network can be trained with these training examples; see e.g. [8].
3.1.3 New developments of methods for data analysis
Recently, many research efforts have been directed towards the combination of different intelligent techniques. Here the elaboration of neuro-fuzzy systems is one cornerstone for the future development of intelligent machines; see e.g. [6]. One of these methods is a fuzzy version of Kohonen's network [3].
It is expected that in the near future the areas of fuzzy technology, neural networks, and genetic algorithms will be combined to a higher degree. Especially for data analysis the combination of these methods could give promising results.
3.2 DataEngine - A Software Tool for Data Analysis
DataEngine is a software tool that contains the methods for data analysis described above. Especially the combination of signal processing, statistical analysis, and intelligent systems for classifier design and classification leads to a powerful software tool which can be used in a very broad range of applications.
[Figure 2: DataEngine input/output — input from file, serial port, or data acquisition boards via a data editor; output to file, serial port, printer, or 2-D graphics.]
A graphical user interface facilitates the application of data analysis methods. In general, applications of that kind are performed in the following three steps:
3.2.1 Modelling of a specific application with DataEngine
Each sub-task in an overall data analysis application is represented by a so-called function block in DataEngine (see Figure 3). Such function blocks represent software modules which are specified by their input interfaces, output interfaces, and their function. Examples are a certain filter method or a specific cluster algorithm. Function blocks can also be hardware modules like neural network accelerator boards. This leads to very high performance in time-critical applications.
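A function-block pipeline of this kind can be sketched in a few lines. The names (`FunctionBlock`, `run_pipeline`) and the example blocks are illustrative, not DataEngine's actual API:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class FunctionBlock:
    """A module specified by its function; its input/output interfaces
    are simply the argument it accepts and the value it returns."""
    name: str
    func: Callable[[Any], Any]

def run_pipeline(blocks: List[FunctionBlock], data: Any) -> Any:
    """Feed the output of each block into the next one."""
    for block in blocks:
        data = block.func(data)
    return data

# example: a filter block followed by a classifier block
smooth = FunctionBlock(
    "moving-average filter",
    lambda xs: [sum(xs[max(0, i - 1):i + 2]) / len(xs[max(0, i - 1):i + 2])
                for i in range(len(xs))])
classify = FunctionBlock(
    "threshold classifier",
    lambda xs: ["high" if x > 1.0 else "low" for x in xs])

labels = run_pipeline([smooth, classify], [0.0, 0.2, 3.0, 3.2])
```

A hardware accelerator board would simply be another `FunctionBlock` whose `func` delegates to the device, which is what makes the scheme attractive for time-critical applications.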
3.2.2 Classifier design (off-line data analysis)
After having modeled the application in DataEngine, off-line analysis has to be performed with given data sets to design the classifier. This task is done without process integration.
3.2.3 Classification
Once the classifier design is finished, the classification of new objects can be executed.
Depending on the specific requirements this step can be performed in an on-line or off-line
mode. If data analysis is used for decision support (e.g. in diagnosis or evaluation tasks)
objects are classified off-line. Data analysis could also be applied to process monitoring and
other problems where on-line classification is crucial. In such cases, direct process
integration is possible by configuration of function blocks for hardware interfaces.
steel and hardness-based temperature determination for lifetime prediction are the two examples. It should be noted that the models reported were calculated with a purely data-based procedure, exploiting the possibilities of the advanced clustering methods available in DataEngine.
4.1 Material Low Cycle Fatigue behaviour modelling
Data from a material properties database on a 1CrMoV rotor steel have been extracted for the analysis [16]. The first step performed was to see whether it was possible to reconstruct the usual LCF curves using only numerical methods. These curves are characterised by two different regions: the first with higher values of strain range, the second with lower values of strain range. Variations of the strain range in the first region are less likely to strongly affect the number of life cycles; this effect is more evident in the second one.
A possible approach is to adopt a clustering method to find the regions and to detect possible spurious measurements that do not clearly belong to any of the clusters. This type of evaluation can indicate the presence of noisy points and whether their number and characteristics justify an additional investigation.
[Figure: LCF measurement data for the 1CrMoV rotor steel — strain range (%) versus endurance (cycles, 10 to 100000).]
[Figure: Clustering of the 1CrMoV LCF data into two clusters — strain range (%) versus endurance (cycles).]
clusters can be made, i.e. considering only the data points belonging to the cluster prototypes with a membership higher than a fixed threshold (alpha value). In this way the local regression analysis is performed only over the data points with a high similarity to the model.
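This alpha-cut selection is straightforward to express. A sketch, assuming the membership matrix comes from a fuzzy clustering step such as FCM:

```python
def alpha_cut_points(points, memberships, cluster, alpha=0.8):
    """Keep only the points whose membership in `cluster` exceeds alpha,
    so that a local regression sees only data highly similar to that model."""
    return [p for p, u in zip(points, memberships) if u[cluster] > alpha]
```

Raising `alpha` tightens the local data set around each cluster prototype; points below the threshold in every cluster are the candidate spurious measurements.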
[Figure: Local Lorentzian fits with 95% confidence bands for the first and second clusters — strain range (%) versus endurance (cycles).]
indicated with P. The expression of the Sherby-Dorn parameter is P = log t - C/T, where t is the time in hours and T is the temperature in Kelvin. The two derived expressions will be used in the paper:

T = C / (log t - P)    (1)

t = 10^(P + C/T)    (2)

Hardness measures are used to estimate the temperature, and from the temperature the remaining lifetime.
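Assuming the Sherby-Dorn forms P = log t - C/T, T = C/(log t - P) and t = 10^(P + C/T), the hardness-to-lifetime chain can be sketched as follows; the constant C is illustrative, not a value from the paper:

```python
import math

C = 20000.0  # illustrative material constant, not taken from the paper

def sherby_dorn(t_hours, T_kelvin):
    """P = log t - C/T."""
    return math.log10(t_hours) - C / T_kelvin

def temperature_from(P, t_hours):
    """Equation (1): T = C / (log t - P)."""
    return C / (math.log10(t_hours) - P)

def time_from(P, T_kelvin):
    """Equation (2): t = 10 ** (P + C/T)."""
    return 10.0 ** (P + C / T_kelvin)
```

A temperature estimated from hardness (via P) feeds equation (2) to give the time; an underestimated temperature shifts that estimate non-conservatively, as discussed below.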
4.2.1 2¼Cr1Mo steel analysis
The material under consideration had the following composition (wt %):

C     Si    S      P      Mn   Ni     Cr    Mo    V     W
0.14  0.14  0.011  0.012  0.5  trace  2.56  1.04  0.04  < 0.05
The data, coming from hardness measures after different time slots at fixed temperatures (varying from 550 to 750 degrees), are reported in Figure 8. For this 2¼Cr1Mo steel, comparisons are made with an approximation proposed in the European SPRINT 249 project guideline [20]. The set of data has been processed using a fuzzy c-means algorithm. Two different tests have been performed: one assuming the presence of three clusters and the second assuming the presence of four clusters. The number of clusters is assumed taking into consideration the possible material behaviour and through an evaluation of the results obtained from the numerical procedure. The detected regions have been approximated using local regression models. These models were then fused together to reach a unified model for comparison purposes. The best prediction results have been obtained using the four-cluster subdivision. The results related to four clusters are shown in Figure 9, where the obtained global model is reported together with the plot of the equation suggested by SP249, an exponential function built up starting from mechanistic assumptions and constraints (like the two asymptotes for H values of 180 and 115).
[Figure 8: Hardness measures for the 2¼Cr1Mo steel plotted against the Sherby-Dorn parameter (-17.5 to -13.5).]
In this case the material behaviour is more regular, and this is reflected in the success of different methods of approximation. Nonetheless, the question of the underestimation of temperature starting from hardness values remains open: the methods illustrated for this kind of steel (including the SP249 guideline) do not have a coherent conservative response; they can lead to more conservative or less conservative temperature estimations.
An underestimation can be responsible for dangerous non-conservative evaluations of the remaining lifetime (equation 2).
This problem comes from the best-fit approach used. Different approximations of the material behaviour should be adopted to take into account the use that will be made of the models. Lower-bound approximations or interval-analysis-based regression models have to be adopted in this case.
[Figure 9: Global four-cluster regression model for Steel E (2¼Cr1Mo) compared with the SP249 guideline — hardness (120-180) versus Sherby-Dorn parameter (-17 to -13).]
5. Conclusions
Data analysis has large potential for industrial applications. It can lead to the automation of tasks which are too complex or too ill-defined to be solved satisfactorily with conventional techniques. This can result in the reduction of cost, time, and energy, which also improves environmental criteria.
In contrast to fuzzy controllers, where the behaviour of the controlled system can be observed and therefore the performance of the controller can be stated immediately, many applications of methods for data analysis have in common that it will take some time to exactly quantify their influences.
The applications reported show how the cited methods can be successfully introduced in the
field of material properties analysis. At MPA Stuttgart a research effort is currently under
way to exploit the possibilities of advanced data analysis.
The authors believe that linking the software package with the available material databases can bring new insight into many difficult material analysis problems.
6. References
[1] H. Bandemer, W. Näther, Fuzzy Data Analysis (Kluwer, Dordrecht, 1992).
[2] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms (Plenum
Press, New York, 1981).
[3] J.C. Bezdek, E. C.-K. Tsao, N.R. Pal, Fuzzy Kohonen Clustering Networks, in: IEEE
International Conference on Fuzzy Systems (San Diego, 1992) 1035-1043.
[4] J.C. Bezdek, S.K. Pal, Eds., Fuzzy Models for Pattern Recognition (IEEE Press, New York, 1992).
[5] A. Kandel, Fuzzy Techniques in Pattern Recognition. (John Wiley & Sons, New
York, 1982).
[6] B. Kosko, Neural Networks and Fuzzy Systems. (Prentice-Hall, Englewood Cliffs,
1992).
[7] R. Krishnapuram, J. Lee, Fuzzy-Set-Based Hierarchical Networks for Information
Fusion in Computer Vision. Neural Networks 5 (1992) 335-350.
[8] Y.-H. Pao, Adaptive Pattern Recognition and Neural Networks. (Addison-Wesley,
Reading, Mass., 1989).
[9] R. Schalkoff, Pattern Recognition Statistical, Structural and Neural Approaches.
(John Wiley & Sons, New York, 1992).
[10] J. Watada, Methods for Fuzzy Classification. Japanese Journal of Fuzzy Theory and
Systems 4 (1992) 149-163.
[11] R. Weber, Fuzzy-ID3: A Class of Methods for Automatic Knowledge Acquisition. Proceedings of the 2nd International Conference on Fuzzy Logic & Neural Networks (Iizuka, Japan, July 1992) 265-268.
[12] S.M. Weiss, C.A. Kulikowski, Computer Systems That Learn. (Morgan Kaufmann, San Mateo, 1991).
[13] H.-J. Zimmermann, Fuzzy Sets in Pattern Recognition, in: P.A. Devijver, J. Kittler, Eds., Pattern Recognition Theory and Applications (Springer-Verlag, Berlin, 1987) 383-391.
[14] H.-J. Zimmermann, Fuzzy Sets, Decision Making, and Expert Systems. (Kluwer,
Boston, 1987).
[15] Zimmermann H.-J. (1991) Fuzzy Set Theory and Its Applications (2nd Edition),
Kluwer Academic Publishers, Boston, Dordrecht
[16] S.R. Holdsworth (1994) BRITE-EURAM C-FAT Project BE 5245: KBS-aided Prediction of Crack Initiation and Early Crack Growth Behaviour Under Complex Creep.
ABSTRACT
In this article a procedure for obtaining part families using fuzzy logic is described. It permits taking into account the uncertainties and ambiguities usually present in manufacturing. Aspects of the database, which can aggregate design and manufacturing information, are presented. How to assign memberships to the features that will be analysed, the similarity analysis leading to the resemblance relation, and part processing information are also presented. This procedure makes possible the development of a software tool that will be of interest for the manufacture of small lots: it integrates design and manufacturing information and makes possible a rationalization of resources. Finally, the article describes the use of fuzzy backward reasoning for the classification of new parts into established families. This approach is an interesting application of group technology.
KEYWORDS
Group Technology, part family, fuzzy logic, membership attribution, fuzzy backward
reasoning
1 - INTRODUCTION
Group Technology (GT) is a philosophy which tries to analyze and arrange parts and manufacturing processes according to their design and manufacturing similarities [2] [5] [6]. Families are then established to make it possible to rationalize the manufacturing processes, or to reduce the number of drawings in the design department.
Most papers about part family formation assume that information about cost, processing time, part demand, etc., is accurate. It is usually supposed that a part belongs to only one family. Nevertheless, in many cases this does not occur. Grouping analysis using fuzzy logic can provide a solution to this problem. However, few articles have been published dealing with the problem of uncertainty, the formation of manufacturing cells, and part families. Likewise, those articles consider such questions in isolation, neglecting the development of methods to be shared by all company users [7].
Part similarity, which is the basic aspect of family formation, consists of a close classification in geometry, function, material and/or process. It may not be sufficient to describe part features using yes or no labels when accurate classification is required [7]. In order to obtain an efficient and flexible classification which considers uncertainties, thus eliminating the shortcomings of the currently employed methods, this article describes a procedure that makes use of fuzzy logic for part family formation. Fuzzy membership functions permit taking into account the uncertainties inherent in the description of part features, thus producing more realistic results. The use of this technique will make part family formation more sensitive. The membership value, which lies between 0 and 1, can express to what extent the part has the feature: the closer the value is to 1, the more of the feature the part has.
First, details of the database are described. The grouping principle employed is also described, which consists of choosing one threshold value for the similarity. Once this threshold value is chosen, two elements will be in the same group if, for example, in the case of a similarity function, the similarity between them is greater than the threshold value. Since the similarity relationship is not necessarily transitive, it is necessary to employ fuzzy matrix theory to form the closure structure, which permits separating the data into exclusive, disjoint groups which are, in essence, equivalence classes over a certain threshold value.
For process similarity, a procedure is shown to search for similarity information that should guide the formation of manufacturing cells.
Another important aspect described is the possibility of using qualitative data, such as complex, easy, hard, high surface roughness, etc. How to translate this information into numerical values, which is essential for similarity analysis, is also shown.
Finally, the authors believe that the use of backward reasoning makes it possible to classify a new part into an established family in a faster and easier way. This is possible because it will be necessary to answer fewer questions, without the rigidity that is frequent in the common methods.
The object of this methodology is to develop an alternative procedure to traditional methods for obtaining similarity. To this purpose, it is necessary to integrate appropriate approaches that can incorporate uncertainty and that nowadays serve isolated aspects of similarity analysis.
[Figure: Features and details stored in the database — quantitative and qualitative attributes, i.e. numbers or subjective values of the parts.]

FEATURES             DETAILS
Holes                max. diameter; min. diameter; number of holes
Pockets              max. length; min. length; max. width; min. width; max. depth; min. depth; number of pockets
Slots                max. length; min. length; number of slots
Thread holes         max. diameter; min. diameter; max. pitch; min. pitch; number of holes
Basic shape          (A) total length; (B) max. width; (C) max. depth
Technology           min. tolerance; complexity
Material             strength; hardness; optimum cutting speed
Shape complexity     number of different planes; number of rotational elements; number of gears
Production           max. number for production; min. number for production
Annual production    max. number; min. number
Process              number of lathes used; number of drilling machines used; number of milling machines used
like "wide, medium, small". An example of such a feature is the shape complexity of a part, which an analyst may define as very complex, complex, or not complex.
[Figure: Company functions sharing the database — production planning, design, manufacture, metrology, process planning, management.]

X = [x_ij], i = 1, ..., n; j = 1, ..., m
the sample will have their values of membership calculated in the same form.
The membership value of part i for feature j is obtained by min-max normalisation:

u_ij = (x_ij - x_j,MIN) / (x_j,MAX - x_j,MIN)

[Figure: Screenshot of the membership-function editor for the feature MAX LENGTH over the range 40-200.]
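The membership calculation for a numeric feature appears to be a min-max normalisation over all parts in the sample; a sketch under that assumption:

```python
def feature_memberships(values):
    """Min-max normalisation: u = (x - x_min) / (x_max - x_min) for each part,
    mapping the feature values of all parts onto [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [1.0 for _ in values]   # degenerate feature: all parts equal
    return [(x - lo) / (hi - lo) for x in values]
```

The part with the largest feature value gets membership 1, the smallest gets 0, and the rest are placed proportionally in between.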
complex, little complex, high roughness, etc. Saaty proposed that attribute comparisons should use values from the finite set {1/9, 1/8, ..., 1, 2, ..., 8, 9}. These matrices are calculated by evaluating the importance of one attribute over another on Saaty's nine-point scale, ranging from 1 (equal importance) to 9 (absolute importance of one attribute over the other).
Each entry of the matrix is a pairwise judgment. After the matrix of comparison is defined, the largest eigenvalue (λ_max) and its respective eigenvector are calculated. The eigenvector represents the memberships that can be used for the attributes in question, and the eigenvalue is the measure, or rate, of the consistency of the result.
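The dominant eigenpair of such a positive comparison matrix can be computed with simple power iteration. This is a sketch, not the software the authors used:

```python
def principal_eigen(A, iters=200):
    """Power iteration for a positive matrix: returns (lambda_max, eigenvector),
    with the eigenvector normalised so its greatest component equals 1."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)            # with v max-normalised, max(w) tends to lambda_max
        v = [wi / lam for wi in w]
    return lam, v
```

For a perfectly consistent pairwise matrix, lambda_max equals the matrix order n; the gap lambda_max - n drives the consistency index CI = (lambda_max - n) / (n - 1) used to judge the judgments.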
To illustrate this method, one of the features which may be important for obtaining similarity is the complexity of shape evaluated by an analyst. The attributes of this feature originating from the database range from very complex, through complex, mean complexity, and low complexity, to very low complexity. All the parts of the database have one of these qualitative values for the complexity feature. By means of the scale of priority shown, a specialist will provide the matrix A of Figure 6.
In this case the memberships (one of the eigenvectors, normalized so that the greatest weight equals 1) will, after the calculation, be given by the first normalized eigenvector in Figure 7.
[Figure 7: Shape-complexity result — eigenvector (3.936, 2.036, 1, 0.431, 0.254); first normalized eigenvector (1, 0.517, 0.254, 0.125, 0.065); max. eigenvalue 5.243, CI = 0.061, CR = 0.054; membership attribution: very complex = 1, complex = 0.52, mean complexity = 0.25, low complexity = 0.12, very low complexity = 0.06; second normalized eigenvector (0.51, 0.264, 0.13, 0.064, 0.033).]
Several similarity measures S(x_i, x_j) between two parts x_i and x_j, each described by p membership values μ_k, can be used, for example:

S(x_i, x_j) = 1 - (1/p) Σ_{k=1..p} |μ_k(x_i) - μ_k(x_j)|    (1)

S(x_i, x_j) = 1 - [ (1/p) Σ_{k=1..p} (μ_k(x_i) - μ_k(x_j))² ]^(1/2)    (2)

S(x_i, x_j) = Σ_{k=1..p} μ_k(x_i) μ_k(x_j) / [ Σ_{k=1..p} μ_k(x_i)² · Σ_{k=1..p} μ_k(x_j)² ]^(1/2)    (3)

S(x_i, x_j) = 1 - max_{1≤k≤p} |μ_k(x_i) - μ_k(x_j)|    (4)

S(x_i, x_j) = Σ_{k=1..p} min(μ_k(x_i), μ_k(x_j)) / Σ_{k=1..p} max(μ_k(x_i), μ_k(x_j))    (5)
The symmetric matrix can be used directly in the analysis of fuzzy grouping. The similarity of parts consists of a very close classification in geometry, function, material and/or process.
The similarity measures usually have minimum variance, and they usually give the same results if the groupings are compact and well separated. Nevertheless, if the groupings are near one another, really different results can be obtained [7]. With the similarities it is possible to obtain the matrix of Figure 8 from the matrix of Figure 3.
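With measure (5), for instance, the symmetric similarity matrix can be built directly from the membership values. An illustrative sketch:

```python
def minmax_similarity(mu_i, mu_j):
    """Measure (5): sum of minima over sum of maxima of the memberships."""
    num = sum(min(a, b) for a, b in zip(mu_i, mu_j))
    den = sum(max(a, b) for a, b in zip(mu_i, mu_j))
    return num / den if den else 1.0

def similarity_matrix(parts):
    """parts: one membership vector per part; returns the n x n matrix S."""
    n = len(parts)
    return [[minmax_similarity(parts[i], parts[j]) for j in range(n)]
            for i in range(n)]
```

The result is symmetric with a unit diagonal, as required for the fuzzy grouping analysis.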
S = [s_ij], i, j = 1, ..., n    (6)

(R ∘ R)_ik = max_j min(r_ij, r_jk)    (7)

The transitive matrix of Figure 8 is the fuzzy equivalence matrix, which can simply be calculated by (8) [3]:

R̂ = R ∪ R² ∪ ... ∪ Rⁿ    (8)
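The max-min composition (7) and the closure (8) translate directly into code (a sketch, with the union taken element-wise as the maximum):

```python
def maxmin_compose(R, S):
    """(R o S)_ik = max_j min(r_ij, s_jk) -- equation (7)."""
    n = len(R)
    return [[max(min(R[i][j], S[j][k]) for j in range(n)) for k in range(n)]
            for i in range(n)]

def transitive_closure(R):
    """R_hat = R u R^2 u ... u R^n under max-min composition -- equation (8)."""
    n = len(R)
    closure = [row[:] for row in R]
    power = [row[:] for row in R]
    for _ in range(n - 1):
        power = maxmin_compose(power, R)
        closure = [[max(closure[i][k], power[i][k]) for k in range(n)]
                   for i in range(n)]
    return closure
```

The closure raises entries wherever an indirect path through a third part is stronger than the direct similarity, which is exactly what restores transitivity.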
Finally, given one α level, the groupings of similar parts are obtained for the level chosen. With different α values, different classifications will appear. The greater the α value, the fewer parts will be classified in each family, and thus more families will be formed.
An example of decomposition for obtaining the families can be better understood through Figures 9 and 10. Figure 9 shows the attribution of some α values and Figure 10 shows, for each one of the α levels, the groupings formed. For example, for α = 0.9, there are three groupings, the first one consisting of parts A, D and E, the second one of part B, and the third one of part C.
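Given the transitive (fuzzy equivalence) matrix, this α-cut decomposition amounts to the following sketch; α-cuts of a fuzzy equivalence relation are guaranteed to partition the parts:

```python
def families(R_hat, alpha):
    """Partition part indices by the alpha-cut of a fuzzy equivalence matrix."""
    n = len(R_hat)
    groups, assigned = [], [False] * n
    for i in range(n):
        if assigned[i]:
            continue
        group = [j for j in range(n) if R_hat[i][j] >= alpha]
        for j in group:
            assigned[j] = True
        groups.append(group)
    return groups
```

Raising α splits families apart and lowering it merges them, reproducing the behaviour shown in Figures 9 and 10.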
[Alpha-cut matrices of the five-part similarity relationship for α = 0.6, 0.7, 0.8, 0.9, and 1.]
Figure 9 - Decomposition of a similarity relationship.
[Figure 10: Part families formed at each α-cut level (e.g. α = 0.9 and α = 0.8) for parts 1-5.]
BINARY MATRIX

U = [u_ij],  i = 1, ..., m (machines Y_i);  j = 1, ..., n (parts X_j)    (9)
in (9):
X_j is a part and j = 1, 2, 3, ..., n;
Y_i is a machine and i = 1, 2, 3, ..., m;
u_ij represents the relationship between part j and machine i (u_ij = 0 or 1).
For example, u_12 = 1 shows that part 2 visits machine 1. Due to the inflexibility of this matrix, which does not show the possibility of another machine also making part 2, another matrix should be developed, which will be called non-binary, represented by (10).
NON-BINARY MATRIX

Ũ = [u_ij],  i = 1, ..., m;  j = 1, ..., n    (10)
in (10):
X_j is a part and j = 1, 2, 3, ..., n;
Y_i is a machine and i = 1, 2, 3, ..., m;
u_ij represents the relationship between part j and machine i.
In (10) it is possible to observe the following properties:

0 ≤ u_ij ≤ 1 for i = 1, 2, 3, ..., m; j = 1, 2, 3, ..., n    (11)

Σ_j u_ij > 0 for i = 1, 2, 3, ..., m    (12)
The property defined by (11) indicates the intensity with which a machine is designated to process a given part: a value near 1 means a great potentiality to process the part, while with a value near 0 the machine would definitely not be appropriate.
The elements of matrix (10) are calculated from membership functions between machines and components, which is an interesting proposition from [12]. The following steps are necessary to obtain these values:
1. To define the membership functions for each feature-machine pair;
2. To compute the degree of membership for each feature-machine pair;
3. To compute the combined index for each machine-part pair, because usually a part has more than one feature processed by the same machine. This index, called combined, goes into the nonbinary matrix.
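Step 3 can be sketched as follows; taking the minimum over the feature memberships is an assumption consistent with formula (14) below, and the values are illustrative:

```python
# Sketch of the combined machine-part index: for one machine, combine
# the memberships of all p features of a part into a single value.
def combined_index(feature_memberships):
    """Combined index = min over the part's feature memberships."""
    return min(feature_memberships)

# a part with finishing-tolerance membership 0.8 and capacity membership 1
print(combined_index([0.8, 1.0]))  # 0.8
```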
To illustrate a membership function for one feature-machine pair, it is possible to think about the tolerances a certain machine can obtain. This function may be represented by Figure 11.
[Figure 11: membership function μ(x) of the tolerance x, with breakpoints t1, t2 and t3.]

The membership values from Figure 11 will be in the range designated by (13).

         | 0,                    x < t1
  μ(x) = | 1,                    t1 ≤ x ≤ t2          (13)
         | (t3 - x)/(t3 - t2),   t2 < x ≤ t3
         | 0,                    x > t3
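The piecewise function (13) can be sketched directly; the breakpoint values used in the example are illustrative, not from the paper:

```python
# Sketch of the tolerance membership of Figure 11 / expression (13);
# t1, t2, t3 are the breakpoints of the curve.
def tolerance_membership(x, t1, t2, t3):
    if x < t1:
        return 0.0
    if x <= t2:
        return 1.0          # fully achievable tolerance range
    if x <= t3:
        return (t3 - x) / (t3 - t2)   # linear fall from 1 to 0
    return 0.0

print(tolerance_membership(0.05, 0.01, 0.10, 0.20))  # 1.0
```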
u1(Xj) = min { μk(Xj) : 1 ≤ k ≤ p }          (14)
With this procedure it is easy to establish the membership for all the machine-part pairs, and to construct the nonbinary matrix.
It is important to observe that the binary matrix used in most methods has a different interpretation from the nonbinary one. In the former, the entries represent the incidence relationship between a part and a machine. The relationship of correspondence should remain in the resultant matrix; for example, it should be assured that all machines are in one group where the parts have entry 1 with those machines. On the other hand, any entry outside such a group will be an exceptional element. In the nonbinary matrix, the entries represent the degree with which a component can be processed on a machine. It is not necessary to assure that all the nonzero parts are in groups, as long as alternative machines are available. If the necessary machines are grouped in the cell for some components, then the outside elements for these parts become exceptional elements.
To illustrate this procedure with a small example: if the membership functions are those of Figure 12, for a hypothetical case of machine 1 of the m available, for each one of the 7 parts that are to be grouped, with the value (from a database) for the feature in question, it is possible to build the chart of Figure 13.
Figure 12 Example of membership function for a feature that should be analyzed relative to machine 1
Parts:                1     2     3     4     5     6     7
finishing tolerance   1     1     1     0.8   1     0.1   0.9
machine capacity      1     0     1     1     0     1     1

Figure 13 Memberships given to each part-feature pair for machine 1
In the same way that the values of Figure 13 are given, the process should be repeated for all the machines that will be analyzed; thus m matrices analogous to this one are obtained. Applying formula (14) to the matrix of Figure 13, the vector (15) is calculated, which expresses the membership of the 7 parts for machine 1.

machine 1 = [ 1   0   1   0.8   0   0.1   0.9 ]          (15)
If the process is repeated for the m machines, m vectors such as (15) will be calculated, which will constitute the nonbinary matrix that should be studied for obtaining the similarities of the process. If for the 7-parts example a universe of 7 machines is available, after the execution of the procedure a nonbinary matrix such as (16) may be obtained. Matrix (16) is the one that should be analyzed to obtain a solution of similarity of process.
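The construction of the per-machine vectors and their stacking into the nonbinary matrix can be sketched as below, using the machine 1 data of Figure 13:

```python
# Sketch: repeating the per-machine min-combination of formula (14)
# for all m machines stacks m vectors like (15) into the nonbinary matrix.
def nonbinary_matrix(memberships):
    """memberships[i][j] = list of feature memberships of part j
    on machine i; the matrix entry u_ij is their minimum."""
    return [[min(feats) for feats in row] for row in memberships]

# machine 1 of Figure 13: (finishing tolerance, machine capacity) per part
m1 = [[1, 1], [1, 0], [1, 1], [0.8, 1], [1, 0], [0.1, 1], [0.9, 1]]
print(nonbinary_matrix([m1])[0])  # [1, 0, 1, 0.8, 0, 0.1, 0.9]
```

The printed row reproduces the vector (15).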
[Matrix (16): 7x7 nonbinary matrix, machines M1-M7 by parts P1-P7; its first row M1 equals the vector (15).]          (16)
Once the nonbinary matrix is obtained, it is necessary to run a proper grouping algorithm to get the possible manufacturing cells. [12] shows the use of Rank Order Clustering (ROC) [4] to analyze matrix (16). The result for this matrix is:
[Rearranged matrix: after applying ROC, the rows and columns of (16) are permuted so that cell 1 and cell 2 appear as blocks, with family 1 = {P3, P1, P7, P4} and family 2 = {P2, P5, P6}.]

Cell 1 is composed of machines M1, M6 and M5, and family 1 of parts P3, P1, P7 and P4;
Cell 2 is composed of machines M7, M2, M3 and M4, and family 2 of parts P2, P5 and P6.
The importance of utilizing the nonbinary matrix is that, after the grouping of machines is obtained, there is the possibility of analyzing which machines are more appropriate to process the part families. Furthermore, it makes sense to eliminate the machines that perform similar operations. This analysis is not possible if the binary matrix is used.
Other algorithms for the grouping can be adapted so that the nonbinary matrix can be used for cell formation.
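A minimal sketch of ROC on a binary incidence matrix is given below; applying it to the nonbinary case by thresholding the entries first is an assumption on my part, and the adaptation actually used in [12] may differ:

```python
# Sketch of Rank Order Clustering: repeatedly sort rows and columns
# by their binary-weighted values until the matrix stops changing.
import numpy as np

def roc(M, max_iter=20):
    M = np.asarray(M)
    rows, cols = np.arange(M.shape[0]), np.arange(M.shape[1])
    for _ in range(max_iter):
        # read each row as a binary number and sort rows descending
        rw = M @ (2 ** np.arange(M.shape[1])[::-1])
        r = np.argsort(-rw, kind="stable")
        M, rows = M[r], rows[r]
        # same for columns
        cw = (2 ** np.arange(M.shape[0])[::-1]) @ M
        c = np.argsort(-cw, kind="stable")
        M, cols = M[:, c], cols[c]
        if (r == np.arange(len(r))).all() and (c == np.arange(len(c))).all():
            break           # matrix no longer changes: done
    return M, rows, cols

M = [[1, 0, 1],
     [0, 1, 0],
     [1, 0, 1]]
ordered, rows, cols = roc(M)
print(ordered)   # block-diagonal form reveals the candidate cells
```

Blocks along the diagonal of the reordered matrix indicate candidate machine cells and part families.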
After the families are obtained, new parts can be introduced. Classifying these new parts is always a problem. Using fuzzy backward reasoning is a new way to solve this problem. Let the system be described by:
1. F = {F1, F2, ..., Fn} - the space of parts;
2. C = {C1, C2, ..., Cm} - important technological features of parts;

R: F -> C          (17)

In (17), R = [rij] is the fuzzy relation between the part families and their technological features.
It is possible to infer, for the new parts, making use of the fuzzy weights of the features, which family (or families) is (are) best suited for the new part. The expression for this inference is (18).

C = R o F          (18)
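The composition in (18) is the usual max-min composition of fuzzy relations; a minimal sketch (the example row and membership vector are illustrative):

```python
# Sketch of the max-min composition C = R o F:
# c_i = max over j of min(R[i][j], a[j]).
def max_min_compose(R, a):
    """R: relation matrix (features x families); a: family memberships."""
    return [max(min(rij, aj) for rij, aj in zip(row, a)) for row in R]

# one row of R from Figure 15 (feature C1 over families F1..F5),
# composed with full membership in family F1
print(max_min_compose([[0.9, 0.7, 0, 0, 0]], [1.0, 0, 0, 0, 0]))  # [0.9]
```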
The existing parts available were divided and classified into families as shown in Figure 14. With the families and features it is possible to obtain the fuzzy relation matrix R, shown in Figure 15.
[Figure 14: division of the existing parts into families F1-F5.]

        F1    F2    F3    F4    F5
  C1    0.9   0.7   0     0     0
  C2    0     0     0     0.9   0.9
  C3    0.7   0.6   0.2   0     0.8
  C4    0     0     0.7   0     0.6
  C5    0     0     0     0.9   0
  C6    0     0.2   0.9   0     0
  C7    0     0.9   0     0.1   0

Figure 15 Fuzzy relation matrix R (families F1-F5 by features C1-C7)
NEW PART

[Equations (19)-(23): the feature vector of the new rotational part is equated, row by row, to the max-min composition of the matrix R of Figure 15 with the unknown family membership vector (a1, a2, a3, a4, a5). One of the recoverable scalar equations is:]

0 = (0 ∧ a1) ∨ (0 ∧ a2) ∨ (0 ∧ a3) ∨ (0.9 ∧ a4) ∨ (0 ∧ a5)
0 = (0 ∧ a1) ∨ (0.2 ∧ a2) ∨ (0.9 ∧ a3) ∨ (0 ∧ a4) ∨ (0 ∧ a5)          (24)

0 = (0 ∧ a1) ∨ (0.9 ∧ a2) ∨ (0 ∧ a3) ∨ (0.1 ∧ a4) ∨ (0 ∧ a5)          (25)
From these equations the following constraints are obtained:

(0 ∧ a1) ≤ 0    for all a1
(0 ∧ a2) ≤ 0    for all a2
(0 ∧ a3) ≤ 0    for all a3
(0.9 ∧ a4) ≤ 0    =>    a4 = 0
(0.9 ∧ a5) ≤ 0    =>    a5 = 0
[Equations (26)-(32): the analogous system for a new prismatic part, obtained by composing its feature vector (with entries such as 0.9, 0.7 and 0.8) with the matrix R of Figure 15 and deriving the constraints on a1, ..., a5.]
In the same way as for the rotational part we may conclude that:

a1 = 0; a2 = 0; a3 = 0; a4 = 0; a5 ≥ 0.9

The conclusion is that the new prismatic part belongs to family 5.
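A hedged sketch of this constraint analysis is given below; the rule encoded (an entry rij greater than the observed feature value ci forces aj ≤ ci) follows from min(rij, aj) ≤ ci, but the new part's feature vector used here is illustrative, not one of the paper's two examples:

```python
# Sketch of the backward-reasoning bounds: which family degrees a_j
# are forced down by the observed feature vector c of a new part.
def backward_bounds(R, c):
    """Each a_j must satisfy min(R[i][j], a_j) <= c_i for every feature i."""
    n_fam = len(R[0])
    bounds = [1.0] * n_fam                 # upper bound on each a_j
    for row, ci in zip(R, c):
        for j, rij in enumerate(row):
            if rij > ci:                   # min(rij, a_j) <= ci forces a_j <= ci
                bounds[j] = min(bounds[j], ci)
    return bounds

# R of Figure 15 (rows C1..C7, columns F1..F5)
R = [[0.9, 0.7, 0, 0, 0],
     [0, 0, 0, 0.9, 0.9],
     [0.7, 0.6, 0.2, 0, 0.8],
     [0, 0, 0.7, 0, 0.6],
     [0, 0, 0, 0.9, 0],
     [0, 0.2, 0.9, 0, 0],
     [0, 0.9, 0, 0.1, 0]]

# illustrative new part: strong in C1 and C3, zero elsewhere
print(backward_bounds(R, [0.9, 0, 0.7, 0, 0, 0, 0]))
```

Here only family 1 keeps a nonzero upper bound, so the illustrative part would be assigned to family 1.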
7 - CONCLUSIONS
This paper is a synthesis of what can be done with fuzzy logic in order to deal with the problem of obtaining similarities, an important aspect of part family formation, considering the uncertainty present in the manufacturing environment. The procedure groups techniques that can cope with the problem of similarities both in isolation and more comprehensively, making use of the same database, something that is usually necessary but not possible in most current tools.
With the procedure described here it is possible to provide a new contribution to Group Technology. The development of this model can also supply a solution to the problem of setup time reduction, since it is possible to retrieve information on process and geometry similarities together. In this way it is possible to profit from these similarities so that the preparation time of machines will be shorter. With the rationalization made possible by the identification of similarities, work is being done to save the company resources.
The authors believe that the use of backward reasoning makes it possible to classify a new part into an established family in a faster and easier way. This is possible because fewer questions need to be answered, without the rigidity that is frequent in Classification and Coding Systems (CCS), a common method used to obtain similarities. The paper shows the treatment of simple cases for the classification of rotational and prismatic parts into established families. For the classification, backward reasoning is used, thus simplifying the interaction between the modeling and classification procedures. From the examples it is possible to conclude that fuzzy backward reasoning makes such classification faster and easier. However, the solution for the best family is not always simple; in some cases it is not possible to obtain a solution. For this reason more research is necessary in this field.
Finally, this methodology may be implemented as software, which will be an interesting tool for the manufacture of small lots. This constitutes a different proposition, an alternative to the current methods available.
8 - REFERENCES
[1] Arieh, D.B.; Triantaphyllou, E. Quantifying data for group technology with weighted fuzzy features. Int. J. Prod. Res., 30, 1285-1299, 1992.
[2] Hyer, N.; Wemmerlov, U. Group Technology and Productivity. Capabilities of Group Technology. Michigan, The Computer and Automated Systems Association of SME, 312, 1987.
CHAPTER 5
David C. Jorge*, B.Sc., MIEEE
[Figure 1 residue: perceptron with inputs P1, P2, ..., Pn and weights w1, ..., wn.]
Figure 1 - ANN diagrams
(A) - Perceptron representation
(B) - ANN multi-layer scheme
Figure 1(a) shows a simple model of a neuron characterized by a number of inputs P1, P2, ..., Pn, the weights W1, W2, ..., Wn, the bias adjust b and an output a.
The neuron uses the input, together with information on its current activation state, to determine the output a, given as in equation (1).

a = Σ (k = 1 to n) Wk Pk + b          (1)
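The neuron output of equation (1), the weighted sum of the inputs plus the bias, can be sketched directly (the numeric values are illustrative):

```python
# Sketch of equation (1): a = sum_k W_k * P_k + b.
def neuron_output(p, w, b):
    """Weighted sum of inputs p with weights w, plus bias adjust b."""
    return sum(wk * pk for wk, pk in zip(w, p)) + b

print(neuron_output([1.0, 2.0], [0.5, -0.25], 0.1))  # 0.1
```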
The ANN models may be "trained" to work properly. The desired response is a special
input signal used to train the neuron. A special algorithm adjusts weights so that the output
response to the input patterns will be as close as possible to their respective desired responses. In
other words, the ANN must have a mechanism for learning. Learning alters the weights associated
with the various interconnections and thus leads to a modification in the strength of the
interconnections.
In order to use the ANN properly, it is necessary to know that empirical methods are the only way to find satisfactory results. The network scheme has a direct influence on the ANN performance. Problems may also arise from the ANN training. Depending on some factors, the ANN may not converge and it could be necessary to change the training parameters. The sequence of the input training data, the initial weights used and the number of cases in the training data may affect the results.
The use of ANN in distance relays may result in a considerable advance for the correct diagnosis of operation. The ANN may solve the overreach and underreach problems which are very common in power plant protection projects. An ANN can be trained with data provided by a simulation of a faulted transmission line and "learn" the aspects related to that situation. The use of ANN makes it possible to protect over 80% of the extension of the power system line. ANN can deal with unforeseen situations related to faults in the power plant.
3-Backpropagation Method
The Backpropagation algorithm is central to much current work on learning in neural
networks. It was invented independently several times, by Bryson and Ho (1969), Werbos (1974),
Parker (1985) and Rumelhart, Hinton, and Willians (1986). A closely related approach was
proposed by Le Chun (1985). The Backpropagation method works very well adjusting the
weights (Wjn) which are connected in successive layers of multilayer perceptrons. The algorithm
gives a prescription for changing the weights in any feedforward network to learn a training set
of inputoutput pairs {Pn,ar}[6]. The use of the bias adjust in the ANN is optional, but the results
may be enhanced by it. Trained backpropagation networks tend to give reasonable answers when
presented with inputs that they have never seen. An elementary backpropagation neuron with R
inputs is show below on Figure 2.
[Figure 2: elementary backpropagation neuron: input data P[1], ..., P[R] with weights w, bias adjust b, a summation node n and the transfer function producing the output a; R is the number of inputs.]

a = F[w P, b] = logsig(w P + b) = 1 / (1 + e^-(w P + b))          (2)
Backpropagation networks often use the logistic sigmoid as the activation transfer function. The logistic sigmoid transfer function maps the neuron input from the interval (-∞, +∞) into the interval (0, +1). The logistic sigmoid equation (2) is applied to each element of the proposed ANN [7].
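The logistic sigmoid of equation (2) is a one-liner:

```python
# Sketch of the logistic sigmoid transfer function of equation (2).
import math

def logsig(n):
    """Maps the neuron input n from (-inf, +inf) into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))

print(logsig(0.0))  # 0.5
```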
4-Application of the Backpropagation Method for the Fault Location Problem
It is common, among the algorithms for digital distance protection, to use voltage and
current waveforms taken from a busbar in order to solve the fault location problem in a power
plant.
Figure 3 shows the ANN diagram chosen to solve the fault location problem using the
backpropagation method. This scheme also uses the three phase values of current and voltage
data. The Discrete Fourier Transform was used to filter this input data and extract the
fundamental components. The transfer function used for the perceptrons was the logistic sigmoid
described in the earlier section.
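One common way to perform this filtering, sketched here under assumed details (a full-cycle window with N samples per fundamental cycle, as in full-cycle DFT relaying filters; the paper does not give its exact filter), is a one-bin Discrete Fourier Transform:

```python
# Sketch: extract the fundamental phasor of one cycle of samples
# with a single-bin DFT.
import cmath, math

def fundamental_phasor(samples):
    N = len(samples)                       # one full cycle of samples
    acc = sum(x * cmath.exp(-2j * math.pi * k / N)
              for k, x in enumerate(samples))
    return 2 * acc / N                     # complex fundamental phasor

# a pure 1 p.u. cosine sampled over one cycle has magnitude close to 1
wave = [math.cos(2 * math.pi * k / 16) for k in range(16)]
print(abs(fundamental_phasor(wave)))  # close to 1.0
```

The magnitude and angle of the returned phasor are what a digital distance algorithm would feed to the ANN inputs in place of the raw waveform.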
[Figure 3: ANN diagram for the fault location problem; output: trip/no trip.]
transmission line as the main variation of the input data. However, flexibility for untrained or unforeseen data is expected from this kind of scheme.
Figure 5 shows the schematic diagram of the hardware needed in an ANN implementation, including the microprocessor-based neural relay. The converged set of weights, which is computed in an off-line mode, is then stored in the microprocessor for on-line application. The scheme works at a sampling frequency of 4 kHz.
[Figure 4: simulated system: 100 km transmission line between a 20 GVA source and a 5 GVA source, both at V = 1.0 pu, with a fault point of resistance Rf = 10 Ω and a line switch.
Figure 5: hardware scheme: analog input signals (current and voltage), surge filter, S/H circuit (clock), A/D converter, Fourier transform filter and logsig transfer function in the on-line hardware process, with the weights trained off-line by the ANN training routine, a D/A converter and a digital output 0/1.]
298
chosen as the extension of the first zone of the relay. Points next to the region where trip/no trip
condition exchanges (80Km for the line used) had special treatment. In this case, less degree of
scarcity was taken between locations used for training.
Table 1 shows the results of an ANN model used as a distance relay. The ANN answer is
shown, compared to the expected ones, for faults along the transmission line. It should be
mentioned that the cases used for the tests are different from the ones used for the training
The results presented in Table 1 show the efficiency of the proposed scheme. For all the
cases, the ANN scheme correctly classifies the fault as been internal or external to the first zone of
the relay.
Table 1 - Results for the ANN scheme.
[Fault distances from point A ranging from 2.0 to 98.0 km, with the ANN answer and the correct answer (1 = internal to the first zone, 0 = external). The recoverable ANN answers stay close to 1 up to about 78 km (e.g. 0.9998 and 0.9941) and fall off sharply beyond 80 km (from 2.6845e-4 down to 1.5577e-21 at the far end); the correct answer is 0 for all faults beyond 80 km.]
variations. It could be noted that for most cases the ANN scheme still gives correct results, confirming its capability as a pattern classifier. The wrong diagnosis appeared in the case of small fault resistance where, as a consequence, the current of the faulted phase increased. The wrong diagnosis was given because this case is similar to the situation of a fault occurring in the first zone of the relay trained earlier. However, it should be mentioned that such cases could be used in the training set in order to avoid such a problem.
Change of trained parameters                                      ANN Output      Correct Output
Fault inception angle set to 88, fault distance = 75 km from A    1               1
Fault inception angle set to 88, fault distance = 85 km from A    1.5024e-7       0
Fault inception angle set to 92, fault distance = 70 km from A    1               1
Fault inception angle set to 92, fault distance = 85 km from A    3.3442e-9       0
Fault resistance set to 0, fault distance = 70 km from A          1               1
Fault resistance set to 0, fault distance = 90 km from A          1 (wrong)       0
Fault resistance set to 5, fault distance = 85 km from A          1 (wrong)       0
Fault resistance set to 5, fault distance = 70 km from A          1               1
Fault resistance set to 8, fault distance = 70 km from A          1               1
Fault resistance set to 8, fault distance = 95 km from A          2.0047e-15      0
Fault resistance set to 12, fault distance = 70 km from A         1               1
Fault resistance set to 12, fault distance = 90 km from A         5.8528e-18      0
Fault resistance set to 15, fault distance = 70 km from A         0.9192          1
Fault resistance set to 15, fault distance = 90 km from A         1.1459e-21      0
Source at A set to 4.5 GVA, fault distance = 75 km from A         1               1
Source at A set to 4.5 GVA, fault distance = 90 km from A         2.3569e-13      0
Source at A set to 4 GVA, fault distance = 70 km from A           1               1
Source at A set to 4 GVA, fault distance = 90 km from A           3.5749e-12      0
Source at A set to 18 GVA, fault distance = 75 km from A          1               1
Source at A set to 18 GVA, fault distance = 90 km from A          7.1024e-15      0

Table 2 - Results of the ANN for unforeseen data.
8-Conclusion
In this paper the use of an ANN as a pattern classifier to work as a distance relay was investigated. The results obtained with this scheme are very encouraging. The ANN scheme can operate correctly in the location of the fault point. The scheme can be extended by including more variations of parameters in the training set in order to avoid misoperation, as seen in the paper for the case of low fault resistance. It is also necessary to point out some problems related to the ANN application. The initial network configuration is totally empirical and may not result in the best performance for the scheme. The choice of training points can also be a significant problem. These are some points that can influence the speed of convergence of the weights and consequently the performance of the scheme.
However, this tool opens a new dimension in relay philosophy which should be widely investigated in order to solve some of the various problems related to the distance protection of transmission lines.
References
[1] S.A. Khaparde, P.B. Kale and S.H. Agarwal, "Application of Artificial Neural Network in Protective Relaying of Transmission Lines", IEEE, 1991.
[2] H. Kanoh, M. Kaneta and K. Kanemaru, "Fault location for transmission lines using inference model Neural Network", Electrical Engineering in Japan, Vol. 111, No. 7, 1991.
[3] K.S. Swarup and H.S. Chandrasekharaiah, "Fault Detection and Diagnosis of Power Systems Using Artificial Neural Networks", 1991.
[4] M.A. El-Sharkawi, R.J. Marks and S. Weerasooriya, "Neural Networks and Their Application to Power Engineering", Control and Dynamic Systems, Vol. 41, pp. 359-451, 1991.
[5] R. Aggarwal, Artificial Neural Networks for Power Systems, short-course notes.
[6] J. Hertz, A. Krogh and R.G. Palmer, Introduction to the Theory of Neural Computation, Addison-Wesley Publishing Co., 1991.
[7] H. Demuth and M. Beale, "Neural Network Toolbox - For Use with Matlab", 1992.
[8] A.T. Johns, R. Aggarwal, "Digital Simulation of Faulted EHV Transmission Lines with Particular Reference to Very High Speed Protection", IEE Proceedings, Vol. 123, pp. 353-359, April 1976.
upstream of the right turbine side. The failed pipe section was cut out and submitted for further examinations. The examination procedure was established by a Working Group three days later.
In order to clarify the failure cause, extensive surface microstructure examinations were conducted and non-destructive testing of the failed component was performed, including other relevant pipe system components.
All preliminary examination results so far indicate that the failed pipe is a unique event with respect to damage in the pipe system as a whole.
Failure Appearance
The pipe ruptured at the 6 o'clock position (towards the plant control room). The crack orientation was along the pipe axis and the pipe body unfolded. In the lower section of the girth weld the crack branched. Beyond the branching the crack ran on above and below the girth weld. The macroscopic appearance of the rupture surfaces and the crack mouths of the two cracks is an indication of the ductile nature of the rupture in this area.
The macroscopic result of the upper girth weld examination confirms the diagnosis of ductile crack rupture. As opposed to the lower girth weld, this crack did not branch, but ended short of the weld.
The inner pipe surface has a dark-grey oxide film (Figure 2). In the crack area this film is absent. By appearance it is a magnetite layer which spalled in the section of maximum deformation during unfolding of the pipe.
A part of the outer surface of the failed component displays grooves oriented in the direction of the component circumference. It is obvious that the component was ground in the girth weld area in the course of non-destructive examinations.
Failed Component Data
The failed pipe line section consists of a vertical pipe between girth weld 51 (connection upright pipe bend/failed component) and girth weld 52 (connection failed component/transition cone to trip valve casing) of the right reheat line. The two girth welds 51 and 52 were rehabilitated in 1990.

Dimensions
length of failed pipe section: 835 mm
inner diameter: 150 mm
minimum wall thickness: 13 mm

Material
14 MoV 6 3 (a molybdenum-vanadium steel)

Operating conditions
pipe line medium: steam
operating temperature: approx. 525 °C
operating pressure: approx. 104 bar
operating hours: approx. 217,000
EXAMINATION PROGRAM
The examination program established by the Working Group for the first phase of the failure examination included an as-is condition report and a non-destructive examination of the failed component plus other relevant components of the pipe system.
Non-Destructive Examinations
Failed Component - Measurement of circumference, UT volumetric measurement, radiographic examination, material determination, surface microstructure examination, UT wall thickness measurement.
Cone between failed component and trip valve casing - Material determination, surface microstructure examination, UT wall thickness measurement.
Reheat line left side, straight pipe upstream of trip valve - Same examinations as for failed
component.
Reheat line right side, bends #1 and #2 upstream of failed component - Material determination, surface microstructure examination, UT wall thickness measurement, measurement of
circumference/ovality.
PERFORMANCE OF EXAMINATIONS
Material Determination
With random checks, the metal alloy was determined by way of X-ray fluorescence analysis.
Measurement of Circumference
The circumference was measured with a flexible metal gage.
Surface Microstructural Examination
Description of Damage-Classes
Practical experience of field metallography with replicas has indicated that the damage classes as published by VdTÜV and VGB are not appropriate. In particular, damage classes 2 and 3 do not allow the required differentiation. In accordance with long practical experience, the damage classes were newly defined for the present guideline, as below:
assessment class    structural and damage conditions
0                   as received, without thermal service load
1
2a
2b
3a
3b
The surface microstructure was examined by way of replica of ground and polished areas.
Polishing technique: electrolytical
Etching agent: 3 % alcoholic HNO3
Replica: transcopy, gold-doped
Evaluation: light microscope
in all cases the weld metal of the two girth welds displays a microstructure according to assessment class 1;
the areas between 6 o'clock and 12 o'clock of the pipe circumference have a microstructure of assessment class 3a;
the microstructure in the areas between 3 o'clock and 9 o'clock of the pipe circumference is of assessment class 2b;
the strongest microstructural damage was found in the 6 o'clock position of the pipe circumference, 220 mm away from the lower girth weld (assessment class 3b). This is also the area with the largest crack opening.
UT Wall Thickness Measurement
The wall thickness is measured at the replica locations. The following observations are worth
noting:
The wall thickness is reduced within the crack area at 6 o'clock along the entire pipe axis; the maximum thickness is found in the 3 o'clock and 9 o'clock positions.
The wall thickness ranges between 11.3 mm and 14.6 mm. The cause of this wall thickness variation may be the facing of an oval pipe. There may be a relation between the degree of microstructural damage and the wall thickness variation.
UT Volumetric and Radiographic Examinations
The UT volumetric and the radiographic examinations did not detect any indications in the pipe
wall. Discrete indications on the inner pipe surface are caused by the oxide film.
After removal of the internal and external deposits within the destructive examination of the
failed component a surface crack test was conducted.
Hardness Test
The hardness test was performed at the locations of replica. The following observations are of
interest:
In the crack area in the 6 o'clock position the material hardness was reduced; the hardness values range between 133 and 153 HB, with a minimum of 110 HB. The overall hardness level is at the lower limit of the anticipated hardness range.
The hardness was measured on metallographic specimens within the destructive examination of
the failed component.
FAILURE CAUSES
Prior to and during the war, 'economy steels' were applied in German power plant construction. At the beginning of the fifties, when it became known that in Great Britain an economy steel alloyed with only vanadium and molybdenum had been successfully applied, extensive creep strength examinations were performed. The material revealed excellent high-temperature strength despite its low alloy content as compared to grades 11 and 22. However, operational experience with this material was lacking. When, around 1960, the first contracts for pipe lines of this material were awarded, experiments as to the best heat treatment for pipes and forgings were started, as was the development of welding fillers. However, the difficulties of heat treating the pipes became so enormous that a (public) meeting of the VGB Materials Committee was called in 1963. Despite the knowledge and findings gathered at this meeting, it took almost another ten years until unanimity was reached on the adequate heat treatment of the tubes. Heat treatment data for forgings differed, and pipe manufacturers also established differing parameters over the years. It took a long time until it was clear how sensitive the material was with respect to cold forming or manipulations such as heat treatment. The greatest difficulty was to harmonize the theoretical knowledge of the time and operational limitations. In those days it did happen that only stress relief annealing was performed instead of a full heat treatment.
There are strong indications on the failed tube that an inadequate heat treatment of the tube
contributed to the failure.
INSPECTION AND SCOPE
Safety requirements and the necessity of a high availability, especially in case of industrial
plants, may require an extended scope of inspection.
Within the inspections an extraordinary amount of creep damage was detected. As a
consequence, the scope of inspection was expanded with the objective of a thorough
assessment of the plant condition. The knowledge base was substantially improved.
RECOMMENDATIONS
The systematic detailed analysis of the hot reheat line allows the following recommendations for pipe line systems operated for long periods of time (> 150,000 hours):
Only component metallography may give early, reliable indications as to creep damage
initiation.
If a certain degree of damage is detected, the following measures are recommended
considering the intended future mode of operation:
Monitoring of continued operation and replacement of the component during the next overhaul of the plant, in the case of prolonged operating time.
Load reduction by decreasing the temperature and/or pressure, and repeated inspection or replacement during the next overhaul.
When replacing individual components, care shall be taken that even straight pipes may be highly loaded and therefore damaged.
ABSTRACT
The Welding Expert System SES was developed with the main purpose of helping people involved in welding procedure qualification and in the selection of qualified procedures. The generation or selection of procedures through the SES is made in accordance with the ASME code, Section IX, and several project codes applied to process plant. Few data entries are required to generate a new welding procedure or to select a qualified procedure from its database. These data entries usually are the base metal specification and thickness.
Besides the essential standard variables, the SES prepares the procedure considering the environmental constraints imposed on the welding joint in service. Hydrogen damage, stress corrosion cracking (SCC) and weld decay are some of the process plant environmental constraints analyzed by the system.
The development of this Welding Expert System is justified by the requests of welding experts and the need to meet the quality and productivity improvement goals required by the industrial sector in Brazil.
Key words: Artificial Intelligence, Expert System, Welding Technology, Qualification.
( * ) Project supported by CNPq/RHAE n 610094/93-9 and FINEP n 56.94.0274.00.
INTRODUCTION
Technological developments require great efforts and financial resources. However, these resources are scarce, mainly in developing countries. Large amounts of these resources are spent on training and on the development of expertise. Expert Systems (ES) technology offers an important alternative for disseminating this expertise(1).
This alternative also supports the major emphasis Brazil has placed on increasing quality, especially in industrial production. The use of an ES tends to reduce costs, to standardize procedures and to facilitate information storage and retrieval, factors that appear in any quality management system.
Welding technology comprises several areas of knowledge, and mastering it requires a great
number of experts in these areas. Thus, the development of an ES applied to the welding
technology becomes an important tool in the dissemination of knowledge when human resources
are scarce.
In this context, and supported by the strategic importance of ES development, the Welding Expert System (Sistema Especialista em Soldagem - SES) was structured. It is capable of generating Welding Procedure Specifications (WPS) and managing a database of qualified procedures.
The initial goal of the SES is to deal with welding procedures for carbon steel, alloy steel and stainless steel base metals. Shielded metal-arc welding (SMAW) and gas tungsten-arc welding (GTAW) are the welding processes covered at this stage. This arrangement fulfills almost all the welding procedure demand for assembly and maintenance of process plant.
The system generates WPS in accordance with the ASME code, Section IX, and the PETROBRAS N-133 standard(2,3). The requirements of project codes like ASME VIII, ASME I, ANSI B31.1 and ANSI B31.3(4,5,6,7) for welded joints are also met. Filler metal specifications in procedures generated by SES are in accordance with AWS/ASME Section II(8). This structure is certainly one that better represents the knowledge concerning welding technology, since the most important welding parameters are included.
THE KNOWLEDGE DOMAIN
The generation or selection of a welding procedure requires knowledge of several areas of expertise. The welding process, the metallurgical properties and features of both filler and base metals, and welding metallurgy are some of these areas. Furthermore, the procedures should be qualified according to the appropriate standards. These standards and codes are concerned chiefly with the mechanical performance of the welded joint. However, considering the aggressive environment to which the welded joint will be exposed, the metallurgical performance is important information for preparing fitness-for-service procedures. Certainly the welded joint is the most critical
311
region in equipment working in an aggressive environment^. Beside the qualification codes, the
properties and features of the base and filler metals and the welding metallurgy knowledge, there
are also information that makes SES capable to adjust the welding procedure parameters
according to the environment and its corrosion process. This knowledge includes the corrosion
process like Hydrogen damage and stress corrosion crack (SCC) in carbon steel welding, welding
decay in stainless austenitic and ferritic steel, knife corrosion line in stabilized austenitic stainless
steel and the usual corrosion processes for other standard base metals(10,11).
Although it is possible to obtain a procedure from the SES with few data entries (usually the base
metal specification and its thickness), the user may, at his convenience, modify the procedure
parameters being prepared by the system. Therefore, the procedure may be totally developed by
SES or developed interactively with the user. This configuration makes the system more dynamic,
since each procedure parameter can be evaluated and changed by the user at his convenience.
However, it requires a flexible inference engine and an enlarged knowledge base capable of
evaluating any parameter change. SES warns the user about the feasibility of changes and, if
necessary, points out any further parameter changes required to reach the desired welding
properties and qualification. For example, if the user replaces a basic filler metal selected by the
system to weld a carbon steel susceptible to hydrogen cracking by the welding process, the
system will display a cold crack risk warning. In this case, if the user keeps his new option, the
system will change parameters such as the preheating and interpass temperatures to avoid cold cracks.
This arrangement and knowledge base also make SES a tutor capable of transferring its expert
knowledge to the user, beyond just preparing a welding procedure.
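This interactive checking can be illustrated with a small sketch. The actual SES was written in Prolog; the Python below is purely illustrative, and every field name, threshold and rule in it is an invented stand-in for the system's real knowledge base. It shows how a user override of a system-selected filler metal might trigger a cold-crack warning and a compensating adjustment of the preheating and interpass temperatures:

```python
# Illustrative sketch only: re-evaluate a procedure after a user override,
# warn about risks, and adjust dependent parameters (all names hypothetical).

def apply_user_change(procedure, param, new_value):
    """Apply a user override and return (updated_procedure, warnings)."""
    warnings = []
    procedure = dict(procedure, **{param: new_value})
    # Hypothetical rule: a non-basic (non-low-hydrogen) electrode on a
    # hydrogen-crack-susceptible carbon steel raises a cold crack warning.
    if (param == "filler_metal"
            and procedure["base_metal_group"] == "carbon steel"
            and procedure["carbon_equivalent"] > 0.45
            and not new_value.endswith("-basic")):
        warnings.append("cold crack risk: non-basic filler on susceptible steel")
        # Compensate by raising the preheating and interpass temperatures.
        procedure["preheat_C"] = max(procedure.get("preheat_C", 20), 150)
        procedure["interpass_max_C"] = 250
    return procedure, warnings

proc = {"base_metal_group": "carbon steel", "carbon_equivalent": 0.48,
        "filler_metal": "E7018-basic", "preheat_C": 100}
proc, msgs = apply_user_change(proc, "filler_metal", "E6013-rutile")
```

The point is only the control flow: the override is accepted, but the system both warns the user and revises the dependent parameters, just as described above.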
The knowledge is completed with quality control techniques at the execution level. Filler metal
storage and handling, joint preparation, welding and heat treatment techniques are supplied by the
system when the welding procedure specification (WPS) is issued.
SYSTEM DESCRIPTION
Artificial intelligence and expert systems technologies have been used in a large number
of applications related to industrial problems. Both try to simulate or emulate intelligent human
behavior in terms of computational processes. Specifically, knowledge-based systems and
expert systems try to reproduce the performance of a highly skilled professional in a specific
problem-solving task. The main advantages of these technologies are preserving and distributing
the human expert's knowledge.
The nature of welding procedure qualification does not present a predetermined solution
method; therefore it fits perfectly as an expert system application. This is because the solution
of this task needs not only the information contained in codes and standards, but also the
experience of a welding expert to manipulate this information correctly and search for a better
solution.
The SES structure is shown in Figure 1. It is composed of the following modules:
Database: The database is the manager module of the WPS/PQR base. It allows the user to view
and print these documents, and to delete them if the user has this permission. As the stored
documents belong to the company that generated them, they can be removed from the WPS/PQR
database only by using passwords.
Knowledge base: The AI representation method used to organize the domain knowledge was a
"production system" with a forward-chaining method of inference, in which the knowledge base
consists of rules, called production rules, e.g.,

IF standard is ASME IX
AND standard is PETROBRAS N-133
AND project standard is ASME VIII
AND P-number is 1
AND thickness > 20 mm
AND thickness < 30 mm
AND carbon-equivalent > 0.45
AND carbon-equivalent < 0.47
THEN
This choice was made because production systems offer good features in terms of modularity and
uniformity. Forward chaining is applied because WPS generation starts with little available
information and tries to draw a conclusion appropriate for the new WPS.
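A minimal forward-chaining production system, in the spirit of the rule shown above, can be sketched as follows. The actual SES was implemented in Prolog; this Python version is an illustrative sketch, and the fact and rule names are invented:

```python
# Minimal forward-chaining inference: fire every rule whose conditions all
# hold, add its conclusion to the fact set, and repeat until nothing changes.

def forward_chain(facts, rules):
    """Return the fact set closed under the given production rules."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c(facts) for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Facts as (attribute, value) pairs; conditions as predicates over the fact set.
facts = {("standard", "ASME IX"), ("p_number", 1), ("thickness_mm", 25),
         ("carbon_equivalent", 0.46)}

def has(attr, pred):
    return lambda fs: any(a == attr and pred(v) for a, v in fs)

# One rule in the spirit of the production rule listed above (the conclusion
# is hypothetical; the original rule's consequent is not given in the paper).
rules = [
    ([has("p_number", lambda v: v == 1),
      has("thickness_mm", lambda v: 20 < v < 30),
      has("carbon_equivalent", lambda v: 0.45 < v < 0.47)],
     ("preheat_required", True)),
]

derived = forward_chain(facts, rules)
```

Starting from the few known facts, each pass derives new conclusions that in turn may enable further rules, matching the paper's rationale for choosing forward chaining.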
The knowledge module of SES gives an overview of the knowledge base content, comprised in
various files. It is thus possible to see the production rules and the facts about filler metals and
base metals (extracted from welding codes) stored in the system. The main objective of this
module is to disseminate the knowledge acquired during the system's development and the
information contained in standards related to filler metals and base metals.
Qualification: This module contains the inference engine, which tries to produce new WPS/PQR
documents. The rules and facts of the knowledge base are used for this purpose, as well as for
the intelligent search in the WPS/PQR database and for managing the explanation facilities. The
intelligent search seeks a qualified WPS stored in the database that satisfies the conditions of the
new welding job requested by the user. If such a qualified WPS does not exist, the system
generates a new WPS to be qualified in the laboratory, containing all information on how to
prepare the test coupon. The explanation facilities explain how a given solution was obtained
during WPS elaboration, to assure the reliability of the conclusions shown by the system. See
"Procedure Selection and Generation Systematic" in this paper.
User interface: This module provides a user-friendly interface, where it is possible to consult the
WPS/PQR database and information about base and filler metals, and to generate a new
WPS/PQR.
Expert interface: A password-protected interface where the expert can change parameters used
by the rules in order to adapt the knowledge stored in the system, providing data and tools for
the evolution of the knowledge base.
Figure 1. SES architecture: the user and the welding expert interact, through the user interface
and the expert interface respectively, with the qualification module (inference engine), which
draws on the database and the knowledge base.
The SES was developed in 2 years by a team of three knowledge engineers and two welding
experts. The tool used in the development was a PROLOG interpreter/compiler. It runs on
IBM-PC compatible machines under the MS-Windows environment.
A qualified Welding Procedure Specification (WPS) is intended to guarantee that the required
mechanical properties will be reached when welding a joint. In the same way, the qualification of
welders and operators is intended to assure that personnel are able to weld a specific joint
properly.
Project, construction and assembly standards and codes present mandatory requirements for
welding procedure qualification and welder performance qualification. Moreover, procedure
qualification and workmanship skill certification are basic for a Quality Assurance System in any
production activity.
Welding procedures are qualified according to these standards when a test specimen (welded
according to the parameters pre-established in the procedure) provides the properties required
for its intended application. These qualified procedure parameters determine the welding
procedure's field of application through the essential-variable ranges established in the codes.
Thus, a procedure qualified for a carbon steel base metal whose P-number is 1 may always be
used to weld any P-number 1 base metal. All other essential variables, such as F-number,
A-number, thickness and preheating and heat treatment temperatures (2), have to be considered.
This systematic approach, established by the welding qualification codes, results in a large field of
application for a PQR. Even though only a few PQRs are required to weld a variety of joints, a
careful and arduous selection from a procedure database is required. Considering these aspects of
the codes, the SES was designed to minimize the routine work of procedure development by
making an intelligent WPS selection from its qualified procedure database. The WPS database
stores qualified WPS and PQR, and the search and selection are made according to the PQR
variables.
The WPS qualification and search modules are detailed in Figure 2. According to this flow chart,
retrieving a WPS requires the base metal specification, the qualification standard and the welding
process. These data entries are presented by the system as options. Further data entries may be
requested by the system, such as the environmental conditions to which the welded joint will be
subjected and other physical constraints. After this search, all available qualified WPS are
presented for user acceptance, or the Qualification Module becomes active when a new WPS is
required.
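The range-based retrieval described above can be sketched as follows. The record layout is invented for illustration (the paper does not give the SES schema): a stored WPS matches a new job when the job's essential variables fall inside the ranges qualified by its PQR.

```python
# Illustrative sketch of the "intelligent search" over the qualified WPS
# database: match on essential variables and qualified thickness range.
# Field names are hypothetical, not the actual SES schema.

def find_qualified_wps(database, job):
    """Return stored WPS records whose qualified ranges cover the job."""
    matches = []
    for wps in database:
        if (wps["p_number"] == job["p_number"]
                and wps["process"] == job["process"]
                and wps["thickness_min"] <= job["thickness_mm"] <= wps["thickness_max"]):
            matches.append(wps)
    return matches

database = [
    {"id": "WPS-001", "p_number": 1, "process": "SMAW",
     "thickness_min": 5.0, "thickness_max": 38.0},
    {"id": "WPS-002", "p_number": 8, "process": "GTAW",
     "thickness_min": 1.5, "thickness_max": 10.0},
]
job = {"p_number": 1, "process": "SMAW", "thickness_mm": 25.0}
hits = find_qualified_wps(database, job)
```

Only when no stored record covers the job would the qualification module be invoked to generate a new WPS, as the text explains.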
Generation of a new WPS starts with the filler metal selection. The system points out the most
suitable filler metal and others with which welding is possible. Another filler metal may be chosen
by the user from the AWS filler metals tabulated in the system database. In any case, the special
conditions for applying a filler metal other than the one indicated by the system, as well as the
metallurgical and mechanical constraints, are given through warnings.
When a WPS is generated, all procedure variables may be modified by the user in the Updating
Variables Module. For each variable updated, the system will provide warnings when the change
could result in a poor WPS. Finally, all documents required to qualify the welding are issued:
WPS, PQR, Test Specimen Preparation and Weld Instructions.
The test and analysis results are input into the system through the Results Data Entry Module.
Quantitative results are analyzed and approved by the system, while qualitative results must be
approved by the welding inspector. As may be inferred from the flow chart, the SES structure
was developed to qualify a WPS or make an intelligent selection with few data entries; the base
metal specification and thickness are usually enough. Thus, if it is necessary to modify any
variable of a WPS established by the system to adapt it to a specific usage, each parameter is
reviewed by the system.
Figure 2. WPS qualification and search flow chart, from BEGIN to END: the qualification
database and the knowledge base (base metal rules, metallurgical analysis rules and other rules)
drive the selection and qualification steps.
CONCLUSION
Assembly and maintenance welding plays an important role in the reliability and safety of process
plants. SES was developed with the main purposes of improving the work of welding experts and
of the personnel in charge of equipment integrity, and of disseminating welding knowledge and
technology. By reaching these aims, SES is surely contributing to improving that reliability and
safety. The SES knowledge base, inference engine and database design allow the system to be
improved beyond its initial scope, as well as the use of different base metals, dissimilar welding
and process combinations, making SES a modular system capable of being enlarged to attend to
the specific needs of the user.
ACKNOWLEDGMENTS
Several people and institutions made important contributions to the system development. We
want to express our gratitude to TECPAR and PETROBRÁS for their support.
REFERENCES
1- Barborak, D.M.; Dickinson, D.W.; Madigan, R.B. 1991. PC-Based Expert Systems and their
Applications to Welding. Welding Journal 70 (1): 29-s to 38-s.
2- ASME Boiler and Pressure Vessel Code, Section IX, 1992 Edition. Qualification Standard for
Welding and Brazing Procedures, Welders, Brazers, and Welding and Brazing Operators.
American Society of Mechanical Engineers, New York, N.Y.
3- PETROBRÁS N-133, January 1995. Soldagem. Petróleo Brasileiro S.A., Rio de Janeiro,
Brazil.
4- ASME Boiler and Pressure Vessel Code, Section VIII, Divisions 1 and 2, 1992 Edition.
Pressure Vessels. American Society of Mechanical Engineers, New York, N.Y.
5- ASME Boiler and Pressure Vessel Code, Section I, 1992 Edition. Power Boilers. American
Society of Mechanical Engineers, New York, N.Y.
6- ASME Code for Pressure Piping B31.3, 1993 Edition. Chemical Plant and Petroleum Refinery
Piping. American Society of Mechanical Engineers, New York, N.Y.
7- ASME Code for Pressure Piping B31.1, 1993 Edition. Power Piping. American Society of
Mechanical Engineers, New York, N.Y.
8- ASME Boiler and Pressure Vessel Code, Section II, 1992 Edition. Materials Specifications for
Welding Rods, Electrodes and Filler Metals. American Society of Mechanical Engineers, New
York, N.Y.
9- Metals Handbook, 9th Edition, Volume 13 - Corrosion. 1987. American Society for Metals.
10- Metals Handbook, 9th Edition, Volume 6 - Welding, Brazing and Soldering. 1983. American
Society for Metals.
11-
1.ABSTRACT
This paper presents an overview of integrated and intelligent CAE systems for nuclear
power plants. We have integrated two-dimensional CAD systems, three-dimensional CAD
systems and a nuclear power plant database system. We have also developed an
automated routing system and a design check system. The design and engineering of a
nuclear power plant cover various technical fields, and the information created in many
fields is exchanged and utilized in parallel. As Computer Aided Engineering (CAE) has
been applied to several plants, huge and diverse information has been accumulated in the
Data Base Management System (DBMS). TOSHIBA has integrated CAE systems to utilize
this reliable information and to make decisions efficiently. We have integrated existing
two-dimensional (2D) CAD systems, a three-dimensional (3D) CAD system and a
relational database system which stores engineering information such as design conditions,
maintenance histories and inherent properties. As a design automation system, we have
developed an automated design check system. These systems are the main parts of the
plant engineering framework and are utilized in practical design. This paper describes
the developments TOSHIBA has been promoting in order to improve the user interface
in an integrated environment and to enrich intelligent applications specialized for nuclear
engineering.
2.INTRODUCTION
A nuclear power plant is composed of a large number of pieces of equipment, pipings and so
on. It takes 8 or 9 years to complete the design and engineering from the beginning to
commercial operation. During this period, careful engineering is required to conform to strict
design standards regarding safety, reliability, reduction of radiation exposure and so on.
On the other hand, reduction of the engineering period is indispensable to cut down plant costs
and the construction period. Furthermore, operating plants must be maintained and improved
for more than 30 years. TOSHIBA has constructed and applied distributed processing systems
using engineering workstations and network systems around the integrated DBMS for large-scale
plant engineering. TOSHIBA has promoted improvements in the reliability and efficiency of
common information usage in many fields in parallel.
3.CONCEPTS OF SYSTEM DEVELOPMENT
TOSHIBA has been developing CAE systems according to the following three concepts, on the
basis of rich experience in plant engineering, construction and maintenance, and of know-how in
computer applications.
(1) System Integration
(2) Engineering Visualization
(3) Information Processing Automation
That is, various local systems are integrated to improve efficient information usage. A
comfortable user environment is supplied by a visualized user interface, on the basis of
recent progress in computer graphics technologies. Furthermore, information processing
is automated to improve the reliability and efficiency of engineering. Based on these
concepts, TOSHIBA is promoting the improvement of design quality and engineering
efficiency.
3.1 Plant Engineering
3.1.1 Overview
Figure 1 illustrates a typical engineering process in a nuclear power plant. Plant
engineering of this type consists of various processes; for example, project management,
design, manufacturing, construction, plant operation, maintenance process and so on.
The main features of nuclear power plant engineering are as follows.
(1) A plant spans more than eight years from the initial stage to full commercial operation,
and a nuclear power plant operates for more than 30 years. It is very important to manage
not only the design data but also historical data such as maintenance and replacement
records.
(2) There are many kinds of components, including pipings, mechanical equipment,
valves and so on. Furthermore, there are strict design standards and constraints as
regards plant safety, reliability, minimizing radiation exposure, etc. Thus, design
verification plays a very important role.
(3) A number of companies and departments within companies share the plant engineering.
Thus it is essential to set up an information infrastructure and measures to provide data
security.
Plant design can be characterized by certain special features as itemized below.
(4) Before the introduction of 3D CAD systems, scale plastic models were used for the
design and review process. Layout designers also used 2D CAD systems for drafting,
but it is only recently that 3D CAD systems have come into gradual use.
(5) Plant design is closer to VLSI design than to mechanism design; system design takes
place before layout design is considered. In some cases, the layout design process
requires that alterations be made to the system design.
It is desirable to set up an engineering framework which takes account of these features
and requirements.
3.1.2 Design Process
The design process is divided into a number of design phases, as shown in Figure 1.
Of particular importance are the system design and layout design processes. The following
is an overview of both these design stages.
(1) System Design
It is very common in VLSI design to carry out the system design process before
beginning layout design. In the case of VLSI, system design includes functional design,
logic design and circuit design. In the case of mechanical design, however, after the
specifications have been given, the designers begin by drawing 2D views or creating 3D
models using their experience and knowledge. There are few function description
languages or symbolic representation methods for this process. In the design of a nuclear
power plant, several different types of system have to be considered at this stage.
For example, the LPCS, the Low-Pressure Core Spray system, is one of the critical
cooling systems. At the system design stage, a Piping and Instrumentation Diagram (P&ID),
which is similar to a circuit diagram, is created. System designers determine the specifications
of equipment, pipings, valves, etc. at this stage. After carrying out certain simulations,
the design data are sent to the layout designers. In many cases, data from earlier plants
are reused after modification by the designers. For this reason, proper data management
is very important.
(2 ) Layout Design
Using the system design data supplied to them, layout designers determine the position
of equipments and then piping segments, elbows, valves, etc. Piping layouts are
created essentially by designers, whereas some automated CAD systems are used in VLSI
design. Wiring patterns are relatively simple in comparison, so several layout algorithms
have been developed. In piping design, however, the difficulty is how to lay out all the
piping according to the very strict design conditions. There are interactions with the system
design process, just as in VLSI design. At the system design stage, only schematics are
developed; that is, the physical dimensions are not taken into account. For example,
when it is necessary to add a new pipe or to change the piping order, a layout designer
has to return the alteration to the system designers.
3.2 System Integration
The design and engineering of a nuclear power plant cover various fields that are closely
connected with each other. The data created in each system are used as input data for
other engineering fields. Reliability and consistency of the data exchanged, in parallel
and sequentially, across many fields are essential for the engineering of a nuclear power
plant. TOSHIBA developed the Nuclear Power Plant Database Management
System (PDBMS) as the core for unified information management and information exchange
(see Figure 2). PDBMS has the following specific features.
(1) An open DBMS which is composed of multiple files and managed by the Master Parts
List
(2) File units which conform to the actual engineering routine and to the configuration of
local systems
(3) Strong administration for consistent information through history management and
discrepancy detection
Figure 3 shows the relation between PDBMS and various data sheets for equipment,
valves and so on. Differing from other stand-alone P&ID CAD systems, information
other than P&ID attributes is referred from external DBMSs at cooperating companies.
For example, in the case of a valve list, P&ID attributes are sent as design conditions
to the cooperating companies. After completion of the valve design, detailed valve
information is sent from them for approval, through the network, to the PDBMS at
TOSHIBA. Discrepancy detection is performed automatically at every data exchange.
4.INTEGRATED CAE SYSTEM
4.1 3D Arrangement Adjusting System
In a nuclear power plant, lots of equipment, pipings,
of pipings, such as design information and operating conditions. The corresponding piping is
displayed on the P&ID system. All kinds of information regarding maintenance history and
inspection results can be referred to part by part. Figure 12 shows a case in which
information such as operating conditions and inspection history regarding erosion-corrosion
is displayed.
5.INTELLIGENT CAE SYSTEM
Two Intelligent CAE Systems are used for design automation. Piping design of a
nuclear power plant comprises two phases. One is the model generation phase, and this
is followed by the constraints checking phase.
5.1 Automated Routing System
As part of our effort to develop a design automation system for plant piping, we
developed a prototype of an automated routing system.
5.1.1 Overview
The piping layout problem is similar to that of VLSI layout. One way to solve the
VLSI layout problem is by mathematical algorithms, but this approach proved impossible
to apply to the piping layout problem because of the strict design constraints.
Therefore, a more flexible and intelligent system was required. We have developed a
knowledge-based system using a Lisp-based expert shell. We call this the production
system.
5.1.2 Basic Method
The following is an outline of this new system. Unlike the traditional approach, in
which piping must avoid previously laid out piping, the new approach first searches for the
optimal route without considering the other piping. Then adjustments are made to all the
piping in the object area. By dividing the design process into a routing step and an
adjustment step in this way, it is possible to complete the layout design without
depending on the routing order.
5.1.3 Result
The layout results obtained when this system was applied to a simplified nuclear plant
model agreed well with the practical layout design. Although the suitability of this layout
strategy was verified, the system was not in fact used as a practical design tool. It did,
however, enable us to acquire methods for developing knowledge-based systems.
Designers have long performed design checks using scale plastic models and/or CAD
models. The following are examples of the items checked in this case.
(1) Data Discrepancy
Layout design is carried out using P&ID data, which are the result of system design.
Piping layout design is performed manually by designers, whereas VLSI design is done
automatically. For this reason, there is a possibility of discrepancies occurring in the
order of objects between the layout design result and P&ID. Such discrepancies
sometimes occur after a design modification. If modifications to the layout design are not
reflected in the system design, this will lead to a problem. This check is very important,
since it affects plant safety and reliability.
(2) Design Constraints
Drain/vent pipes are critical components. The function of drain and vent pipes is to
remove water and air, respectively. There are conditions specifying how drain/vent
pipes should be laid out. If a vent pipe is not at a global or local high point in the piping,
air will not be removed and the equipment will be affected. What is necessary is to check
whether it is actually possible to remove water and air. Sometimes a vent pipe is set on
another pipe, from which the object piping segment branches. Therefore, a tail-recursive
search over the piping is necessary. Figure 15 gives an example of a vent pipe check result.
(3) Numerical Analysis
It is also very important to carry out numerical analysis, such as eigenvalue analysis,
earthquake response analysis and thermal stress analysis. It is easy to obtain the
coordinates of a piping segment and the position of an object on a pipe, but it is awkward
to automatically create analytical models of specific objects such as valves, supports and
nozzles. We have developed a library which stores default parameter values. All
elements and nodes are represented as Instance objects. Figure 16 shows an example of
an eigenvalue analysis model and its result.
6.SUMMARY AND CONCLUSIONS
In this paper, we have outlined an integrated and intelligent database system which
forms a plant engineering framework. By integrating existing CAD systems and database
systems, we have made it possible to develop an efficient engineering environment. As
for design knowledge, we adopted object-oriented programming as the knowledge
representation method. We analyzed the hierarchical structure of the plant and the
knowledge related to each object, and then represented these, respectively, using a
Class/Instance structure and Method. We developed an automated design checking system
as one application of the technique. These integrated and newly developed systems are
used in the practical design. We have begun to develop a mechanical/electronic design
framework based on this approach.
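The Class/Instance representation described above can be sketched in a few lines. The class and method names below follow the valve hierarchy of the paper's Figure 14 (Valve, Check-Valve, Safe-Valve; P&ID-Check, Slope-Check, SetDir-Check, Stem-Check); the method bodies are illustrative, not the actual system's code:

```python
# Sketch: design-check knowledge attached to a component class hierarchy,
# with subclasses inheriting and extending the checks of their superclass.

class Valve:
    def __init__(self, tag, elevation):
        self.tag = tag              # instance slot: tag name
        self.elevation = elevation  # instance slot: geometry (simplified)

    def checks(self):
        """Names of the design checks every valve undergoes."""
        return ["P&ID-Check", "Slope-Check"]

class CheckValve(Valve):
    def checks(self):
        # Inherited checks plus the flow-direction check specific to the class.
        return super().checks() + ["SetDir-Check"]

class SafeValve(Valve):
    def checks(self):
        # Inherited checks plus the stem-orientation check.
        return super().checks() + ["Stem-Check"]

v = CheckValve("V-101", elevation=4.5)
```

Each plant component becomes an Instance carrying its slots, while the check Methods live on the Class and are shared through inheritance, which is exactly the division of knowledge the summary describes.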
REFERENCES
(1) Kolodner, J.L., Simpson, R.L., and Sycara, K., 1985, "A Process Model of Case-Based
Reasoning in Problem Solving", IJCAI-85, pp. 284-290
(2) Sakamoto et al., 1989, "PLEXSYS: An Expert System Development Tool for the Electric
Power Industry - Application and Evaluation", Proceedings of the EPRI Conference on Expert
Systems Applications for the Electric Power Industry
(3) Machiba and Sasaki, 1990, "Toshiba CAE System for Nuclear Power Plant", Proceedings of
SNA'90, pp. 425-430
(4) Kim, W., Banerjee, J., Chou, H. and Garza, J.F., 1990, "Object-Oriented database support
for CAD", Computer Aided Design, Vol. 22, No. 8, pp. 469-479
(5) Narikawa, Sasaki, et al., 1991, "An Automated Layout Design System for Industrial Plant
Piping", Proceedings of the ASME Computers in Engineering Conference, Vol. 1, pp. 1-6
(6) Hardwick, M. and Downie, R., 1991, "On Object-Oriented Data Bases, Materialized Views,
and Concurrent Engineering", Proceedings of the ASME Computers in Engineering Conference,
Engineering Database: An Enterprise Resource, pp. 93-97
(7) Kannapan, S.M. and Marshek, K.M., 1991, "Engineering Design Methodologies: A New
Perspective", Intelligent Design and Manufacturing, edited by Andrew Kusiak, Wiley
Interscience
(8) Sheth, S., 1991, "Product Data Management and Supporting Infrastructure for an
Enterprise", Proceedings of the ASME Computers in Engineering Conference, Engineering
Database: An Enterprise Resource, pp. 65-69
(9) Narikawa et al., 1992, "A Computer Aided Layout Design System for Plant Piping", The
1st JSME Conference on Design Engineering in System
(10) Sakamoto et al., 1992, "A Knowledge Based System for Nuclear Plant Design Support",
Proceedings of the ICHMT 2nd International Forum on Expert Systems and Computer
Simulation in Energy Engineering
(11) Abe, Sasaki, et al., 1992, "Toshiba Integrated Information System for Design of Nuclear
Power Plants", Proceedings of the 2nd ASME/JSME International Conference, pp. 711-715
(Figure: PDBMS data exchange with feedback via optical disk - the DB manager refers
information, confirms specifications, modifies attributes, and performs comparison and design
check.)
(Figure: 3D model review items - model elements: piping, equipment, support, duct, tray,
concrete; interference checks: physical interference and imaginary interference; review items:
patrol, operability, maintainability, assembly and disassembly, carry-in and carry-out; planning:
interior design, exterior design, yard arrangement, construction planning.)
Fig. 14. Class/Instance knowledge representation - the CAD data file, property file and
geometry file feed a class system (Pipe, Valve, ...) whose methods include P&ID-Check,
Slope-Check and Stem-Check. Check-Valve (superclass: Valve) has slots TagName, Operation,
Geometry and DependPipe, and methods SetDir-Check, Get-Geometry and Get-Prop, in
addition to inherited slots and methods such as Horiz-Check and Float-Check; Safe-Valve adds
the Stem-Check method to its inherited slots and methods.
SP249-END-USERS RESPONSE/ACCEPTANCE
What is required? What is available? What has yet to be done?
by Hans R. Kautz, Grosskraftwerk Mannheim AG
Mannheim, Germany
INTRODUCTION
Following a proposal by 13 European partners under the coordination of MPA Stuttgart, a
Sprint Specific Project (designated SP 249) has been approved and is running. The main goal
of SP 249 was to enhance the transfer of component life assessment (CLA) technology for
high-temperature components of fossil-fired power plants, assuring the diffusion of modern
state-of-the-art plant CLA technology among power plant utilities and research organizations in
Europe. The project addresses pressure parts operating at elevated temperature (operated in
the creep and creep-fatigue range) in fossil-fired power plants.
Figure 1 shows some essential definitions.
Many years ago, in connection with the authorities' requirement of retrofitting older power
plants with flue gas cleanup systems, the question of remaining life and of the possibility of
extending the service life of welded components operated in the creep range arose. What was
the approach of Grosskraftwerk Mannheim (GKM)? Initially, attempts were made to achieve
high availability and safety by extensive examinations and, later, by way of condition-oriented
maintenance, to extend component life.
WHAT IS REQUIRED?
An attempt will be made to demonstrate, on the basis of considerations laid down 10 years ago,
which possibilities the SPRINT 249 project system provides for solving the problem of plant life
extension. In those days, the question arose which reserves are still available in the plant
systems and how they can be activated. It is as hard to determine the remaining life as it is to
state what portion of the service life of a component or system has been used up (exhaustion).
GKM and other European energy suppliers tried in many different ways to solve this problem.
As early as around 1980, EPRI (Electric Power Research Institute, Palo Alto), with the assistance of the C.E.G.B. (Central Electricity Generating Board), tried to develop a strategy for the assessment of remaining life.
APPROACHES TO SERVICE LIFE ASSESSMENT
Consideration of External Loads
In former years, calculations of pipe systems were incomplete; calculations based on "as-is" pipe wall thickness and weights were started only a few years ago. The consequences, such as maldesigned hangers, pipe line displacement, and failures as a result of overstressed pipe lines and hangers, are all too well known.
Here is a brief review of plant component design for the creep range: in the boiler equation, the ultimate tensile strength after 100,000 hours at an adequate temperature, divided by 1.5, was used; since 1968/69 the design has been based on the lifetime, e.g. 200,000 or 250,000 hours, depending on the duration of the creep tests.
Pipe line components under long-term loads were designed against creep failure at predominantly steady-state operation, and only recently also against creep-fatigue failure (cyclic loads) at transient stresses. Pressure surges were ignored. Such an approach, with creep tests as the calculation basis (static loads, limited test time), is inadequate for inhomogeneously stressed components, as
premature failures have confirmed. The stipulation to consider external loads as well when designing high-temperature components under internal pressure was hoped to be the remedy. In those days engineers believed that only in this way could the life of creep-stressed components be extended, though of course not beyond the design life. The intention was to make full use of the service life determined by calculation, based on temperature, pressure, and strength parameters, of 200,000 operating hours when running the plant at standard operating conditions (with components conforming to the drawings). Local stress peaks (exceeding the specified stresses) would reduce the remaining life.
There is no need to emphasize that for the assessment of power plant components, i.e. their
availability and safety, but also for extending the service life, all operational loads should be
known and available for a life time analysis. This is almost impossible to achieve. For many
older plants hardly any documentation from the time of erection is available.
Life Time Monitoring Systems
In those days, more and more computer software was used to determine by calculation the degree of exhaustion of components under internal pressure or operated in the creep range. The software was intended to calculate (retrospectively) the used service life and the exhaustion of different components, e.g. pipe line components. In 1978, a report was published for the first time on long-term monitoring of boiler components of the Moorburg/Hamburg power plant by way of a process computer. At that time only the material exhaustion by creep was determined, but not the stresses due to load cycling. Later, an agreement was concluded among the technical associations in Germany on recording measured data and on a code for determining the degree of exhaustion of pressurized components by calculation pursuant to the German Boiler Code TRD 508. The assumption was that the component life decreases with the number of tolerated load cycles (linear damage accumulation according to Palmgren-Miner and linear life fraction according to Robinson). The damage process was assumed to start immediately after the first load cycle, and not, as in reality, after a critical number of load cycles, and to progress until rupture. Another assumption is that the damage due to creep and fatigue is linear with the number of load cycles and their duration, and that both damage types may be added to a total degree of damage. The most difficult task, however, is to determine the really "heavy-loaded" system points.
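The linear accumulation assumptions described above (Palmgren-Miner for load cycles, Robinson for creep life fractions, with both damage types simply added) can be sketched in a few lines. This is an illustrative sketch only, not the TRD 508 procedure itself; all numbers in the example are invented.

```python
# Illustrative sketch of linear damage accumulation as assumed by the
# monitoring systems described above: fatigue usage per Palmgren-Miner,
# creep usage per Robinson, and simple addition of both damage types.
# Not the actual TRD 508 procedure; all numbers below are invented.

def fatigue_usage(cycles):
    """cycles: iterable of (n_applied, N_allowed) pairs per load-cycle class."""
    return sum(n / N for n, N in cycles)

def creep_usage(holds):
    """holds: iterable of (t_operating_h, t_rupture_h) pairs per stress/temperature class."""
    return sum(t / tr for t, tr in holds)

def total_exhaustion(cycles, holds):
    # Rupture is predicted when the sum reaches 1.0 (100 % exhaustion).
    return fatigue_usage(cycles) + creep_usage(holds)

# Example: 2000 starts out of 10000 tolerable, plus 150 000 h operation at a
# condition with a 300 000 h rupture life: 0.2 + 0.5 = 0.7 (70 % exhausted).
D = total_exhaustion([(2000, 10000)], [(150000, 300000)])
```

Note that this simple addition embodies exactly the criticized assumptions: damage starts with the very first cycle and accumulates linearly until rupture.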
Damage Rules
Damage rules for multiaxial stress states, variable loads, and temperature history have their
origin with Robinson in 1937. They are expressed as summations of time ratios, strain ratios,
or combinations of time and strain ratios. Additionally, Kachanov and Rabotnov dealt with
damage functions which varied in a continuous fashion from 0 to 1 between test initiation and
failure. Coffin and Goldhoff, Abo El Ata and Finnie have provided summaries of some of the
more useful damage rules. Figure 2 is a survey of these damage rules, which again gives rise to doubts as to which is the optimal rule.
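In general form (a reconstruction from the description above, not the exact equations of Figure 2), the rules are summations of time fractions, strain fractions, or combinations of both, with failure predicted when the sum reaches unity:

```latex
% Robinson time-fraction rule
\sum_i \frac{t_i}{t_{ri}} = 1
% strain-fraction rule
\sum_i \frac{\varepsilon_i}{\varepsilon_{ri}} = 1
% mixed rule combining time and strain fractions
\sum_i \left( \lambda\,\frac{t_i}{t_{ri}} + (1-\lambda)\,\frac{\varepsilon_i}{\varepsilon_{ri}} \right) = 1
```

Here $t_i$ and $\varepsilon_i$ are the time spent and the strain accumulated under condition $i$, $t_{ri}$ and $\varepsilon_{ri}$ the corresponding rupture time and rupture strain, and $\lambda$ a weighting factor; the Kachanov-Rabotnov approach instead evolves a continuous damage variable from 0 to 1 between test initiation and failure.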
It is interesting to note that the Main-Wiesbaden AG power plant decided only in the past few years to participate in the development of a life time monitoring system within the European BRITE project, in order to fully exploit, according to the state of science and technology, the possibilities of non-destructive material examination and monitoring within preventive maintenance.
Nonetheless, a number of problems due to physical and material-related conditions occur when recording the necessary data.
STATISTICS
The standard calculation methods for assessing the service life of components in the superheated steam region, such as design and service life calculations and the calculation of the degree of exhaustion, were considered inadequate by the German authorities. Therefore, attempts were made to incorporate statistical findings into the service life calculation, considering the scatter of the creep and wall thickness values (of the individual components), of temperature, etc. The application of statistical methods was intended to allow statements as to what failure rates or possible damage had to be anticipated at a specific time. Engineers believed this would make the complex conditions somewhat more comprehensible and allow preventive measures to be initiated systematically.
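The statistical idea can be sketched as a simple Monte Carlo propagation of scatter through a time-fraction criterion. The lognormal distribution and its parameters below are invented for illustration and are not taken from the German codes.

```python
# Hypothetical sketch of the statistical approach described above: propagate
# the scatter of creep rupture life through a time-fraction criterion to
# estimate the failure probability at a given operating time. The lognormal
# distribution and its parameters are invented for illustration only.
import math
import random

def failure_probability(t_service_h, n_samples=20000, seed=1):
    random.seed(seed)
    mu = math.log(200000.0)  # median rupture life of 200 000 h (invented)
    sigma = 0.3              # scatter of the rupture life (invented)
    failures = sum(
        1
        for _ in range(n_samples)
        if random.lognormvariate(mu, sigma) <= t_service_h
    )
    return failures / n_samples
```

With such a model, a statement like "after 150 000 h, roughly x % of these components are expected to have failed" becomes possible, which is exactly the kind of answer the statistical approach was meant to deliver.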
PROBLEMS OF ASSESSING THE DEGREE OF EXHAUSTION OF POWER PLANT
COMPONENTS
The safety and profitability of a power plant are determined to a great extent by the possible failure of individual components. The determination of the degree of exhaustion, and thus of the service life consumed so far, of high-temperature pressurized components such as pipe lines and headers is based principally on the following approach:
Calculation of the spent and the remaining life under creep and fatigue conditions, e.g. pursuant to the German Pressure Vessel Code TRD 508;
The importance of developing advanced methods and procedures for the service life calculation
or estimation was also stressed in the course of various conferences and in publications. However, it must be remembered that novel and more detailed methods for life assessment are
difficult to introduce in the field. In most cases this is due to the fact that the required data
and/or the personnel necessary for performing the sophisticated analyses are unavailable.
Field experience and case studies demonstrated that the factors listed below result in such large errors in the prediction of the remaining life that the whole procedure becomes more or less useless. The factors are:
scatter of material data, local material inhomogeneities or differing material
properties (e.g. due to heat treatment or welding),
unknown additional forces or moments,
differences between design and as-is geometries,
uncertainties in the determination of temporally and locally variable operating
parameters such as pressure and/or temperature,
conservative strength and damage hypotheses,
design errors, predominantly due to wrong assumptions or, in other words, due to a "lack of engineering expertise and intuition" or "deviation from approved engineering practice".
Table 1 shows the effect of these influencing factors on the plant and component life.
Many years ago a paper was presented with the statement: "There is presently no accurate procedure for determining the remaining life, even though methods are available. The remaining life of a system operated in the creep range cannot be extended beyond the design life. A component may be used within the specified life only if the actual operational stress does not exceed the design stress."
MICROSTRUCTURE
One method of assessing the component condition has been very successful during the past years: over the years, the microstructural changes occurring in the course of operation gained more and more interest. In 1969, the first in-situ microstructure assessment with an attachable microscope was performed in the Hamm-Uentrop power plant (prior to commissioning). The adoption of replica taking represented considerable progress. In the past years this examination method gained more and more acceptance, and the Guidelines for the Assessment of the Microstructure and Damage Evolution of Creep-exposed Materials for Pipes and Boiler Components were refined. Table 2 shows the assessment classes and their definition.
GOALS OF SP 249
Many past conferences and publications have revealed, however, how difficult it is to assess the materials condition, and thus the degree of exhaustion of a component, if engineers fail to establish a system to organize the huge quantity of acquired data, to compare the meaningfulness of examination techniques and results, to harmonize the differing levels of understanding in power plant engineering (given the uneven distribution of experts and resources in Europe and all over the world), and thus to find a common language. It is this difficulty that motivates the development and implementation of knowledge-based systems for assessing the remaining life.
THE TURNING POINT
When GKM was asked some years ago by the Stuttgart Materials Testing Institute (MPA) to participate in the development of a computer-aided management system, i.e. a knowledge-based system for assessing the remaining life of power plant components, consent was given without really knowing whether this effort would meet with success, but with the hope of finding an improved solution to the problem.
In such cases, expert systems, knowledge-based systems, or similar software may provide effective support for engineers concerned with these problems. This is the basis for developing, e.g., the Expert System for damage analysis and determination of Remaining life (ESR) at the Stuttgart Materials Testing Institute.
USER INTEREST IN KNOWLEDGE-BASED SYSTEMS
In a power plant like GKM the interest in technical development and the efforts to improve
continuously safety and quality of components and materials are as important as the availability
of the plant. Human expertise of different specialists is essential for the assessment, but is often
unavailable in the plants the very moment it is needed. Thus the plant engineers are often facing
the question of what to do with the component and/or the plant (e.g. shut down and reinspect,
reduce load, pressure, temperature, etc.). In the field of power plant instrumentation and control, GKM makes every effort to hold a leading position in modern techniques. Including an expert system that could support the departments of construction, calculation, maintenance, and others is one of the goals. GKM therefore decided to sponsor the project financially and to provide expert knowledge, so that the system would be capable of satisfying the following requirements and needs of the company:
improve safety and availability of piping systems first, and of other components in the future;
support and help personnel in performing usual maintenance tasks;
support and help personnel when dealing with specific subproblems;
facilitate the search for and use of necessary documentation;
save and reuse the knowledge of human experts (personnel);
preserve experience and knowledge gained during the manufacturing/construction phases;
perform strategic analyses and analyses of case studies;
better evaluate single aspects of new cases and compare them with stored ones;
use both "plant specific" and "plant neutral" data.
WHAT IS AVAILABLE?
Knowledge Based System (KBS)
The current status of the SP249 system includes the following methods, procedures, and information:
Generic guidelines for CLA
Overall structure of the system
Object management modules
Advanced Assessment Route
Case history management (with about 100 case histories)
Documentation management (with all CLA generic guidelines and associated standards
and codes like DIN, TRD, ASME, VGB and NT standards)
Material database (with the relevant standards ISO, DIN, BSI, ASTM and other materials)
-Parameter calculation
Hardness calculation
TULIP (Tube Life Prediction)
Case history selection and management
Crack dating
SP249 remanent life calculation
Inclusion of oxidation effects
SP249 material database
Inverse stress calculation as per German Boiler Code TRD
Creep and fatigue usage calculation as per TRD
The modules yet to be developed are:
Cavity density measurement
Linear extrapolation
Influence of chemical composition on the remaining life
Failure assessment
Training materials for CLA and for SP249 KBS
SP249 KBS implemented at all participating utilities.
Although not all of the methods and procedures listed here will find the approval of the utilities, they were included as state-of-the-art methods.
As an example: some years ago the assessment models for the determination of the remaining component life were discussed. Theoretical models of remaining life assessment are based on a combination of a mathematical creep curve description and metallurgical variables such as a quantifiable degree of failure and the distance between particles. The practical application so far is limited to some experimental results; systematic use is still to come. As a result of the uncertainties and necessary simplifications of the model bases and of the scatter of the measured results, it is unclear how to assess the life span realistically. The following models are candidates: the Needham model, the -parameter model pursuant to Shammas, the p-parameter model according to Riedel, the A*-parameter model pursuant to Eggeler, and the particle spacing/EPRI model. We found that only the particle spacing model might be suitable for practical application.
In any case, the advanced assessment route, on which other authors have already reported and which will be briefly discussed later, is the link between the theoretical knowledge base and the case studies.
BENEFITS OF SP 249
Maintenance, Remaining Life, Costs
Statistics show that presently 50 to 70 % of the German power plants exceed their design life of approx. 20 years. Toward the end of the century, 15 to 25 % of the power plants will have reached the 40-year limit and will have to be repaired according to their condition, to mitigate or eliminate old design-induced or operation-induced mistakes and continue operation, unless new plants are constructed. This also means that the service life of a number of power plants must be extended again in order to maintain the energy supply. The expenditure for maintenance, which increases in the course of life extension over the entire service life, must however be assessed against investment costs of 2000 to 3500 DM per installed kilowatt for a new plant. Life extension spread out over 10 to 15 years amounts to approx. 50 DM per kilowatt; according to U.S. data, on the contrary, up to 500 $ per kilowatt. Life extending activities in a power plant require a period of two to five years. The dominating aspect, however, is the problem of profitability and service life analysis. In order to fully solve this problem, a decision must be taken in which the service life calculation, based on a condition analysis of the total plant and on the component load analysis, provides an overview of the expected service life of all components. The longer the intended overall service life, the higher the rehabilitation costs for the system will be. Besides the objective of obtaining a possibly identical service life for all components, the extension of the operating time should be paralleled by retrofitting and upgrading, including improved performance, efficiency, and availability.
A major reason for participating in the SP249 project is to reduce maintenance costs. A great advantage is that the knowledge-based system will contain both plant-neutral and plant-specific information. This offers a further possibility to improve the knowledge of the plant condition: the actual "as-is" state does not have to be fully known. The engineer will receive a prognosis enabling him to choose a specific investigation method for the components; this prognosis depends on the number and quality of the incorporated case studies. Such recommendations help to minimize costs. Recommendations may include, e.g.,
furnishing of a scaffold,
removal of insulation,
selection of specific non-destructive or destructive tests,
attachment of insulation,
prediction of the plant outage time.
In the USA, a survey by the Stone & Webster Corporation has shown that incorporating reliability-centered maintenance (RCM) functional analysis in the design phase can save a major portion of the up-front implementation cost. It not only solves most of the translation problems, by doing the translation as part of the design process, but also avoids costly design errors and misapplications.
Until some years ago, in the conflict between prevention at all costs and operation until breakdown, attempts were frequently made to avoid failures by early detection and elimination of incipient damage. As a result of extensive expertise and of the possibilities provided by electronic data processing, we are now able to perform maintenance work in a more differentiated and efficient way. There are three stages (Table 3):
* preventive maintenance (at fixed intervals),
* condition-oriented maintenance,
* failure-dependent maintenance.
Condition-oriented maintenance includes all system components affecting the plant availability and/or safety whose failure involves the risk of subsequent damage. Maintenance is performed according to the findings of the condition check, the prerequisites being one, or better several, diagnostic methods (examination procedures) and a component utilization of 80 - 95 %.
Condition-oriented maintenance is based on reliable early detection by way of process monitoring and recurrent inspection. Systematic plant monitoring is thus the prerequisite. By increasing the application of monitoring systems, i.e. in-process monitoring, maintenance can be optimized and plant availability increased. The adoption of maintenance models based on electronic data processing will finally result in an integrated process control system.
Failure-dependent maintenance means waiting until failure occurs in components which are
not relevant for the plant safety and availability or which are redundant.
Advanced Assessment Route (Table 4)
In order to overcome this dilemma (too many data, a huge quantity of urgently wanted information), a so-called "road map" was developed within the SP249 system, with the intention of allowing the user of the knowledge-based system not only to enter his specific problem, but also to be given, by the system, a reasonable choice between various remedial measures. The road map, or advanced assessment route, allows the engineer to fall back on a collection of case studies, look into component histories, and recall pertinent standards and codes, and it will finally recommend a solution, e.g. run, repair, or shut down, which the engineer then considers. This route is therefore the link between the database of the system, which includes all pertinent information in the form of engineering rules, models, codes, and standards, and the compilation of events in the form of case studies and component histories.
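Purely as an illustration of how such a road map terminates in a recommendation (the classes and thresholds below are invented, not those of the SP249 system):

```python
# Invented illustration of the final step of an assessment route: combine a
# calculated exhaustion fraction with a microstructural assessment class and
# return a run / repair / shut down recommendation. Thresholds are made up.

def recommend(exhaustion, assessment_class):
    """exhaustion: creep/fatigue usage fraction (1.0 = 100 % exhausted);
    assessment_class: 0 (no damage) .. 5 (severe damage), loosely following
    the idea of the microstructure assessment classes mentioned in the text."""
    if exhaustion >= 1.0 or assessment_class >= 4:
        return "shut down"
    if exhaustion >= 0.7 or assessment_class >= 2:
        return "repair / reinspect at a reduced interval"
    return "run"

# Example: 60 % exhausted, minor microstructural damage -> continue operation.
decision = recommend(0.6, 1)
```

The real route reaches such a recommendation through the flow charts of Table 4, enriched with case studies, component histories, and the pertinent codes.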
WHAT HAS YET TO BE DONE?
The system must be kept up to date by continually integrating new case studies and linking them with the advanced assessment route to help in making decisions.
Implementation of power plant component life assessment methodology using a knowledge-based system
There is no doubt that the implementation of the knowledge-based system of the SP249 project in the power plant will create problems: there is hardly any acceptance among those who have to use it, the software is not yet available in the national languages, and the required application knowledge is lacking. A basic knowledge of electronic data processing and computer hardware should be available.
Table 5 shows the SP249 KBS minimum user's profile. This item is frequently overlooked when implementing a knowledge-based system in a power plant.
CONCLUSION
Without its many negative, and occasionally positive, experiences GKM would not have sponsored the development of a knowledge-based system, let alone actively participated in it. It is absolutely mandatory that the innumerable individual events (failures, accidents, upset conditions, etc.) which can no longer be handled in a conventional way be compiled in a way that allows easy access and useful combination with accepted engineering rules, official standards and codes.
The advanced assessment route provides a systematic approach to component life assessment.
In its present form it is applicable to high-temperature boiler components and pipe lines. All
stages of life assessment are covered from the initial plant prioritization through conventional
and advanced inspection techniques including defect assessment methods to the 'run/repair/shut
down' decision. The route is in the form of a series of logically connected flow charts identical
to those displayed on the screen by the knowledge-based system. The linkage with generic
guidelines for life assessment is made whenever appropriate.
EXHAUSTION
This is the consumed deformation capacity characteristic of a material. The degree of exhaustion upon rupture is always 100 %. There is a non-linear relation between strain and time; an almost linear relation is likely to be observed until the end of the secondary creep range (the range of constant creep rate).
CALCULATION OF EXHAUSTION
The calculation of the degree of exhaustion is based on operating parameters and the
German Pressure Vessel Code TRD 508. In this code the calculation procedure is
described by examples. The major calculation variable for the determination of power
plant component creep exhaustion is the creep strength of creep resistant steels
pursuant to DIN Standard 17175.
CREEP
Creep is the time-dependent, progressive, ductile deformation at constant (static) load.
This phenomenon is caused in metals by the mobility of the atoms, which increases with temperature, and by the behavior of the lattice defects. The diffusion-controlled, thermally activated change of location is of critical importance for these processes.
CREEP DAMAGE
This is an irreversible degradation of the microstructure occurring under the simultaneous impact of temperature and stress.
CREEP STRENGTH
Creep strength is that static stress which results in specimen failure at the end of a
specified load period. The high-temperature strength of metals is characterized
predominantly by the creep characteristics.
Figure 1
[Figure 2: equations (1)-(5), the damage-rule summations surveyed in the text, are not legibly reproduced.]
Figure 2
[Table 1, showing the effect of the influencing factors (temperature amplitude, internal pressure, additional stresses, wall temperature, material parameter) on plant and component life in percent, is not legibly reproduced.]
Table 1
[Table 2, listing the microstructure assessment classes 0, 1, 2a, 2b, 3a, 3b, 4 and 5 with their definitions, is not legibly reproduced.]
Table 2
[Table 3, contrasting the maintenance strategies: preventive at intervals (replacement of parts); condition-oriented (component utilization 60 - 80 % or 80 - 95 %, operation monitoring and recurrent examinations, impact on availability); failure-dependent (redundancy necessary, no impact on availability). The column layout is not legibly reproduced.]
Table 3
[Table 4, the Advanced Assessment Route flow chart: Phase (1-1) general calculation assessment, Phase (1-4) assess creep/fatigue life fraction, Phase (1-5) check operational factors, with CONTINUE branches to Phase 2 or Phase (1-6). The chart itself is not legibly reproduced.]
[Table 5, the SP249 KBS minimum user's profile: software-related knowledge required for the normal, advanced, installation and authoring modes, e.g. mouse operation, basic knowledge of power plant materials and applicable standards, hardware configuration and screen drivers, operating system (DOS/Windows), report editing and bitmap handling; for entering case studies, none if the case study is well prepared on paper, otherwise the "Advanced mode" requirements and deep knowledge of CLA technology (experienced expert). The table layout is not legibly reproduced.]
Table 5
Authors' index
P. Auerkari
VTT Manufacturing Technology
P.O. Box 1704 (Kemistintie 3, Espoo)
SF - 02044 VTT
Finland
Tel.:00 358 4 0 - 5 01 51 83
Fax: 00 358 - 0 - 456 - 7002
M. Behravesh
Electric Power Research Institute
3412 Hillview Avenue
CA 94303 Palo Alto
U.S.A.
Tel: 001 (415)855-2388
Fax: 001 (415) 855-2774
J. M. Brear
ERA Technology Ltd.
Cleeve Road
KT22 7SA Leatherhead Surrey
United Kingdom
Tel: 00-441-372-374-151
Fax: 00-441-372-374496
B. Cane
ERA Technology Ltd.
Cleeve Road
KT 22 7SA Leatherhead Surrey
England
Tel.: 00 441 (0) 1372 367000
Fax: 00 441 (0)1372 367099
L. A. D. Correa
Petróleo Brasileiro S/A Petrobras/Repar
Rod. do Xisto KM 16
Araucária - Paraná
Brazil
Tel.: 0055 41 8412541
Fax: 005541 8431244
D. V. Coury
Escola de Engenharia de São Carlos
Universidade de São Paulo
Av. Dr. Carlos Botelho, 1465 - Cp 359
13560-970 São Carlos
Brazil
Tel.: 0055 (162) 72 6222
Fax: 0055 (162) 74 9235
H. P. Ellingsen
MPA Stuttgart
University of Stuttgart, Pfaffenwaldring 32,
70569 Stuttgart
Germany
Tel.:0049-711 -2579361
S.Fukuda
Tokyo Metropolitan Institute of Technology
6-6 Asahigaoka, Hino
191 Tokyo
Japan
Tel: 00-81-4-2583-5111, ex. 266
Fax: 0081425835119
M. Gruden
Grupa R&D Consultancy Ltd
Vodnikova 8
SI 61000 Ljubljana
Slovenia
Tel.:+386 61 55 33 70
Fax:+386 61 55 17 74
A. S. Jovanovic
MPA Stuttgart
University of Stuttgart, Pfaffenwaldring 32,
70569 Stuttgart
Germany
Tel.:00-49-711-685-3007
Fax: 685-2635
H. R. Kautz
Großkraftwerk Mannheim AG
Postfach 24 02 64
68172 Mannheim
Germany
Tel.: 00-49-621-868-3702 or 3703
Fax:00-49-621-868-3710
M. C. Klinguelfus
COPEL - Companhia Paranaense de Energia
(LAC/CNAT)
Rua Coronel Dulcídio 800 Curitiba - Paraná
Brazil
Tel.: 0055 (41) 366-2020
Fax: 55 (41) 266-3582
J. A. B. Montevechi
Escola Federal de Engenharia de Itajubá
CX. Postal 50
CEP 37500-000 Itajubá (MG)
Brazil
Tel.: 0055 (35) 6291212
Fax: 0055 (35) 6291148
H. H. Over
Institute of Advanced Materials, JRC Petten
of the European Commission,
Netherlands
Tel.:00-31-2246-5256
Fax: 003122463424
Dora de Castro Rubio Poli
Instituto de Pesquisas Energéticas e Nucleares
(IPEN - CNEN/SP)
Travessa R 400 - Cidade Universitária / CX.
Postal 11049 - 5508-900 São Paulo
Brazil
Tel.: 0055 (11) 8169281
Fax: 0055 (11) 8169186
M. Poloni
MPA Stuttgart
University of Stuttgart, Pfaffenwaldring 32,
70569 Stuttgart
Germany
Tel.: 0049 711 685 2040
Fax:
358
S. Psomas
MPA Stuttgart
University of Stuttgart, Pfaffenwaldring 32,
70569 Stuttgart
Germany
Tel.: 632994
G. M. Ribeiro
Companhia Energetica de Minas GeraisCEMIG
Av. Barbacena 1200 - Belo Horizonte 30161-970-MG
Brazil
T. Sato
Nuclear Plant Design and Engineering Dept.
Toshiba Corporation
Yokohama,
Japan
Tel.: 0055(247) 621 135
R. D. Townsend
ERA Technology Ltd.
Cleeve Road
KT22 7SA Leatherhead Surrey
United Kingdom
B. R. Upadhyaya
The University of Tennessee, Knoxville, TN,
USA
Tel.:00-1-615-974-5048
R. Weber
MIT GmbH
Aachen
Germany
Tel.: 02203/300573
S. Yoshimura
Department of Quantum Engineering and
Systems Science,
The University
of Tokyo, 7-3-1 Hongo, Bunkyo
Tokyo 113
Japan
Tel.: 00-81-474-74-1070
Fax:00-81-474-74-1070
SMiRT 13
Post Conference
Seminar Nr. 13
Sao Paulo, Brazil
August 21-23, 1993
Proceedings
CLNA17669ENC