
Process Engineering

Training Program
MODULE 6
Process Control in the Cement Industry
Section Content
1 Process Control
2 Some Practical Experience with an Expert Kiln Control System
3 CE Refresher Articles
4 New Concept for Cement Plant Control
5 Modernization of Control Systems in Cement Plants
6 Basic Concepts for Feedback Control
7 Selective Control Systems
8 Proportional Plus Integral Control
9 Integral Windup and the Batch Switch
10 LINKman Computer Based Kiln Control
11 Computer Based Kiln Control – LINKman

HBM Process Engineering Conference

Neural Net Control Systems


The Real Cost of Kiln Fuels

Presentations
Process Control Presentation – Joe Stratton
Kiln Control Systems
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 1

Process Control
1. INTRODUCTION

In the pursuit of lower manufacturing costs, cement plants have become more costly and sophisticated; in particular, the importance of conserving energy is paramount. It is not surprising, therefore, that instrumentation and control technology has attained a position of great significance in cement factories.

The purpose of this paper is to point out the many possibilities of applying this technology to advantage when
designing new plants or modernising existing ones.

Although the scope of the paper precludes design details, many examples of existing installations are given in
diagrammatic form.

2. OBJECTIVES

The main objective must be the production of cement of the required quality for the lowest cost. This objective may be sectionalised as follows:

2.1 THROUGHPUT

Throughput should be maintained as closely as possible to the target level as any shortfall will increase unit
costs.

2.2 QUALITY

Quality should be maintained as closely as possible to the target; the risk lies in the tendency to exceed the
quality target in order to avoid the product falling below standard. Exceeding the quality target entails higher
energy consumption and thus an increase in manufacturing costs.

2.3 COST

Cost of production is the aggregate of costs including raw materials, energy, labor, plant maintenance, etc., each of which should be kept to a minimum commensurate with achieving the targets of quantity and quality.

2.4 PROTECTION

A further objective is protection of plant, personnel and environment.

3. OPERATING CRITERIA

The cement making process embodies a series of unit operations from the winning of raw materials to the
despatch of cement. Unit operations comprise milling, blending, burning and the transportation of materials.
Each operation presents a set of conditions which must be met if the objectives are to be achieved. These
conditions are manifest by physical measurements such as flow, pressure, temperature, weight, density,
viscosity, chemical and physical analysis etc.

Methods of measurement and control application to these operating criteria are readily available and it is the
job of the Process Control Engineer to select suitable instrumentation and to design control systems capable of
achieving the stated objectives.

We will now endeavour to show how the judicious application of process control instrumentation can improve
the cement manufacturing process in all departments.

3.1 MILLING

The main objective in any milling process is to maintain a consistent product at the lowest possible cost.

3.1.1. FEEDRATE

Within constraints imposed by the design of the milling system and the physical properties of the materials
being ground the unit cost of production is directly dependent upon throughput. It is essential, therefore, to
maintain the mill feed rate at a maximum level consistent with a product of the required quality.

Referring to Fig. 26.1 the ultimate constraint on feed rate, i.e. the target, is represented by line (a) and the
objective is to maintain the actual feed rate as close to this as possible.

Variations about a mean will be present in any feed rate and the magnitude of these variations determines how
close an approach can be made to the target without risk of overshooting. Line (b) represents the actual feed
rate both with and without control and the gain resulting from control is clearly illustrated.
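
As an illustration of this point, the sketch below (in Python, with invented figures rather than plant data) shows how the allowable mean feed rate depends on the spread of the variations: the tighter the control, the closer the mean can sit to the target line (a).

# Illustrative only: how feed-rate variability limits the usable set point.
def max_safe_setpoint(target, std_dev, sigmas=3.0):
    """Mean feed rate that keeps excursions below the target for
    roughly 'sigmas' standard deviations of variation."""
    return target - sigmas * std_dev

target = 100.0  # t/h, the ultimate constraint, line (a) in Fig. 26.1
print(max_safe_setpoint(target, std_dev=5.0))   # manual operation: 85.0 t/h
print(max_safe_setpoint(target, std_dev=1.5))   # with automatic control: 95.5 t/h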

Increased mill throughput affects the overall performance in several ways, e.g. lower kWh/tonne and shorter running time to produce the required amount of product (an important factor in cement milling, as it enables power to be used during off-peak periods at reduced cost).

The magnitude of feed variations also affects the power required to produce a given fineness as illustrated in
Fig. 26.2. This shows the non-linear relationship existing between power consumed and surface area of
cement mill product.

Fig. 26.3 depicts a typical example of the method used to ensure a controlled feed rate to a grinding mill.
3.1.2 OTHER CONSIDERATIONS

In the case of the wet process, raw materials are ground with water to make a slurry and it is essential to
restrict the addition of water to the minimum required for grinding, mixing and transportation. Density is a
convenient parameter directly related to slurry moisture and density of the mill product can be used to regulate
the water input to the mill.
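
The density-to-moisture relationship relied upon here can be sketched as follows; the solids density used is an assumed value and ideal mixing is assumed, so the figures are illustrative only.

# Moisture mass fraction of a slurry from its density (t/m3), assuming ideal mixing.
def moisture_fraction(slurry_density, solids_density=2.70, water_density=1.00):
    return (1.0 / slurry_density - 1.0 / solids_density) / \
           (1.0 / water_density - 1.0 / solids_density)

print(round(100 * moisture_fraction(1.70), 1))  # about 34.6% moisture at 1.70 t/m3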

Very often the limiting factor is slurry viscosity and it may be advantageous to use deflocculating agents in
order to keep the slurry moisture down. Fig. 26.8 shows a typical slurry preparation system incorporating
additive control.

The preparation of raw meal for the dry process entails drying and this is effected by passing hot air through
the mill. Kiln exit gases may be used for this purpose but it may be necessary to employ hot air furnaces and in
that case it is important to avoid over-drying with consequent wastage of fuel. Considerable economies can be achieved by measuring the moisture in the mill product and automatically regulating the fuel input to the furnace accordingly.

In the case of cement milling it is important to avoid the production of cement with false setting characteristics
due to gypsum decomposition arising from high milling temperatures. There is an increasing use of internal
water sprays in this connection; Fig. 26.4 shows control of water at the mill inlet from the diaphragm temperature and at the mill outlet from the cement temperature. Fig. 26.4 also shows the various measurement points on a closed-circuit cement mill.

3.1.3 PLANT PROTECTION

Many other factors influence the cost of production by virtue of their effect on the availability and optimum
use of plant e.g. mill and gearbox bearing temperatures; cooling water flow rates; motor winding
temperatures; excessive vibration; mill blockage etc. All these factors may be classified as plant protection
requirements and they should be covered by suitable monitoring and alarm provisions.

3.2 BLENDING

The correct blending of materials is, of course, fundamental in the cement manufacturing process and the
objective is to produce the right mixture for the lowest cost.

3.2.1 RAW MILL FEED

The blending of raw materials usually starts at the point of extraction and the method used will depend upon
the type of process and the nature of the materials.

Fig. 26.5 shows the blending of soft chalk with clay in a washmill and it will be seen that the chalk feed into
the washmill is weight controlled and clay tipping is regulated by the chalk weight signal. Water is volume
controlled and automatically regulated by the chalk weight signal.
Fig. 26.6 shows the blending of hard chalk or limestone with clay in a tube mill and it will be seen that clay
slurry is brought to a constant moisture by the controlled addition of water. This enables a more accurate blend
of stone and clay to be made.

Fig. 26.7 shows the blending of limestone with low silica and high silica shales. Each material is weight
controlled into the mill with shale in a preselected (adjustable) ratio to limestone. The associated
instrumentation allows the total mill feed to be altered without upsetting the ratio of materials. Also there is an
independent adjustment of limestone to total shale and independent adjustment of high and low silica shale
ratio.
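
The set-point arithmetic behind such an arrangement might be sketched as follows; the function name and figures are hypothetical, but the point is that the three feed rates are derived from the total feed and the two ratios, so altering any one adjustment leaves the others undisturbed.

# Hypothetical sketch of the Fig. 26.7 set-point calculation.
def blend_setpoints(total_feed, limestone_fraction, high_silica_share):
    """Return (limestone, high-silica shale, low-silica shale) feed rates in t/h."""
    limestone = total_feed * limestone_fraction
    total_shale = total_feed - limestone
    high_shale = total_shale * high_silica_share
    low_shale = total_shale - high_shale
    return limestone, high_shale, low_shale

print(blend_setpoints(total_feed=120.0, limestone_fraction=0.85, high_silica_share=0.6))
# -> (102.0, 10.8, 7.2); changing total_feed rescales all three without upsetting the ratios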

3.2.2 KILN FEED

Further blending usually takes place after milling and this may involve mixing batches of slurries or ground
materials. The correct batching may be based upon either volumetric displacement using continuous level
detection or weight using electric load cells. In this connection it may be of interest to note that Blue Circle Cement's Cauldon Works has a raw meal blending system comprising two 750 ton capacity tanks on load cells.

3.2.3 CEMENT MILL FEED

A further example of blending is the controlled addition of gypsum to clinker in the cement milling process
and Fig. 26.9 shows a typical arrangement which is designed to maintain any preselected ratio of one material
to the other.

So far we have referred to various methods by which the blending of materials can be achieved with the aid of
process control instrumentation and it will be recognized that correct proportioning is obtained by adjustments
to the controller set points. These adjustments are based upon chemical analysis of the product which may be
carried out periodically in the conventional manner or continuously by means of an X-ray analyzer and
associated sampling equipment.
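
A minimal sketch of how such an analysis is turned into a set-point adjustment is given below. The lime saturation factor formula is a commonly used form; the analysis figures and the target are assumptions for illustration, not values from the paper.

# Lime saturation factor from an oxide analysis, used to trim the limestone ratio.
def lsf(cao, sio2, al2o3, fe2o3):
    return 100.0 * cao / (2.8 * sio2 + 1.2 * al2o3 + 0.65 * fe2o3)

analysis = {"cao": 43.2, "sio2": 13.8, "al2o3": 3.4, "fe2o3": 2.1}   # % by X-ray
current, target = lsf(**analysis), 97.0
print(round(current, 1), "raise limestone ratio" if current < target else "lower limestone ratio")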

3.3 KILN OPERATION

Major factors influencing kiln performance are - variations in kiln feed (both quantity and quality), heat input,
kiln gas flow, secondary air temperature, flow of material through the kiln.

It used to be common practice to allow these variations to take place and leave it to the kiln operator to take care of the resulting problems in the kiln burning zone. This he did by adjusting one or more of the "wild" variables referred to above, thereby increasing the probability of further problems.

Most of these variations can be eliminated at source and kiln running conditions will be much more stable as a
result. Indeed it is axiomatic that the elimination of variations is a prerequisite to any further, more
sophisticated control.

The benefits to be gained by increased stability lie in higher potential output with correspondingly lower unit
cost, more efficient use of fuel, longer refractory life and a more consistent product.

The major variables will now be considered in detail.


3.3.1 KILN FEEDRATE

Kiln feed arrangements depend upon the type of process i.e. wet, dry or semi-dry and control is based upon
volumetric or gravimetric measurements. Bucket or spoon feeders were in general use for slurry feeds until
fairly recently but these have now been superseded by the magnetic flowmeter.

This meter is easily inserted into the delivery pipe and is capable of directly regulating the pump speed thus
saving the considerable capital cost and upkeep of the bucket feeder. Fig. 26.10 shows a typical installation
equipped with checking facilities.

The feed of powdered raw meal is weight controlled in the case of dry process kilns, and it should be noted that suspension preheaters demand a much higher short-term accuracy than conventional weighfeeders are capable of giving. In this case it is most advantageous to employ the so-called 'loss-in-weight' system; a typical arrangement is shown in Fig. 26.11. This system is based upon a batch weighing principle and is capable of maintaining the feed within ±0.2% from minute to minute.
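
The principle can be sketched as follows; the gains, scan interval and weights are invented for illustration, and during the batch refill (when the hopper weight rises) the rate estimate would be held rather than used.

# Sketch of the loss-in-weight principle: infer the feed rate from the falling
# hopper weight and trim the extraction screw speed to hold the set point.
def loss_in_weight_step(weight_now, weight_prev, dt_s, setpoint_tph, screw_speed, gain=0.02):
    rate_tph = (weight_prev - weight_now) / dt_s * 3600.0   # tonnes lost per hour
    error = setpoint_tph - rate_tph
    return screw_speed + gain * error, rate_tph             # new speed, measured rate

speed, rate = loss_in_weight_step(weight_now=49.90, weight_prev=50.00,
                                  dt_s=10.0, setpoint_tph=38.0, screw_speed=62.0)
print(round(rate, 1), round(speed, 2))   # 36.0 t/h measured, speed nudged up to 62.04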

The Lepol kiln feed comprises two separate stages -

a) the preparation of nodules by the controlled addition of water to a constant weight of powdered raw meal
and

b) the controlled rate of extraction from the nodule hopper by the Lepol grate.

Fig. 26.12 shows a conventional belt weigher regulating the extraction of raw meal from storage and also, via
a ratio controller, the amount of added water. The constant weight of nodules thus formed must be exactly
matched by the Lepol grate extraction rate in order to avoid over filling or emptying the nodule hopper; the
former eventuality would cause a nuisance and the latter would allow serious inleak of air to take place. A
constant level is maintained in the nodule hopper by automatic regulation of the grate speed.

3.3.2 FUEL FEED

Whether a kiln is fired by coal, oil or gas the feed rate should remain constant unless purposefully altered by
the operator. Feedrate measurement and control is a simple matter in the case of oil or gas but more difficult in
the case of coal.

Direct firing arrangements using low retention mills working under suction present little difficulty as the raw
coal feed rate may be measured and controlled by means of a conventional belt weigher; Fig. 26.13 shows a
typical arrangement.

Direct firing coal mills working under pressure conditions demand a sealed weighing system in order to
prevent egress of coal dust laden air. The 'loss-in-weight' system referred to in 3.3.1 is satisfactory under these
circumstances and enables precise adjustments to be made to the coal feed.

Indirect coal firing arrangements entail the control of pulverised coal feed rate to the firing pipe and similar
considerations apply with regard to the type of weighing system adopted. Fig. 26.14 shows weight control of
pulverised coal introduced at the pressure side of the firing fan.
3.3.3 KILN GAS FLOW

The air drawn into the front end of a kiln by the induced draught fan serves two purposes - a) to enable the
combustion process to take place and b) to transfer heat from the burning zone to other parts of the kiln.
Unwanted variations in air flow occur when the kiln restriction alters due to ring formations etc. and this has a
disturbing effect on kiln performance. Although the benefits are self evident there is no satisfactory method of
directly measuring this air flow.

However, with stable fuel feed conditions any air flow variations will be detected as oxygen variations in the
kiln exit gases. If the exit oxygen is kept constant by automatic regulation of the induced draught fan speed a
stable air flow will result. Adjustments to the pattern of heat transfer along the kiln may be brought about by
increasing or decreasing the oxygen control set point.
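
A minimal sketch of such a loop is shown below: a proportional-plus-integral controller holding kiln exit oxygen at its set point by trimming the induced draught fan speed. The gains, limits and scan interval are assumptions, not recommended settings.

# Illustrative PI loop: exit O2 set point -> induced draught fan speed.
class PIController:
    def __init__(self, kp, ki, out_min, out_max, initial_output):
        self.kp, self.ki = kp, ki
        self.out_min, self.out_max = out_min, out_max
        self.integral = initial_output        # start at the current fan speed (bumpless)

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += self.ki * error * dt
        # clamp the integral so the output cannot wind up beyond the fan limits
        self.integral = max(self.out_min, min(self.out_max, self.integral))
        return max(self.out_min, min(self.out_max, self.kp * error + self.integral))

o2_loop = PIController(kp=5.0, ki=0.2, out_min=40.0, out_max=100.0, initial_output=70.0)
print(o2_loop.update(setpoint=2.0, measurement=1.6, dt=5.0))   # low O2 -> fan speed raised to 72.4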

3.3.4 SECONDARY AIR TEMPERATURE

Air entering the kiln carries the heat recovered in the clinker cooler and direct fuel savings and stable kiln
conditions are brought about by stabilizing the temperature of this air at as high a value as possible.

The temperature of air leaving the clinker cooler is related to its volume and in the case of rotary or planetary
coolers this volume is fixed by the kiln requirements. Apart from ensuring a stable air flow as described in
3.3.3 little can be done to offset the effects of variations in the amount of clinker entering the cooler.

With grate coolers the cooling air volume is in excess of kiln requirements and its distribution is adjustable.
Referring to Fig. 26.15 it will be seen that the volume of air directed to the first undergrate chamber is kept
constant thus enabling the pressure in this chamber to be taken as a measure of bed permeability. Any
variations in the amount of clinker entering the cooler tend to alter the bed permeability and this is corrected
by automatic regulation of the grate speed.

3.3.5 KILN DRIVE POWER

The flow of materials through a kiln has a great effect on the stability of operation and, unfortunately, there is no practical way of monitoring this parameter, let alone controlling it. However, it has been found that the power consumed by the kiln drive motor reflects to some extent the pattern of movement within the kiln. In this connection a record of kiln driving motor power is usually provided.

3.3.6 BURNING ZONE TEMPERATURE

It is possible to obtain a useful measurement using the so called 'two color' radiation pyrometer but it is
subject to interference from fine particulate suspensions and movement of the burning zone.
3.3.7 PLANT PROTECTION

Many other factors influence the cost of production by virtue of their effect on the availability and optimum use of plant, e.g. kiln shell temperatures, cooler grate plate temperatures, bearing temperatures, and the presence of explosive gases in coal mills and electrostatic precipitators (CO monitors).

All these parameters are readily measured and it is usual to provide such instrumentation including any
necessary alarm and plant shut-down facilities.

The importance of providing continuous monitoring and alarm facilities will be evident when, for instance,
the protection afforded by kiln shell temperature is considered. Knowledge of the kiln shell temperature
profile enables the operator to avoid costly shut down due to premature failure of refractory linings.

3.3.8 ENVIRONMENTAL PROTECTION

The most serious potential hazard is dust emission into the atmosphere and it is now becoming common
practice to install continuous monitoring equipment on kiln and mill effluents.

Spillage of materials from silos and transporting systems can create a considerable nuisance and this can be
avoided by installing level devices capable of shutting down plant where necessary.

3.3.9 SAFETY

Dry process plants present a serious risk of explosion in preheaters and electrostatic precipitators due to the
accumulation of carbon monoxide. Continuous monitoring of CO concentration in kiln and preheater exit
gases is therefore essential and provisions must be made to automatically shut off the fuel supply to the kiln
and the high tension supply to the precipitators if CO concentration exceeds the set limit.
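
A sketch of the interlock logic is given below; the trip limit and the wording of the actions are illustrative assumptions, as actual limits are plant specific.

# Hypothetical CO interlock: trip fuel and precipitator HT if CO exceeds the limit.
CO_TRIP_LIMIT = 0.5   # % CO, assumed value

def co_interlock(co_kiln_exit, co_preheater_exit):
    actions = []
    if max(co_kiln_exit, co_preheater_exit) > CO_TRIP_LIMIT:
        actions.append("shut off kiln fuel supply")
        actions.append("isolate precipitator high-tension supply")
    return actions

print(co_interlock(co_kiln_exit=0.1, co_preheater_exit=0.8))
# -> ['shut off kiln fuel supply', 'isolate precipitator high-tension supply']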

Where grate type coolers are installed there is a need to balance the air supplied from the cooler with that required by the kiln in order to avoid dangerous pressure conditions in the kiln hood. This balance is achieved by automatic regulation of the cooler exhaust damper as shown in Fig. 26.15.

Another important aspect in the avoidance of explosions, blowbacks etc. is the necessity to ensure that all
plant regulators such as dampers operate in a fail-safe mode and it should be noted that 'safe' in respect of
plant protection may in some cases be quite the reverse in respect of personnel protection.

4 MATERIAL TRANSPORT

Materials are transported through the various stages of manufacture from the quarry to cement despatch in
many ways e.g. belts, screws, elevators, pneumatic conveyors, pipelines etc. Factors to be considered are:-

a) that material is actually flowing


b) that the transport system is fully utilized without spillage
c) that intermediate storage capacity is effectively utilized without spillage
Instrumentation is readily available to cover requirements a and b and selection will depend upon the
circumstances e.g. belt weighers, screw level detectors, elevator power consumption, slurry flowmeters etc.
Closed circuit television may also be used with advantage to avoid spillage of materials especially at transfer
points in the system.

The effective utilization of storage capacity demands a knowledge of the contents of silos, hoppers etc. and
again selection of suitable equipment will depend upon the circumstances. The contents of steel silos and
hoppers may best be obtained by weight whereas the contents of concrete silos are usually based upon a level
measurement.

It is also advantageous to have prior warning of impediments to flow and in this connection various devices
have been developed to indicate blockage in mill inlets, preheater cyclones etc.

The despatch of cement is costly and it is important to ensure that transport vehicles are correctly loaded as
quickly as possible. Various methods based upon level or weight have been developed to suit particular
circumstances.

5. CENTRALISED CONTROL

In the introduction it was pointed out that plants have become more sophisticated in the pursuit of lower
manufacturing costs; this inevitably calls for a corresponding degree of sophistication in the control of the
process.

Apart from the aspects of measurement and control already referred to there is a need to bring together in one
location all the means of starting, stopping and operating the plant; this entails collection and presentation in
the most suitable form of all the information required. Closed circuit television is widely used for the
observation of kiln burning conditions, conveyor transfer points etc. and this allows the control room to be
situated wherever required. It can be stated that the concept of centralized control is one of the major
contributions towards efficient modern cement plants made possible by process instrumentation.

5.1 VISUAL PRESENTATION

Information from all parts of the plant may be displayed in the central control room by means of closed circuit
television and analogue or digital representation; that which is required for the continuous operation of plant is
best presented in pictorial and analogue forms whereas historical information may be digital.

There are two distinctive approaches :-

1. All information in whatever form is displayed simultaneously and continuously.

2. Information is displayed on demand.

The first approach has the advantage of immediate availability and comparability of information but the
disadvantage of large space requirements.

The second approach to some extent sacrifices availability and comparability to achieve compactness.
Plant protection information such as high bearing temperature, oil flow failure etc. is presented on
annunciators which afford both visual and audible alarm facilities. Certain items in this category may also be
logged on print-out devices.

6. COMPUTER CONTROL

A great deal has been said about the role of computers in the control of cement plants and many installations
have, indeed, been made. Early installations were extremely costly, employing large high-capacity computers capable of handling commercial matters as well as process control. These were superseded by smaller computers, of limited capacity, designed to handle process control only. Such installations generally employ the computer to perform tasks falling into one or more of the following categories:-

a) sequence starting and stopping of plant


b) provision of alarms
c) control of process parameters by the solution of control equations

Categories a) and b) do not involve calculations and may be adequately covered by conventional, less costly
systems; e.g. sequence starting and stopping by programmable controllers and alarms by annunciators.

Experience has shown that the most successful area of application in category c) is in the control of raw
material blending which is outside the scope of conventional analogue controllers. In such applications the
computer, operating in conjunction with continuous X-ray analysis equipment, is able to regulate the flow of
each material to produce the correct blend at the least cost.
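
The paper does not describe the computer's algorithm, but the idea of a least-cost blend can be sketched as below: a search over mix proportions of hypothetical materials for the cheapest blend whose lime saturation factor falls within a target band. All compositions, costs and limits are invented for illustration.

# Rough least-cost blending sketch with three hypothetical raw materials.
MATERIALS = {
    "limestone":  {"cao": 52.0, "sio2": 3.0,  "al2o3": 0.8,  "fe2o3": 0.4, "cost": 4.0},
    "shale_low":  {"cao": 4.0,  "sio2": 55.0, "al2o3": 18.0, "fe2o3": 7.0, "cost": 3.0},
    "shale_high": {"cao": 2.0,  "sio2": 70.0, "al2o3": 12.0, "fe2o3": 5.0, "cost": 3.5},
}

def blend_lsf(mix):
    ox = {k: sum(MATERIALS[m][k] * f for m, f in mix.items())
          for k in ("cao", "sio2", "al2o3", "fe2o3")}
    return 100.0 * ox["cao"] / (2.8 * ox["sio2"] + 1.2 * ox["al2o3"] + 0.65 * ox["fe2o3"])

best = None
for a in range(101):                     # limestone, % of blend
    for b in range(101 - a):             # low-silica shale, %
        mix = {"limestone": a / 100, "shale_low": b / 100, "shale_high": (100 - a - b) / 100}
        if 96.0 <= blend_lsf(mix) <= 98.0:
            cost = sum(MATERIALS[m]["cost"] * f for m, f in mix.items())
            if best is None or cost < best[0]:
                best = (cost, mix)
print(best)   # cheapest mix meeting the LSF band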

The majority of process control requirements in category c) can be met just as well and at lower cost by
conventional analogue controllers. A serious disadvantage of computer control is that all the plant controls
cease to function simultaneously in the event of computer failure and the consequences could be serious. For
this reason it is necessary to install a duplicate stand-by computer or to retain analogue control backup
facilities at considerably increased capital outlay.

Problems arising from total dependency have been removed by the advent of micro-processors which may
well be dedicated to individual control loops and small areas of plant and provide a more versatile and less
costly approach to centralized plant control.

In this respect the Company is actively considering the advisability of installing a micro-processor based
centralized display system on a U.K. Works.

Color monitors would replace conventional indicators, recorders and mimic diagrams; associated with the display would be an operator's console incorporating all the controls necessary for running, but not for starting and stopping, plant.

The system would in effect be a substitute for the existing conventional control panel and would perform all
the same tasks in a different manner. The essential difference is that with the new system information and
control facilities concerning each section of plant are called up when required whereas with the conventional
system all the information and control facilities are permanently displayed.
The equipment associated with this system is capable of providing a greater degree of control sophistication than conventional controllers, and this can be further extended by the addition of a computer when requirements have been defined.
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 2

Some Practical Experience With an


Expert Kiln Control System
SOME PRACTICAL EXPERIENCE WITH AN EXPERT KILN CONTROL SYSTEM

SUMMARY

In 1982 Blue Circle took a firm decision to commit resources to solving the problems associated with the
application of an Expert System.

These problems were overcome and some 60% of Blue Circle Industries U.K. clinker is produced with the aid
of an expert system.

The system has been successfully applied to wet process, filter cake process, dry process and Lepol kilns, and these kilns are achieving greater benefits than predicted at the time of resource commitment.

The benefits have been identified as primarily stemming from a more stable kiln operation produced by the
constant monitoring and consequently earlier, smaller adjustments to the kiln control parameters.

Whilst the emphasis on each aspect varies with the type of kiln, each has shown fuel and refractory savings together with the potential to increase clinker production. The less variable, generally softer clinker leads to cement mill power savings and aids the production of less variable cement, which is of considerable value in a competitive market.

The technology developed is not confined to the cement industry, and has already been installed on a
Lubricating Oil Plant and a Glass Manufacturing Plant. This paper, however, is confined to the experience on
cement kilns within Blue Circle.

This paper outlines the development, highlights the manner in which obstacles have been overcome and
quantifies the practical benefits obtained by adoption of an expert kiln control system. Development of this
expert system would not have been possible without the full co-operation in a joint venture of SIRA who now
market the fully developed system through their subsidiary IMAGE AUTOMATION under the trade name
LINKMAN.

INTRODUCTION AND BACKGROUND

Before 1982 Blue Circle, in common with many other industries, had spent much effort on trying to produce a mathematical model of a kiln in order to bring the clinker producing process under computer control. It was generally accepted that human operators would naturally err on the side of caution when controlling a kiln and that as a result the kiln would be operated at higher temperatures than strictly necessary in order to provide a 'heat reservoir' to deal with any perturbations. The goal therefore was to produce an effective model which would facilitate automatic computer control leading to lower temperatures, providing benefits of fuel savings, extra throughput, reduced refractory wear and softer, more stable clinker.

The mathematical model proved elusive, however, and whilst individual discrete loops were applied, such as Kiln Back End Oxygen controlling coal, Back End Temperature controlling Back End Dampers, and kiln feed and kiln speed linked to a pre-set ratio, the inability of any mix of these to offer a complete solution meant that they were sporadically and incompletely introduced throughout the company's works.
In 1982 Blue Circle carried out a full review of the clinker making process to establish the potential benefits of
achieving effective automatic kiln control, and to identify the best method of pursuing these potential benefits
if indeed a "best method" existed.

By comparing "best achieved performance" of its kilns with the "actual normal" performance and by assessing
the alternative methods of control available to diminish the difference between these two performances Blue
Circle identified the following relevant facts:

1. The potential savings were sufficiently large to justify a substantially increased resource allocation to
the purpose.

2. The system most likely to improve kiln control to the level desired would be an on line expert
system utilising a rule based control strategy.

3. Since a fully suitable system was not currently available then Blue Circle must perforce develop its
own.

4. Because of the energy saving potential, financial support could be, and subsequently was obtained
from the U.K. Department of Energy.

THE CEMENT MAKING PROCESS

(Figure 1) shows a typical dry process kiln and the only requirement would seem to be to apply a constant feed
rate of constant composition raw meal into the back end, burn a constant rate of constant composition coal in
the front end, draw sufficient air for combustion through the kiln rotating at a fixed speed and the kiln will
make a constant rate of good clinker.

Sadly this often proves not to be possible and Appendix I illustrates the many input variables which can cause
a deviation from this ideal.

In practice few of these input parameters can be maintained constantly at the desired level and the variation is often such as to cause a very unstable kiln. Skilled operators can respond to control this instability by operating the few controls available to them, viz:

1. Alter the raw meal feed rate - via feed rate control (belt weigher, etc).

2. Alter the Coal Feed Rate - via the speed of a volumetric feeder or weigher (volume control).

3. Alter the kiln speed - via the kiln drive motor variable speed control.

4. Alter the Airflow to the kiln - by adjusting the Back End Damper.

5. Alter the amount of Precipitator Dust being fed back to kiln - via a feeder installed for this purpose.
A good kiln operator can often stabilize a kiln by carrying out several adjustments at the same time and indeed
when the operator is fresh, highly motivated to succeed and free from other diversions, he can often make a
very good job of maintaining a stable kiln even when many events are combining to prevent this.

Unfortunately the human operator cannot be as fresh at the end of an eight hour shift as at the beginning and
there are many conflicting duties such as report form completion which draw his concentration from the kiln
at inappropriate moments.

This then was the background to the adoption of an expert system, the intention being to encapsulate the best performance in the form of a set of rules which would mimic the operator's ideal response to any particular set of circumstances.

THE BLUE CIRCLE "EXPERT" KILN CONTROL SYSTEM

The Blue Circle "Expert" kiln control system is comprised of equipment (Figure 2) which:-

1. Collects and validates the data which an expert kiln burner acquires in order to judge what action he should
take.

2. Subjects this data to a set of rules previously defined in simple English by the expert burner (with help from technologists).

3. Adjusts one or more of the kiln controls.

4. Makes visible to the current kiln burner at all times the input data, relevant rules, and proposed and implemented adjustments to the kiln controls.

Input Data Required

In practice only a few parameters are essential to the basic rule blocks, though these increase as the control strategy is refined by the operating Works. Initially the following would be considered essential (see Appendix 2):

- Kiln Exit NOX


- Kiln Exit O2
- Kiln Exit CO
- Back-End Temperature
- Kiln Amps
- Kiln Speed
- Kiln Feed Rate
- Fuel Feed Rate
- Damper Position or Fan Speed

Other measurements which are later used in an optimizing manner include Feed LSF and Clinker Free Lime. On most kilns we have only 4 independent control parameters and these, with an indication of their main effects, are shown in (Figure No. 3).

Collecting the Data

Blue Circle have used two methods of collecting data and sending control signals to final elements. One is to use a simple, dedicated signal multiplexor to which all field signals are fed in a standard form (usually 4-20 mA).

The other method is to communicate, using a suitable protocol, with standard panel instruments and data acquisition units via the data highway normally interrogated by the instrument manufacturer's central display system. The choice is dictated by what instrumentation is already available at the Works under consideration. Both types are illustrated in (Figure 2).

Applying the Rules

Much has been written on the development of L.A. Zadeh's original work on fuzzy logic and it is not the purpose of this paper to pursue this topic. It is acknowledged, however, that this work was fundamental to the development of LINKman, which operates using menu-driven rules of the type:-

Rule 1 If BZT is 'high' and OXY 'low' then reduce coal by 'small' amount

Rule 2 If BZT is 'high' and OXY 'OK' then increase feed by 'small' amount and open damper a 'small'
amount

Rule 3 If BZT is 'high' and OXY 'low' then open damper by a 'medium' amount

The definitions of 'high', 'low' and 'medium' need careful consideration and will often change during commissioning. All rules are scanned for their 'degree of fulfillment' and merged to provide a 'proposed' change in feed, coal rate, damper, kiln speed, etc.
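
A much simplified sketch of this kind of rule evaluation is given below. The membership shapes, limits and gains are invented for illustration and are not LINKman's actual values; kiln exit NOx is used as the burning zone temperature signal, as discussed elsewhere in this paper.

# Simplified fuzzy-rule evaluation: degree of fulfillment of each rule, merged
# into proposed changes to the kiln controls.
def degree_high(x, lo, hi):                 # how 'high' x is, between lo and hi
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def kiln_rules(bzt_nox, oxy):
    bzt_high = degree_high(bzt_nox, 400.0, 700.0)    # NOx (ppm) standing in for BZT
    oxy_low = 1.0 - degree_high(oxy, 1.0, 2.5)       # % O2
    oxy_ok = 1.0 - oxy_low

    dof1 = min(bzt_high, oxy_low)    # Rule 1: BZT high, O2 low  -> reduce coal slightly
    dof2 = min(bzt_high, oxy_ok)     # Rule 2: BZT high, O2 OK   -> more feed, open damper

    coal_change = -0.5 * dof1        # % changes, scaled by degree of fulfillment
    feed_change = +0.5 * dof2
    damper_change = +1.0 * dof2
    return coal_change, feed_change, damper_change

print(kiln_rules(bzt_nox=650.0, oxy=1.2))   # -> small coal cut, little feed/damper action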

The prime aim of the expert kiln control system is to keep BZT, O2 and BET to optimum values as illustrated
in (Figure 4).

Implementing the Control Adjustment

The Blue Circle expert controller is normally commissioned by displaying the 'proposed' adjustment for the operator's consideration. When he has developed sufficient confidence in the equipment's decision making he will push the 'computer in control' button and the adjustments will automatically be implemented from then on until he resumes control by pressing the 'computer off' button.

The computer's intended action is recalculated every minute and displayed on the screen so that the burner can, at any time, check that its intentions are 'honorable'. If it proposes drastic action of which he strongly disapproves, he can instantly resume control of the process.
The Key to Expert System Success within Blue Circle

1. The strategy (see Figure 5) is very visible and can clearly be seen to mimic the manual actions of the kiln
operators - they like this.

2. The 'autopilot' label with which the system has been 'marketed' has enabled the operators to accept it as a
'tool' rather than as a 'threat' - they are always in charge.

3. The system's constant vigilance and 'anticipatory' small increments of adjustment lead to less deviation than under manual control - most burners will happily acknowledge that "as long as things are normal, the system controls the kiln better than I can - of course I will always be needed when major upsets occur and for starting and stopping the plant".

4. The system will switch between different rule blocks, self-check its own and the instrumentation's integrity, implement Boolean logic and accept additional data from laboratory and operator, from which optimizing steps can be taken, e.g. feeding forward the effect of an LSF change.

5. The system, in practice, mimics not an individual operator but a consensus of operators, management and
technologists. Once set up it cannot operate in an inconsistent manner as an operator with a headache
could.

6. Strategy development using this system is simple, incremental and fast. The previous 'best' strategy for a
kiln of the same type is used as a starting point.

7. The shell program developed by Sira provides for rules, definitions of high, low, etc to be fed in by
response to a menu by support staff with little in the way of computer programming skills. Some basic
programming language is helpful for full system development.

LESSONS LEARNT DURING THE EXPERT SYSTEM DEVELOPMENT

1. CORRECT MOTIVATION OF WORK FORCE

Initial reactions to the proposal to install "Computer Control" vary, but if it is handled unsympathetically at the outset, we learned that managers could view it as a scapegoat, operators could fear it as a threat to job security and satisfaction, and maintenance staff could feel that an unfair burden was being placed on them.

The solution to this in Blue Circle has been to have full presentation and open discussion with works' staff
well in advance of a proposed installation, to encourage inter-works visits where their opposite number will
often sell the project to them in a totally convincing way and finally to identify a system "champion" and
"deputy" from existing works' staff who will nurse the project through to fruition. Blue Circle always stress
the autopilot nature of the expert system and never let it be forgotten that the human operator must always be
prepared to judge the units performance and overrule it in extreme circumstances.

The underlying justification must always be that the works on which the unit is being installed are convinced
that it can, and committed to ensuring that it will, help them produce more of a better quality product at lower
cost.
It has been found that in order to give the champion and his deputy the confidence and competence to properly promulgate the project it is necessary to provide one week's off-site training on the system and its justification. This takes the form of a hands-on session and whilst computer-numerate people take to this extremely easily, no particular problems have occurred with people initially without this skill. Considerable effort has gone into the aspect of user friendliness and menu driving to deskill this activity as far as possible.

2. DRAWING OUT THE LOCAL "SECRETS."

It is commonly accepted within the cement industry that no two kilns behave in an identical way. Blue Circle have found it vital to involve the operators at an early stage to draw out from them the particular variances of behavior of their kiln. This is normally done using pre-prepared forms in an informal setting; the special knowledge is captured in parallel with the basic rules and can then be inserted into the expert system's control strategy by the project engineer.

The on site presentations and training are targeted toward management getting an overview of the project and
developing an understanding of how it will affect people on site, whilst ensuring that operators become
conversant with the keyboard and system menus.

It is generally found at this time that the "better" operators tend to be very supportive and welcome the addition of a tool to their tool box, but the poorer ones can easily become confused and see the system as a threat if insufficient time is given to resolving their doubts.

3. RETROFITTING THE SYSTEM TO AN EXISTING KILN

Generally speaking, the more modern the kiln instrumentation, the easier and cheaper it is to install an expert system. We have developed two alternative systems for data collection and dissemination (control outputs). Our preferred approach is to access a modern control display system or advanced instruments directly on the highway via a suitable protocol converter. The other method is to bring all signals to and from a purpose-designed interface unit in the form of standard signal levels (e.g. 4-20 mA, 0-10 V, 5-10 V, etc.).

On some older installations using a predominance of pneumatic instrumentation we have found that the cost
of adapting the existing instrumentation to provide the standard electrical in/out signals has rivaled the cost of
providing the expert system. We have also needed to expand the size of control rooms on several occasions to
accept the additional equipment.

Regardless of which method is adopted the actual setting up of the system can be done with very little
interference with the normal works operation. The only people involved at this stage are those responsible for
connecting the wiring and configuring the system database.

4. A PARTICULAR VITAL LESSON

One vital difference between human operation and expert system operation has proved to be the size of the control increment applied. Whilst the operator generally waits until sufficient deviation from the norm has occurred to justify a substantial move on the final control element, the success of the expert system is based on its sensing the need for adjustment at an earlier time and consequently making a smaller adjustment to the controls.
The significance of this is that quite often an amount of hysteresis in the control chain that may be quite
acceptable to the human operator (though he may have preferred an improvement) becomes totally
unacceptable to the expert system since it may need several corrective increments before overcoming the
hysteresis backlash and actually making an adjustment.

This has led to our universally adopting the provision of a dedicated feedback loop, where this does not previously exist, for any control parameter which we wish to adjust. A particularly fine example of this is kiln speed control, where a normal pony-motor-driven speed adjustment may have a typical backlash of 2% of speed, whereas a typical increment of speed applied by the expert control system can be less than 0.5% of speed.
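
The effect of such backlash on small, frequent increments can be sketched as follows (the 2% band and 0.5% steps are the figures quoted above; the mechanism itself is simplified).

# Why backlash matters more under expert control: small demanded changes
# accumulate inside the dead band and the drive only moves once it is exceeded.
def apply_with_backlash(demands, backlash=2.0):
    pending, moves = 0.0, []
    for d in demands:
        pending += d
        if abs(pending) > backlash:
            moves.append(pending)      # the drive finally moves by the accumulated amount
            pending = 0.0
        else:
            moves.append(0.0)          # demand swallowed by the dead band
    return moves

print(apply_with_backlash([0.5, 0.5, 0.5, 0.5, 0.5]))
# -> [0.0, 0.0, 0.0, 0.0, 2.5]: no movement until the fifth 0.5% increment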

5. DEVELOPMENT OF HIGH LEVEL CONTROL STRATEGY

When the wiring is complete, the additional instrumentation installed and the system database configured, the high level control strategy must be developed. This is normally done by providing 24-hour cover which, in addition to speeding and easing the strategy development, has proved extremely useful in gaining the goodwill of the shift personnel, without whose co-operation the project is doomed to failure.

The method adopted is to test the simplest strategies possible and only provide further development and
enhancement when it becomes unavoidable. The simpler the strategy the more easily this will be understood
and supported by works' staff.

It is at this stage that the project is in most danger since the system needs to be set up to control several
variables at a time and conflict can often arise as to whether the system action (or inaction) is soundly based.

This is particularly the case if substantial deviations or cycling occurs, and in rapid strategy development this
will often be the case. At this time the choice of "champion" is seen to be vital and he needs to have the total
respect and confidence of the management since it will generally be he who soothes their fears.

It is not unknown for senior managers to become seriously concerned at this time and they too must be convinced that a short-term loss (of stability) is necessary in order to produce a long-term gain.

The operators are normally more sanguine at this stage because they have generally seen the kiln suffer much greater deviations. They can see that the changes implemented by the strategy are similar to the ones they themselves would make, if smaller and more frequent.

The interim strategy should be operating within a few days, after which it is a case of painstaking improvements, usually implemented after observing several cycles of control.
6. BENEFITS PREDICTED

The study in 1982 showed that operating kilns in a stable manner and consequently at a lower burning
temperature would offer potential overall savings of:

1. Direct Fuel Savings of 2%

2. Kiln Refractory Savings of 20%

3. Cement Milling Energy Savings of 10%

4. Increased Kiln Production by 5%

The cement mill energy savings would accrue from softer, less variable but more reactive clinker. Other less
tangible benefits could also be expected from the lower burning zone temperature and stable kiln operation.
These savings were predicted for kilns which were considered by Blue Circle to be well run, and the total savings would be in excess of £2 million per annum.

7. BENEFITS OBTAINED : THE REASON FOR BLUE CIRCLE'S SATISFACTION

Some 60% of Blue Circle U.K. Clinker is now being produced with the aid of an expert control system. A
total of nine kilns covering wet process, filter cake process, Lepol process and dry process have been
equipped. Experience with the system has (after initial hiccups) been very favorable with all works achieving
substantial benefits.

All works report that the more stable kiln operation under this form of control has led to generally lower
burning temperatures (See Figure 6) giving rise to direct fuel savings of 1-5% and increased average output of
more than 5% due to increased kiln availability. One feature often remarked upon has been the elimination of
the shift "changeover syndrome" since the kiln often used to become unstable at shift changeover as the next
operator attempted his own "cures".

The changeover to integrated working within the industry has meant that as many as fourteen operators can be
responsible for "driving" the kiln over a period of one working pattern (several weeks) and the Expert System
is proving to be of great value in helping to accommodate this change.

Refractory costs have been reduced by some 10%, and one kiln has run for its longest ever campaign between brick repairs.

At one Lepol plant the cement milling energy consumption has dropped by more than 10%, and all of this is attributed to the more stable kiln conditions obtained under expert system kiln control. Other works have not achieved as much reduction as this, but the situation is clouded by the use of inter-works clinker transfer, which means that not all clinker is burnt using an expert control system. (Figure 7) illustrates how the system saves milling energy by producing more clinker of optimum quality, which requires less grinding energy than clinker produced at high temperature.

Whilst the financial benefits of improved uniformity of cement quality are difficult to assess, the uniformity is perceived as a distinct advantage in a competitive market.

Thus overall, the actual benefits achieved are providing Blue Circle with a payback period measured in months rather than years on the Expert System investment, and the total benefits from all U.K. works handsomely exceed the £2 million per annum predicted.

The target for running time on computers is 90% of kiln available time and this is being achieved at several works. Others are less fortunate, and the actual achieved percentage varies from 60% to 90%.

The works with the lower achievement will strive to improve their performance and inter-works visits to pick
up enhancements and program developments are encouraged. These tend to be minor adjustments after the
original tuning but can nonetheless make an effective contribution to profits.

It is intended that the remaining 40% of clinker production will have the technology implemented in the future, though some of the remaining works have extra problems such as control rooms that simply cannot be extended and will not house the equipment in its present form. Naturally, the wholesale transfer of the control room to alternative premises presents a much longer and more expensive project than the simple application of an expert system.

A summary of the major benefits according to Blue Circle as a result of the application of expert control can
be found in (Figure 8).
APPENDIX I

SOME REASONS WHY KILN DOES NOT REMAIN IN OPTIMUM BURNING CONDITION

1. Slurry chemical composition changes

2. Slurry physical composition (residue) changes

3. Slurry Moisture content changes

4. Slurry flowrate to kiln changes

5. Coal chemical composition changes

6. Coal ash content changes

7. Coal moisture content changes

8. Coal physical composition changes (residue)

9. Coal flow rate varies

10. Heat loss from kiln changes (e.g. rain on shell)

11. Amount of inleaking air changes (e.g. inlet seal gap changes, outlet seal gap changes, clinker ring builds, mill ring builds)

12. Kiln speed changes

13. Coating falls away from kiln lining

14. Bricks spall or wear

15. Production of dust in kiln changes

16. Flow of air through kiln changes - e.g. fan blades coat with dust

17. Temperature of secondary air changes - e.g. clinker size changes, waste cooler gas flow rate changes, cooler chamber fan air changes, bed depth in cooler changes, amount of air leaking from cooler chamber changes.
APPENDIX 2

AVAILABILITY & RELIABILITY OF SENSORS

At an early stage in the development of the Expert System it became apparent that information on the state of
the burning zone was severely lacking.

The methods of assessing the "Burning Zone Temperature" were either direct two-color pyrometer temperature measurement or kiln power measurement. A third system, for which some success is claimed, is radioactive-sensor measurement of the angle of climb of the feed, which kiln power indicates only by inference. Blue Circle has not yet tried this system, primarily because of the high expense (£50,000 estimated).

The two-color pyrometer still suffers from interference by dust and "flame flicker", and kiln power has proved to be unspecific, insensitive and too late with its information. This of course is a generalization, and cases are known where it serves reasonably well.

We needed a more responsive and specific measurement to assess the burning zone condition and, after some initial problems, we adopted a nitrogen oxide (NOx) analyzer which uses the same sample as the kiln back-end oxygen and CO analyzer. This has proved to be a vital component in the development of the expert system, and Blue Circle advocate the adoption of this measurement as a precursor to, or on an impoverished works as a poor substitute for, the provision of a complete cohesive expert control system.

FIGURE I - Outline of a Dry Process Plant

FIGURE 2 - A Typical Expert System

FIGURE 3 - How BZT, O2 and BET are Controlled

FIGURE 4 - Aims of High Level Control

FIGURE 5 - Strategy Overview

FIGURE 6 - Fuel Consumption Vs NOX

FIGURE 7 - Clinker Quality Vs BZT

FIGURE 8 - Summary of Major Benefits

W. HENDERSON
CHIEF ELECTRICAL/PROCESS CONTROL ENGINEER
BLUE CIRCLE INDUSTRIES PLC

OCTOBER 1988
FIG 3. HOW ARE BZT, O2 & BET CONTROLLED ?

THERE ARE ONLY 4 INDEPENDENT CONTROL PARAMETERS

ie. COAL, FEED, DAMPER & SPEED

WHAT EFFECT DO THESE HAVE ON THE PROCESS ?

1. +ve COAL change gives -ve O2 (combustion)

+ve BZT (later due to thermal inertia)

+ve BET (more heat in kiln)

2. +ve FEED change gives -ve O2 (decarbonation)

-ve BZT (heat absorbed by meal)

-ve BET (heat absorbed by meal)

3. +ve DAMPER change gives +ve O2 (more air)

-ve BZT (lower flame temperature)

+ve BET (heat shifts from BZ to BE; poorer heat transfer to feed)

4. KILN SPEED GOVERNS FEED RESIDENCE TIME

DECREASE SPEED for a LOW BZT

RAISE SPEED WHEN FEEDING KILN (constant degree of fill)

GENERALLY SPEED PROPORTIONAL TO FEED


FIG 4. AIM OF HIGH LEVEL KILN CONTROL

TO KEEP BZT, O2, BET TO THEIR OPTIMUM VALUES

eg. BZT TOO HIGH - WASTE FUEL

BZT TOO LOW - UNSTABLE KILN

O2 TOO HIGH - WASTE FUEL

O2 TOO LOW - REDUCING CONDITIONS

BET TOO HIGH - WASTE FUEL

BET TOO LOW - INADEQUATE FEED PREPARATION

GENERALLY, KILN BURNERS OVERBURN TO ENSURE A STABLE KILN

FOUR SHIFT SYSTEM LEADS TO FOUR DIFFERENT CONTROL STRATEGIES

LINKMAN ENABLES A SINGLE CONTROL STRATEGY TO BE PROGRESSIVELY OPTIMISED


FIG 8. SUMMARY OF MAJOR BENEFITS OF CEMENT BASED HIGH LEVEL CONTROL

                                                             TYPICAL RANGE      BEST ACHIEVED

Standard fuel consumption is substantially reduced           -2.5% to -5%       -10%

Clinker outputs can be increased over and above the
equivalent of the reduced standard fuel consumption          +2.5% to +5%       +10%

Product quality is significantly improved and clinker
grindability reduced                                         +2.5% to +5%       +10%

Milling costs are reduced in line with the improved
product quality and reduced grindability                     -7.5% to -15%      -30%

Peak and average refractory temperatures, and associated
cyclic thermal stresses, are reduced                         -50°C to -100°C    -200°C

Refractory life is increased                                 "BEST"             30% plus

Kiln exit NOx levels, with respect to both pre-LINKman
and pre-NOx-monitoring periods, are reduced                  -25%               -50%

Running times are improved                                   80%                90%

IN ADDITION

Kiln specific knowledge concerning both the process and process dynamics is greatly enhanced

Improved working practices can be developed

High level control superimposes a consistent approach to control and eliminates the normal shift variations

The system offers a powerful management data collection and logging facility

High level control opens up an opportunity for management to better manage the process and its operation
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 3

CE Refresher Articles
A new CE REFRESHER series begins in this issue on instrumentation and techniques applicable to the control of processes. The subjects to be covered are:

- Basic concepts
- Basic control modes
- Tuning process controllers
- Techniques of feedback control
- Combining feedback control loops
- Instrument scaling
- Advanced control techniques
- Advanced control
- Microprocessor regulatory control
- Process control computers

Basic concepts, terminology and techniques for process control
How the interplay among the measured, load and control
variables is established in order to achieve required
objectives for controlling process operations.

Lewis M. Gordon, The Foxboro Co.

Any study of process control must begin by investigating the concept of a "process." From a production viewpoint, it is generally thought of as a place where materials and, most often, energy come together to produce a desired product. From a control viewpoint, the meaning is more specific. A process is identified as having one or more variables associated with it that are important enough for their values to be known and for them to be controlled.

Initially, in this new CE REFRESHER (see accompanying box for series topics), we will concentrate on processes having only one controlled variable, such as the heat-exchange process shown in Fig. 1a. To maintain the temperature of the product (hot water) in this process, another variable influencing the variable being controlled must be available for manipulation by the control system.

In this example, the control system manipulates the position of a steam valve. However, the temperature of the water depends not only on the position of this valve but also on the flowrate of the water, its inlet temperature, the enthalpy of the steam, the degree of fouling in the exchanger, and the ambient temperature.

This simple example illustrates controlled, manipulated and load variables - the three categories associated with every process under control (Fig. 1b). The parameters that indicate product quality or the operating condition of the process are called controlled variables, such as pressure, level, temperature, pH, specific gravity or density, composition, moisture content, weight and speed, and other variables, depending on the process.

Manipulated variables include valve position, damper position, motor speed and blade pitch. Further, one control loop is often manipulated for controlling another variable in more complicated control schemes. For example, a flow variable is manipulated to control a temperature or a level.

All variables affecting a controlled variable, other than the one being manipulated, are defined as loads. Both loads and the manipulated variable may influence a controlled variable from either the supply side or the demand side of the process. For example, the outlet temperature of a heat exchanger can be controlled by manipulating the steam valve, while tank level can be controlled by manipulating a valve on the outflow from the tank. Often, a controlled variable in one process is a load variable for another. For example, the temperature of the outlet stream from a heat exchanger will almost certainly affect other plant variables - otherwise, it would not be important enough to control.

The control problem

The relationship among controlled, manipulated and load variables qualifies the need for process control. The manipulated variable and the various load variables may either increase or decrease the controlled variable, depending on the design of the process. Changes in the controlled variable reflect the balance between the loads and the manipulated variable.

For the heat exchanger, increases in steam-valve opening, steam enthalpy, inlet temperature and ambient temperature tend to raise the product temperature, while it is lowered by increases in flowrate and exchanger fouling. The temperature responds to the net effect of these influences. If the positive influences are greater than the negative, the temperature will rise. If the reverse is true, the temperature will fall. If all the load variables were to remain constant, the steam valve could then be adjusted until the product temperature was constant at the desired value, and would remain there indefinitely.

Process control equipment is needed because these variables do not remain constant. For example, variations in inlet temperature and flowrate both upset product temperature, and require a different steam-valve position in order for water temperature to be maintained at the desired value. The job of the control system is to determine and continuously update this valve position as load conditions change.

Generally, the control problem is to determine the one value of the manipulated variable that establishes a balance among all the influences on the controlled variable and keeps the variable steady at a desired value. Other factors such as speed of response, shape of response, and operator interface are also important in designing control systems.

No matter how complicated, every control system solves this same basic problem, and for a given process and load conditions must arrive at the same result. The control problem can be solved in only two ways. For feedback systems, the control signal is based on the difference between the actual and reference-measurement values. For feedforward systems, the control signal is generated from values based on the various load variables as they affect the process.

Feedback systems

Feedback systems are more common than feedforward ones. The structure of a feedback loop is shown in Fig. 2. Here, the value of the controlled variable responds to the net effect of the loads and the manipulated variable. A sensor/transmitter measures the current value of the controlled variable and sends a signal to the feedback controller, where the signal is compared (by subtraction) to a reference value. The control function within the controller generates a signal, which positions a valve on the basis of the sign and magnitude of the difference between the measurement and the reference or setpoint values.

In the example for the heat exchanger, a temperature transmitter continuously generates a signal that represents the actual temperature of the hot water. At the controller, this signal is subtracted from an operator-set value that represents the desired temperature. If these values are the same, the current position of the steam valve is correct and the controller will not change its output. However, if the actual value is below the reference value, the controller will change its output in the direction that opens the steam valve and raises the actual temperature. Conversely, if the actual temperature is above the desired one, the controller will change its output in the direction that closes the steam valve, to lower the actual temperature.

Thus, a feedback controller solves the control problem through a trial-and-error procedure. Assume that a change in the load variables upsets the temperature, and a new valve position is required. The controller becomes aware of the upset when the imbalance between the loads and the manipulated variable begins to change the controlled variable. The controller immediately begins to make corrective changes in its output - even as it monitors the effect of these changes on the controlled variable. When the controller sees that its corrections have returned the controlled variable to the desired value (i.e., difference equals zero), it holds the output steady, continues to observe the controlled variable, and waits for the next upset.

Feedforward systems

While feedback control is reactive in nature and responds to the effect of an upset, feedforward schemes respond directly to upsets and, thus, offer improved control.

The block diagram of a feedforward-control scheme is shown in Fig. 3. Transmitters measure the values of the load variables, and a calculation unit computes the ...
each of which currespoclds t o a l&c: control-\)*tcm recc control signal for the existing load conditions 34
design philosophy, Frrdf!d sy~tcrns gcwr:ltc tflc conrrol reference value. In this way. changes in lo;tcl cc~rldiric~llr
Inside a feedback controller
Signal from Signal to Regardless of the hardware used for implementation.
Tcmperaturcrersor/tranuniner control rooin
control room *\ \
t
-the concept of feedback control remains the same. The
first feedback mechanisms were mechanically connected
directly to the procas and the manipulated variable.
When pneumatic and electronic transmission made cen-
Hot
Cold waler wrer
ual control rooms possible, pneumatic and electronic
controllers were developed.
The state of the an today is distributed control
through digital systems, and controllers now often exist
I ’
+ in software. Dig&d systems may have an extensive selec-
Condensate don of features such as automatic alarming. output
a. Pr- damps, and built-in linearization or signal compensa-
tion. However, none of these change the &sic function
of the feedback controUer+o solve the control
Manipulated variable
- problem.
COll~rOh?d
P r - L All feedback comrollen must have certain common
Lwd variables variable elements (Fig. 4). The feedbackconuol function always
*
has two inputs and one output One input will be the
b. Variables measurement signal from the uansmirter; the orher, the
reference value. For feedback conuollen, the reference
signal is c&xl the setpoint, which usually represents the
Heat exchangef~ represents a simple process Fig. 1 desired value of the measuremem
For simple loops, the reference signal may be entered
directly by the operator and is c&xl a “local” setpoim
In compliaced schemes, this signal can come from
cause a direct change in rhe conuol signal without another insu-ument and is defmed as a “remote” xc-
waiting for the controlled variable to be upset. point Often, the cornroller can accept both types of
In general, this technique is more complicated and serpoinu. and a rcmoc&cal switch is available for the
more expensive. It requires greater process undersrand- operator to select which one the controller will use.
ing than trial-and-error feedback. Therefore, feedfor- Within rhc conrroUer. measurement and setpoint val-
ward conuol is usually reserved for difficulr and critical ues are compared by subtraction. The difference is
applications. &led the error and is the input to the mechanism, drcuit
or algorithm that generates the output. Generally, this
response contains proponional. integral and derivative
(PID) componenu. although they may not aU be present
in every controller. Proponional or integral responds to
temperawe error, while derivative usually raponds directly IO mea-
suremem. The sum of the indiCdual responses forms
temperanrre rhe automatic control signal.

Startups and emergencies


For smrrup and emergency conditions, the controller
wiU also include a manual control-signal generator that
can be driven by the operator. When the ourpur comes
from the PID response generator, me controller is said
Cond&ate to be in ‘auromatic” When the output comes from the
. . Process and variables manual generator, the conuolIer is said to be in ‘man-
ual.” The procedure for switching between rhese two
outputs will range from fairly involved to viinuaUy crans-
parent, depending on the sophisrimcion of the conuol-
Ier. The important thing is not to ‘bump” the output
signal and cause an upset to the process.
In simple loops, this signal will direcdy position a
valve, while in morc~omplicated schemes, the signal will
be an input to another instrument Typically, the con-
troller will have an associated operator interface. AS a
R- minimum, this interface wiU display the setpoints. mea-
variable
-I
L J surement. current output and the remote/local and
b. Feedback Iooo automatidmanual stands.
Just as aII feedback controllers have certain elements
Feedbrrck control usw in common, so do Al feedback-conrrol loops share three
measurtment of controlled variable Fig. 2
unpomnt concepts: open VS. closed loop, positive vs.
ncgntive feedback, and oscillation. Let us now examine
in some detail the signitieance of these characteristics for
fedbac k loops.
Aefem-ca
.-1-
open vs. closed loop
Fig. 2 also illuscraces the first of rhese concepu. Once a
feedback controller is installed on a process and placed
in automatic, a closed loop is creared. The controller
output affects the measurement, and vice versa. This
closed 100~ creates the porsibiliry of control through
feedback.
Should this effect be broken in either direction, the
loop is said to b-e open, and feedback control no longer
exists. Several events can open a feedback loop:
r
4
Manipulated
variable c3ntmlled
a Placing the controller in manual. This causes rhe Raau L
variable
output to remain constant (unless changed by the opera-
tor) even if rhe measurement changes.
a Failure of the sensor or transmircer. This ends he
ability of the controller co obsene the controlled Feedforward control uses
vxiable. measurements of load variables Fiq. 3
l Saruration of the controller output at 0 or 100% of
scale. This ends the ability of the controller to influence
the process. reinforced the change in measurement. This is positive
l Failure of the valve actuator because of friction or feedback.
debris in the valve. For a feedback loop to be successful, it must have
When a control loop does not seem to be operating negative feedback. The controller must change irs out-
properly, the first thing CO check is whether or nor the put in the direction that opposes the change in measure-
loop is closed. Often, a great deal of time is wasted trying ment. Fig. 5b shows the same loop, except that the
to adjust a controller when the problem is elsewhere in conuoller has been set to increase-decrease action. The
the loop. controller then responds to increases in temperature by
closing the valve. A decrease in temperarure causes the
Positive vs. negative fdback controller to open the valve. These responses tend co
Connecting a controller to a process, as shown in Fig. drive the mezurement back toward the setpoint. Select-
2. creates a closed feedback loop. However, feedback can ing the proper control ation is as fundamental as mak-
be either positive or negative, and the difference is ing sure the loop is truly closed. The wrong choice
crucial to the loop’s performance. destroys control.
Every feedback controller will have a means of chang- The correct choice for feedback will depend on the
ing the controller action, which defines the direction of
the controller response co a change in the measurement
increase-increase (or, direct) action causes the controller
co increase ifs output in response to an increasing mea-
surement. Increase-decrease (or, reverse) action causes opratorinterface
I I
the controller to decrease its oucput when the mea-
surement increases. Choosing the wrong action will Local or
make control impossible. remote
setpoint signal
Fig. 5a shows a possible record of an ourput-rempera-
cure control loop installed on the hear exchanger of Fig.
2. The steam valve is set air-to-open (i.e., fail closed).
This means that an increasing control signal 41 open
the valve to increase steam flow. The controller action is control
set to increase-increase, which is incorrect. signal
:i. ~.t~
The measurement may be brought co the setpoinr
under manual control, but as soon as the controller is
placed in automatic, the loop becomes unstable. Any
small disturbance that increases the temperature will
also cause an increase in controller output. This opens
the valve. causing the temperature to increase fuher
and the valve to continue opening. The result is a
runaway temperacure. If a small disturbance caused the
temperature to drop, the controller would close the
valve, and the temperature would fall even more. In turn.
this would cause the valve co close even more. &sic elements of a feedback controller Fig. 4
In both cases, the response of the controller has
cause of the lags within the process, the outJet tempera-
ture does not respond immediately. In fact, it continues
LO move away from the setpoint. The controller then
continues to change its output until the measurement
turns around and begins to return to the serpoinr.
When the measurement reverses itself, so will the
controller output, but the effect of this reversal will also
be de!ayed. Later. the measurement may reverse a sec-
ond time and cause another reversal in the controller
output In turn. this causes another reversal in the
measurement. and so on. The result is an oscillation in
both the measurement and the controller ouput.
Thus. the combination of negative feedback and lags
T i m e - . Time -
in the process means that oscillation is the natural re-
a. Positive feedback causes instability sponse of a feedback control loop to an upset. The
characteristics of this oscillation are the primary means
for evaluating the performance of the control luop.
Specificaily, an instrument engineer will be interested in
the period and the damping ratio of the cycle.
Fig. 5c shows a typical oscillation. The period of this
cycle may be measured as the time (usually in minutes)
between any two analogous points, such as between two
positive or negative peaks. Fig. 5~ also shows another
oscillation that is steadily decaying to a constant signal.
The damping ratio measures the rate of decay.
;Uthough there are mathematical definitions of the
damping ratio, practically it may be measured as rhe ratio
Time - Ti.me -
of the desiations of any two successive peaks from the
b. Nqative f&back c.wies mbility estimated final or average value. These measurements
are usually taken from a record of the controlled variable
. - because it is often recorded. However. the same cycle
Camoing ratio -d/A
can be observed in the controller output, or in any
measurement directly affected by the control signal. For
example, if a record were kept of the steam flow to the
heat exchanger. ihe cycle would also appear rhere. Fre-
quently, other variables will provide a more sensiGe
represcnrarion of the C$ZS within a loop. and these will
Time- Time- allow more accurate evaluation of loop performance to
c. Orcill~ing ~ignalr be made.

Control actions affect performance Characteristics of the oscillation


of a closed feedback loop Fig. 5 The exact characteristics of the oscillation in a par-ticu-
iar loop will mainly depend on the adjustments to the
proponional, integral and derivative responses within
applicadon. For example. if tank level is controlled by the controller. Incorrect adjustments can make this pc’-
manipulating an air-to-open valve on the outflow, in- riod too long or too short. Even worse. they can make the
crease-increase action will be needed. >foking the same cycle grow larger instead of smaller.
control valve to the inflow requires increase-decrease For good control, the cycle in the measurement signal
action. Reversing the action of the valve to air-to-dose should steadily decay, and end with the measurement
(i.e. fail open) can reverse the required control action. returned to the setpoint. Simultaneously, the cycle in the
A controller taken out for maintenance might nol be controller output should also steadily decay, and end
set correctly when it is reinstalled. Sometimes. position- kith the output at the new value. This reestablishes
en on valves can reverse the response of the valves to a balance among the load variables and the manipulated
change in the control signal. The penalty for not rhink- variable.
ing this out is a control loop that dribes the measurement In fact, this oscillation represents the trial-and-error
to one of its range limits. search for the new solution to the control problem. The
controller is not aware of the load \ariablcs. Hence.
Oscillation when it sees the measurement begin to change. it I&S
5Vhile negative feedback is necessary for control, it new output values until it nxrows in on the 01ic 19Iuc
also leads to oscillation within the loop. Once again. let char returns the measurement to the sc.lIx)itlt.
us consider the temperature control loop in Fig. 2. L$‘hen If the controller in a particular loop rcspotrds to an
the measurement begins to move away from the set- upset with an oscillation in which each succcssivc pc;~k is
point. the controller begins to change its output. Be- one-fourth as large as the preceding one, the ttwp is z;&i
to have quarter-uave damping (i.e., B/A = l/4 in Fig.
2). Depending on the period. a Ic+p having quaner-
wave damping stabilizes fairly quickly folIowing an upset.
Often, this is taken as an indication of gLw>d control.
Determining proper controller adjustmcnti is somewhat
more complicared than achieving this one objective.
Nevenheless. quarter-crave damping may be used for a
rough evaluation of controller perform3nce.

Process characteristics
The existence of lags in the proces has a fundamental
effect on the performance of the feedback loop. Without
understanding the causes and characteristics of these
lags. it is impossible to evaluate which control modes
(propordonal, integsai, derivative) will he required, or
whether feedback control wilt be successful in any par&-
uhr applic;lrion. Basically, lags may be considered in two
categories: deddtime and capacity.

Deadtime I

A process that has essentially pure deadtime response Time- Time -


strp cflmga Cl/ding tiquh
is shown in Fig. 6a. A hopper valve deposio material on a a. Deadtime d&v
moving &IL A weight transmitter measures dre amount
of material. How dms the weight measurement respond
to changes in the control signal to the hopper valve?
As shown in Fig. 6a. a step change in the control signal
will immediately begin co deposit more material on the
Mr. This srep change will appear in the measurement
after a delay (deaddme) chat corresponds to the rime
necessary for the material co trdvei from tie hopper co
the sensor.
In general, deadtime is defined as the time delay
between a change in the control signal and the beginning
of its effect on the measurement. The shape of the
change in the control signal is not relevant Fig. 6a also
shows an oscillating control-signal input delayed by the
same time interval.
Because deadrime is often caused by the rime required
co move material from one point to another. it may be
referred co as rransporc lag or distance/velocity lag. The
actual time depends on the distance traveled and the
velocity of the material.
Delay in the process response can be created in other
whys. The performance of mixers (i.e.. agitators) has a
large intlucnce on the deadrime in loops monitoring
composition, such as pi-i, density, or oxidation-reduction Time - Time -
potential. The sampling operation of a chromatic ana-
lyzer will Jlso create delay in the perceived measurement.
And. significandy, a combination of a number of capac-
icy-lag elements will also create deadtime.
From a control point of view, what is imponant is the
length of the delay. Deadtime represents an interval
during which the controller has no information about
the effect of a control action already taken.
Deadrime does not slow down the rite at which the
measurement can change. Except for rhe delay. the
measurement changes at rhe same race 3s does the
control signal. Still, the longer the delay. the more 1
difficult it will be co control. AS will be shown, the Time -
amount of desdtime in the prcxess h;rs a strong et-fecr on c. Time constant
the controller adjustments and on the performance rhac
Process characteristia affect type
rim be expected from the Icwlp. of control mode and feedback
Because deAimc interferes with gootl control, every
Tima-

Capacities in series enlarge the delay in response time whenever a change in the input signal 0-t-s Fig. 7

attempt should be made to reduce this delay by properly 2. The capacity inhibit the rare at which the measure-
locating transmitters. specifying sufficient mixing, de- ment can change.
signing proper tankage. and minimizing uansmission Because level is a measure of the liquid stored in the
lags. tank, and because the rate of accumulation (positive or
negative) responds LO the difference krwzen inflow and
Capacity and its effects outflow, level QnnoL change insrandy even if the control
Pure deadtime processes are rare. and vinually every signal does. The bigger the rank in comparison uirh che
conrrot loop will include. and ~21 be dominated by, flows, the sIower rfie level will change. Therefore, rhe
capadty elements. capacity element in the process rends 10 attenuate distur-
.A capacity element is rbar pan of the process system bances. This makes conrrol easier. whereas deadrime
where material or energy can accumulate. The tank makes control more difflculr.
shown in Fig. 6b represents a single apacicy (material The size of a Qpacity is measured by its time constant
storage). Flow inro the Lank is manipulated to affect the Fig. 6c shows. in more derail, the level response of Fig.
Icvel: flow out of rhe tank is the load variable. Initially. 6b. Since the two flows (in Bnd out) approach equality
the level remains consLant because inflow and outflow as)mprotically, they never quile become equal-at least
are equal. How does the response of this process differ in theory. The level never stops changing and. therefore.
from that of a deadtime element? ihe response cannot be measured by the time to
Let us assume that Lhe valve and flow respond in- completion.
stantly IO changes in the control signal. When a step Instead. the response is quantified by a rime constant
change occurs in tis signal, the difference between in- that is defined as the time required to complete 63.2% of
flow and oufflow will immediately cause an increase in the total response. (This number is not arbirrary. IL hat
level. However, as level increases, the gradually increas- significance in Lerms of lhe differential equations that
ing pressure across the drain valve raises the outflow. model the process.) .I\s a first approsimarion. Lhe timt
This tends LO bring the IWO flows back into balance, with consran; of a capacity element will be roughly equal to iti
the net result that level rises more rapidly at first, then residence time, which is defined a~ rhe volume dividec
more slowly. and finally stops as rhe flows become equal. by the throughput (in consistent unit). Thus. if the ~4
The other vessel shown in Fig. 6b also rcpresenu a in Fig. 6b holds 1,000 gal. and flow through the tank i
single capacity (energy storage). Temperature responds 100 gpm. the residence time becomes 1,OOO~lOO = It
to the accumulation of energy in a prtxcss just as level min.
responds IO the accumulation of marcrial. The response Fig. 6b also shows the response of a capacity elemen
of the temperature to a slep change in heat input will be to a cycling control signal. If rflc signal cycles rhe inffok
rhe same as the response of the IcveI to a srcp change in LIIC outflow will approach rheavcrage value of the inffoa
flow input The level will rise while the inflow is grcxcr than th
The responses of these capacity rlcments differ from outflow; and it will fall while the infl(lw is Icss th:m rh
that of the deadtime element in two significant ways: OUtflOW. in shon. for a cycling input. lhc mcxurcmcl
1. No delay occurs before the measurement begins to signal from a capacity clcmem will ;rlso cycle 21 the SUEI
change-i.e.,.no deadtime is associated G.h a single- period.
capxiry element. The variation in the measurcmcm signal. in contpxr
-
: .-

i
2 :,

s loo ~-L---i :
. . =: _
TimS-
. . . . . . .--._ ..-.. ._. . _ .
rwumic &-I.
Go - ,%A.. - -

ic 1 Ak
C
Open-loop responsa of heat exchanger /
AT7v- - - - :
to a step change in controller output f i g . 8

son with the variation in the control signal. depends


strongly on the period. If the control signal cycles very
rapidly (with a short period). the swing in the level will be L
very small. Conversely, if the same variation in the l%lW-
control signal occurs at a much longer period. the swing ovlwnk pi4
in the level will be much greater.
Dynamic eiemena have both
hiodeling the process gain and phase properties Fig9
Single-capacity and pure deadcime processes exist
only in theory. Any real processes will include a number
of each of these dynamic elements. For example. the heat The open-loop response of a heat exchanger to a step
exchanger. shown in Fig. la, includes a deadtime ass&- change in the controller output is shown in Fig. 8.
aced with the time it takes for the hoc water to Row from InitiaIly. the temperature remains constant but lacer
the exchanger to the sensor. In addition, the identifiable begins co rise and approaches a new steady-stare value.
capacities include: Although a process may actually be an intricate collec-
m Volume of the air actuator for the controi valve. tion of deadrime and capacity element, it GUI usually bc
m Volume of the exchanger shellside: represented by a deadrime-plus-capacity model in order
l Energy stored in the rubcs. to design the feedback loop. The parameters for this
a Energy stored in the wxer in the tubes. model may be taken as the apparent deadtime and the
l Energy stored in the thermowell and sensor. apparent time constants.
If the controls are pneumatic. an effective deadtime While this representation may be obvious to the de-
and capacity are also associated with each transmission signer, the controller cannot tell the difference. Since
line. This is a typical situation--one or two identifiable deadtime makes control difftcult while capacity makes it
deadtimes, and a number of large and small capacities. easier, an estimate of the difficulty of control can be
Deadtimes in series are additive-a I-min delay fol- made by olculating the ratio of the apparent deadtime
lowed by a 5-min delay combine to form a 3-min delay. to the apparent time conscam. This ratio. &r,, will
However, the combined effect of a number of capacities also have a strong effect on the control adjustments.
in series is not so obvious. Fig. 7 shows a series of three The behavior of feedback control loops can be under-
capacities having an equal time constanc, rrc, along St& from practical or theoretical points of view. Al-
with the responses at various points to a srep input. The though we have thus far emphasized the practical. under-
step input appears at Point 1. Point 2 shows the response standing the two mathematical concepts of gain and
of a single capacity to a step input, as shown in Fig. 6-c. phase is essential to a fundamental knowledge of feed-
Points 3 and 4 show the effect of subsequent capacities. back control.
The net effect is that a sequence of capacities looks (to
the controller) like the combination of a deadtime delay. Gain and phase
followed by a singte capacity with a rime constant. 71. that An element from a feedback control loop is repre-
is larger than the time constant of the individual sented in Fig. 9. This element could be the process. the
Capacirics. valve. the transmitter or the controller. Each of chest ek
menis has an input and an output. The first parameter, Beginning at any point in the loop, let us consider ti
gain. describes the amount of change in the output that effects on that signal as it travels once around the +
will be caused by a given change in the input Both The signal is made larger or smaller as it passes thnwgh
steady-state and dynamic gains must be considered. For each element. according to the gain of that efemenr .\I
a step input, the output of the element &ins to change the same time, the signal will be somewhat displaced
and approaches a new value. The steady-state gain, G,. is according to the magnitude of the phase angk ax&ted
defined as the ratio of the fina.l change in the output to with that element
change in the input, or: For the cycle to continue, the total effm of these
displacements must equal 360 deg., so that the signal
G, = 4(0Uf)JA(ITL) (1) returns to the beginning point. Therefore: a feedback
It is important to keep track of the unirs for gain. For control loop will cycle at that period which makes the
example, if the steady-state gain of the valve in the sum of the phase angles equal LO 360 deg.
temperature loop were being determined, the output More imponanrly, the net effect on the size of the
would be in units of steam flow, while the input would be signal depends on the product of the individual gairs. or
percentage. Thus, if a 10% change in controller output the open-loop gain, GoL:
caused a change of 200 lb/h in steam flow, the steady-
sLate gain becomes:
where (Go), is the dynamic gain of the controller, (Go,& is
c, = 200/I 0 = 20 (lb/h)/% (2) the dynamic gain of the valve, (Go), is t h e d y -
However, the signals traveling around a control loop namic gain of the process, and (Go), is tie dynamic gain
usually vary cyclically. The sensitivity of an element to a of the transmitter.
cycling input is measured by its dynamic gain. \then the The dimensional units for the individual gains ml?u be
input cycles. the output n-ill also cycle at the same period specified in such a way so that they cancel when the
(see Fig. 6a and 6b). The dynamic gain may be computed open-loop gain is calculated from Eq. (6). If &at gain is
as the ratio of the size of the output swing. A&. to the greater than 1.0, the signal wi!.f arrive at the tx@noing
size of the input sing, A,,, or: larger than when it started. .;is it continues to travel
around the loop. it will continue to grow. At any one
GD = b.J~,, (3) point in the loop such as at the measurement input to the
For the heat exchanger. let us suppose that a 200 lb% controller, the signal r;iu appear as an ever-increzing
variation in steam flow caused a 20°F variation in outlet oscillation. Therefore, a feedback control Icop *ill be
temperature. The dynamic gain for this situation stable only when the product of the dynamic gains in the
becomes: loop is less than 1.0.
Adjustments to proponional, integral and derivatie
CD = 2O=F/200 lb% = O.l”Fl(lbh) (4) responses affect the gain and phase paramc:ers of the
The second parameter of the response of an element controller and, in turn, the behavior of the entire loop.
to a cycling input is the phase angle, which is illusuatcd These concepts bill be explored in subsequent anila of
in Fig. 9. Because of the lags (i.e., delays) *ithin the Lhis s.eria.
element, the peak of the output does not coincide with
the peak of the input. The phase angle, 4. of an element summary
measures this displacement. One complete cycle in any The purpose of every control loop is to find the one
periochc signal is considered to bc composed of 360 value for the control signal that holds the measurerrsnt
degrees. If the peak of the output cycle occurs one- at the setpoint for the existing load conditions. A feed-
quarter of the way through the input cycle, the phase back or feedfomard approach may be used. In this
angle is: article, we have concentrated on feedback techniques In
a later article. we will cover the feedforward approach.
4 = (360)(- I/4) = -90” (3 The next anicle in Lhk CE REFRESHER will appear in
In Eq. (5). the negative sign indicates that the peak of the Aug. 8 issue, and will analyze the actions and MC-
tie output occurs after the peak of the input. This is tions for basic feedbacktontrol modes.
termed a phase lag. It is also possible for the output peak stcxn Lktldw. E&r
to occur &fore the input peak; and this is called a phase
lead.
The author
Closed-loop applications
The parameters of gain and phase are fundamental for
understanding the behavior of a feedback loop. They are
especially impohanr in the study of controller tuning
because both are functions of the period of the input
signal.
\\‘hen a feedback control loop is upset by a change in
either r-he load conditions or setpoint. it will begin to
oscillate at some period characteristic of that loop. Every
element in that loop sees an input signal varying at that
paiod.
Feedback control modes
Control modes are specific responses to a change in the
measured ixiable or error signal. The analysis of control
modes and their combinations quill show how to improve the
stability and speed of response for closed feedback loops.

LetAs 31. Gordon. The Fmbwo Co.”


--

0 UllCkrSI3fltlill~ thC intii\iduJ mt%fcj in a controller is rified in different uniu. The derivative response may- be
essential to Ncccsifully 3pplv feedback conmA These genenred in several ways-and vaqing degrees of inrer-
mdes involve: on-off. propc,nion&onlc. ime;@. and action are possible among the proponional, intepl and
derivative actions. Each posjih[e combinxion represems derixxive modes.
a tradeoff bcrwen cost and performance. For specific situations. many special features have
.A feedback ccwtroller must be connecwd in a closed been added to improve conrrol. such as erremal integral
loop. and apprtrpriare control action se!med. to esrab- feedback. batch switches, tracking. and ourpur biasing.
tish negative fcttlhack. C&en rhea ej+enti. the con- In the future, the flexibility inherenr in digital feedback-
uoller can whe rhe conirol prohlcm hv a tA-and-error algorithms will increase rhe special&&on and variety of
search for the output rhar establishes a balance among aU feedback controllers. Sevenheless, control s!srems will
the influences on the conrrolltd ~ariahle. still be built on the foundation prolided by the basic
Selecting the proper crmtrrJ JcricJn estahiishes nega- resporws.
tive feedback hv cltfinin:: rhr tlirccricrn of rhc controller .A controller is a nonthinking device-its respc
response. The ;wsr ohjecrihc i% UI c!cwrmine the ma+- huilr in. Ir is up co the designer to selexr those a
tude of this response. ate to rhe application. Specifying rhe wrong corn
of control modes leads to poor system performance.
Control modes increases ifie complexity of rhe tuning prohlem. and may
.I controller in ;! feedback k*~p is in J difficult pctsiCic,n. add unnecessary CM.
L’nprediclahlc forces can influrnw rhe mezuremenr ir
is trying to corirrtrl. E\en ~\.c~rse. the tltnamic characteris- on&f control
tics of the rt>t of rhe I(z)p (bill delay and d&ton rhe On+ff or rwo-position response is the simplesr form
output variations used hs the cr~nrrrAx to reduce error. of feedback control loop. Fig. 1 show the performance
In this environment. ‘ir is misleading 10 betiese char of thij loop for a process in which liquid is being heated.
control is imiwwtl on the prrKess. Initead. the relacion- .\n on-off control function has only two pssible out-
ship hetwt~n ;I ccmtroller 2nd the prwess is incentive. puu (on. 100’3: or off. 0%). and only considers the sign
Here, the si/c. &ape and race of rhe variations in the of he error. In rhe example. the controller closes the
controller’s (l(l~puc dre crucial ;LS rhe controller restores fuel \xive ithen rhe measurement rises above the set-
the nIeasurcIIIcII( to th e setpoint value foiloc\-inq an point (Fig. lb). Because of deadtime andAx lags in the
upset. process, the temperature continues to rise before revers-
.A conrrol lntrle is a panicular controller respnse to a ing and mosing roriard the setpoint. $%-hen the tempen-
change in tht mr;lsuremtnt or crrctr. The four basic rure falls beloc~ the setpoinr, Ihe conrroller opens the
responses arc: fuel laive. Deadtime and/or laqs in the prclcess apin
crt-arc a delay before the temperdrure hymns CO rise. .A> ir
crows the itrpc,inr. rhe controller agam shurs off fuel
tbh. and rhc r-y& ‘rtpG%
C:clinq L; rhc normal condirion for a lw~p under on-
r,lf’ c~,ncrc,l. This limiwrion a&es hecause r*ith only tw)
[“AhIt CtUI[JULi the comr&r is unable 10 solve the
cr,nrrrJ prr,hlem esac+. The ~~tpur is tither coo high or
VNJ lore I~J establish a balance among a11 the influtnces
rjn re-4 wmptnture. .A I~Nl’i o u r p u r iupplicc ttu,

’ :i.:::,\:. : ‘.r.:‘,;:.>:..r. \l.,. :...; ., .: 70


ttc~x~fds on che IcnLqh oi’lhe pu-krt and 111~ T;IIC 21 \\llic 11
tl~c mc’asuremenl changes. Since capacirv inIlit)il\ IIIC;I-
SUI~CI~CIN change, chc amplitude is inversely pr~)l~~riol~-
,, %knoid valve al lo Ihe rime COnsLan(. -1,, ol’chc process. [See 1’;1r1 1liw ;I
di\cubsion of period. amplitude and dcadtimc.]
Fuel supply On-off control should be ;rpplied co chose siclwciorls
rvhcrc three conditions arc prcsenc:
a. Procas I. Precise control must not bc required. hxauw clw
nicdsurement will conscantly Cycle. .
9. Deadrime must be modencc t0 prewxr cwwivc
~~II~c wear because of 100 short a period.
3. The ratio T&T! must be small 10 prevent c(m) Lirgv
an amplitude in the measuremcnl c~clc.
It’hcn these conditions apply. chc ;implicir~ ;~ntl (X’OII-
only of on-off control ol’fcr signiiir:~nl ;I~v;IIII;I~~~~.
X \.;1riation of on-off control th;~t rctlctCcs w711’ (it, lhc
Time -
b. Two-position conrrol fiii;ll 0Ixxicor. arid that ni2y bc tlcx rilccd 3s clil‘l~rct~li;il-
~;I[J OI’ pp-XJ~OII Control. is SIICMII itI Fig. Ic. Irr~~c*;~tl 01‘
changing the output in both dirmions at a single prim.
the conwoi ftinction may take action only ac sycuificd
high and low limiu. Xs long as the measurement rcmairts
within the gap. the controller holds the last output state.
AS Fig. IC illustrates, the effect of this variation is IO
extend the period, and to increase the amplitude.
Often, rhe size of rhe gap will be adjusrablc and need
T i m e -
not be s~mmerrical. hence some acceptable compromise
c. Gap-action conrrol can be achieved. Typically. an on-off conrroller will hate
a very small gap designed into its mechanism.
On-off response is the simplest
type of faedback control Fig. 1 Controller rtzsponse: open vs. closed loop
As was disCussed in Part 1 of this series. feecibacl
control requires a closed loop. The closed-loop respwc
much hear. causing the temperature to rise. A 0% output is co a change in the selpoinc or in the measurement
supplies too little heat, allowing rhe temperature to fall. Caused by a load upser. The simplicirv of rhc o~df
Segacive feedback causes +ing betxeen rhe two function allow ic co be presented in terms of irs ~lowd-
conditions. IWJP response. However. the inwraction bc-trww cllc
controller and the process in this configuration obscures
Applying hw-position control the prop&es of the proportional. incegml and dcri\;~-
The principal disadvanrage of on-off control is con- rive control-nw-l:~.
stant cycling: the principal advantage is low cost. Be- .A concrollcr is isolated from a prtKess in or&r IO
Cause of its simplicity. on-off control will be rhe least scud! its open-ltwrp rcsponws (Fig. 2). Hcrc. IIIC (.OIICI.(I~-
expensive approach co feedback control. I C does nor even ler rcceivcs an artificial measurcmcnc and ;I scljm+illl.
require a controller; the same function cxn be creared The diffcrclIcr bctwecn rhesc wlui~~ gvncrxcs ;I,) c~~rtrr
s\ich alarms, contacts. digiral outputs, and relays. signal. and the controller ourpur is r~crcls rwwlctl. III
.I\ccepcabiliry of on-off control depends on the charac- this configuration. the effect of a clungc ‘in IIIC cuntrcrl-
ceristics of the cycle in the measurement If the ampli- ler’s output does not appear ac the measurement puillc
rude of the wing is coo large, unacceptable variations in is.here it would Cause further change in the oucpuc. An>
product quality. or upsets co ocher process units, ma? desired measurement or secpoim Change may be up-
occur. If the period of rhe cycle is COO shon. the wear on plied. and the controller’s response obsened on rhe
the v;~Ive and/or upsets 10 the fuel distribution system recorder.
(Fig. 12) may be unacceprablc.
l’hc period of chc cycle depends on how long ic takes
for the mcasuremenc to turn around after a change in the
valve psilion. Thus. the period is directly proponional
CO dcadtime. Tag. If chc dcadtime vere reduced co zero,
the measurement would inscantly reverse itself 1,+-h each
change in controller output. Since the ourput reverses
each time the measurement crosses the setpoint. both
the period and the amplitude would be reduced to zero.
Control wuld be very good, bu: the valve wear would be
excessi\-e and unacceprablc. Proportional control
Amplitude of the Cycle deFnds on how much the Proportional control is based on chc priltc’il)lv ~II;II 111~.
measurcmcnt changes before it reverses. In cum. this six of [he corlcrollcr rcspw~x sl~oc~ltl IK ~WC~I(II lit111.11 IO
Fig. J is .I pr.lphicat representation of propotional
ackm. ~e.$Wtk~ I)!’ hot+. proponiond xrion is created
(pneumxic. clcirronic or cli$ral). his effect ma)r be
irna+ed as a double-ended pointer. pivoted in rhe Artificial
measurement
middle [for a proportional band = LOOS]. and moving
along an error ~;lle and an output scJe. Changes in
either (he mewremenr or the setpoint create changes in tsolatinq the controller allow5
the error. tr.ltich drives the lefchand end of the pointer. study of ib oper&op response fig. 2
Tk righthand cnc! indicates the corresponding ourpur.
:\s S~OI*X in I:i<. 3. the output Kale descriks increase-
fk~~~sc (lII)t .I( [ion. (Ihanging 10 increse-increase ac-
licm simply I.CXV~\CI rhe ourpur scale. Dy-IEUSli c properties of proportional action
Fig. 3 also illusrrares two propenies of proportional
Measuring proportional action action that have the most influence in a dosed loop. Pro-
Fig. J illu~lrxces 5everaI impoant concepts about potional acrion is borh immediate and specific.
proportionA .rcrion-the first of which is propotion. 1. The linkage &ween the error and the ourpur. rep
band. PB. or gain. G. These adjusmble panmeters define resented by he pointer. means chat the ourput change
HOW stronglv rhr controller reacts to changes in the occurs simultaneously with error change. So deL)s
error. The I(Kxiun of rhe pivot. s sh0tb.n in Fig. 3, fives occur in rhe proponional response.
the amount of output change for a given error change. 2. Each value of the error for a given proportional
With the pivot in the middle, a IO(Jc( change in mezure- band generates a unique value of he output. The pro-
mrnc (from NC, ,below the serpoinr to 50% above it) will porcional-response generator is incapable of any ocher
cause the output to change from O to 100%. Moving the combination. This one-co-one relationship bzween the
pivot to the left can reduce the metiuremenr change error and the output places severe limiwrions on the
required for a IOOS ourpuc ch;n,qe [o 50%. i.e., from closed-loop performance of proportional-only conry’
!?3% below co 57 ;rbove the +ztpGnt. In rhe came way. as 41 be de&bed shoddy.
moving the pivot co the ri,<ht will increase the percent Fig. 4 presenu another graphical represenation ot
change in error required for full-<aIre trat-el. proponional xrion. Each value of the proportional band
The proportional hand, Pi?. is defined as the percent
change in me~surrmrnt (at a constsnt serpoint) required
t0 cause IOUS output change. Gain. C. is defined as the . -_
trio ofrhe oucpur change U-I error chanqe. Both quanrify +s ; . . .. 0
the same thing--the sensiriGry of [he controller to /’
+2s ‘\, 1’ . 25
changes in rhc error. and each can be expressed in terms 0 ‘1 /’
so
of the other: /’ \\
-25 /’ /’ ‘-A 75
r; = I00~‘l;P5 (1) -50 ’ l@J T
i-3 2
‘1‘11~ rcI;lriothip of Eq. (1) can dso k expressed in PS - 100%
2
IhC l~,l-nl 01 ;I Ill;trCflccl scale: Y

Prop0rti00alband.PE.0
2 25 50 1W 2al x0
I 1 I I !
I 8 I I
;o 4 2.0 1.0 0.5 0.2
Gain. G

Proportionat action relates change


in output to change in error Fig. 3
band cquiils WX. l’hcn. fix incrcasc-tlccrc:i.u2 ;~CGOII:
(1111 = (60 - 40)( IOO/jO) + 50 = WB
hcrc (I,,, = oulpul for increase-dccrcasc action.
incrcasc-increase acrion is achicvcd by rcvcrsing IIK
calculation of ihe error for Eq. (1) withill the coll[ruilcr.
Then:
or, = (40 - 60)( 1~30150) + 50 = 10%
The Strdigllkhe reiaknship between error and OLII-
put identifies a proportional-only comrc)ller ;Is a iinca
or coixlanr-gain device. In rhis reprcscllr;ltioil. [llr c\K‘-
l 25 +50 cific character of proportional action IWXII~ III;II IIIC
Error, c; or offser. CO, % coordinates of error and ourput must idcnlii\ 2 /xAll
29 - pr~poniooal Sand
failing on tlic Fven prop)rlic,n;il-b;111(1 line. ;uttl lltt III)-
Relationships between error and output eraling poiiic Ior rllr conIroiicr c3,1 only move ;11011< ll1i.s
for various proportional bands and action Fig. 4 line.
AS the proportional hnlld is decrcascd, p~x~~~~nion~tl
action is concemrated into a narrow’er band around the
defmcs a specific relationship between error. c, and setpoint. From a gain poinl-of-vieh*. rhe same change in
outpur, 0. which may be expressed as: error causes larger changes in output In the link. the
0 = f( 1oo)IPB i 50% proportional band equals zero @in equals infinity). and
(3 the smallest error causes the output to go full scale. On-
where 0 is ou~pur. 8; e is error. %; and PB is proponional off conrrol, then. becomes a limiting case of propor-
band, ‘7%. tional-only control. Xr the other extreme, wile11 the pro-
For example, assume rhar setpoinr is at 60% of scale, portional band equals infinity (gain equals zero). the con-
measurement at 40% of scale. and that propotional rroller simply does nor respond to changes in error.

Applying proportiond-only control


A level process under proporrional-only control is
shown in Fig. 5. h.here [he outflow is the load on the
process. To conrrol the level. rhe controller musi bal-
ance the ourflo~ by manipulating inflo\\.. This requires
increase-decrease action. Borh flops vary from 0 IO
lOO%, the serpoint of the controller is 305. and ihe
proporrional baud equals IOOR,
As a srarring point. assume that the load equals 305
and rhar the le\‘el is at [he sctpoinr. Tlwn. the ~on~r~~ilc~~
output Will also be 50 2. illflow \t.ili cqu;il oulll(~\\.. a11tl
level Gil rcmsiii coiisI:~n~.
Sexl, 3s5umc ;III upsc’t in llic I~WIII of ;I I0;iti tlc~lr.l~c
10 25%. i.e.. cn~l.Ilo\~ i s rcduccti. i-lo\\. Will IIIc I~nll)
respond 10 rhis itp~t?
Since the outftoc~ is less rhan rl\c illll~l\\~ ir\iklii!. Ir\~cl
$4 begin IO rise. and the error \xili begin 10 go IIC~X~W.
By referring to the 1000-PB line in Fig. 4, it ttill be SCCII
Time - that controller output (for increasedecrease action) 4
a. Offset varies with load
.
simultaneously begin to decrease as the operating point
moves totcard rhc upper lefthand corner of rhe chart.
This acGon gradualk restricts inflo\\. until ir equals ‘755
Gt.hen ~IIC level hxs r’isen 10 757c (Fig. 5,). Then. itlfIo\i
equals ou[flot\~. xitl rl~c lcvcl \\ill rcrlk consKmt.

Time -
b . Effeca of narrowing the proponional band

Level process under


proportional-only contrd ‘Fig. 5
0 = t-f IWIPB) f 5 Pa) L
Time-
f” = (PSI lOO)(O - B) (W a Integral action responds to sign. size and duration of error
: .-. . . .
‘I’hs. the purpose of an adjusuble bias becomes clear. c
By changing the bias on rhe proporriona! response CO ----- s&m
41 G
e(!ua! [he rcc!uiretI outpur. he measurement can be - 1
___-- . -

rcturnct! to ihc .wfpc)inr. This adjusunenr is often called


“~ii;ifiu:~l rc\vl.”
:\~suwitq I!W hi.ls remains fixed x 30%. the offset for
;L rcc!uirctl IWI!J~; is also seen to vary stirh the propoc-
tional halld. Kchxring to Fig. 4. if the loading conditions Time -
require a 757 o u t p u t a n d [he propotiona! band is b. Integral time dcterminci rite of response
%W%. offset will he jOs7c. Reducing the propotional
band to 505 reduces the required offset to 12%%.
However. rctlucing [he proponional band also increases
the gain ofrhr controller and reduces rhe damping in the
closed-loop response.
Fig. 5b sho\\.s the effect of narrorAng the proponiona!
band on rhr clcjsed-loop response to a load upset:
l Case A-The controller does nor respond. The mea-
surement falls co a new steady-jute value.
l Case fl-The prnporrional response is too weak.
leading co rxccssi\e ot’fser.
m Case C-The proporrionsl band is correct. The
response of rhe cOnlrfJ!!er is just strong enough CO cause c fnteqd action shih the bias to balance the load
quarrer-wave clamping.
l Case D-The proponional hand is too narrow. The Integf,al atiion improves the control response
overreaction causes excessive s;*ing in the me=urementq L
which takes too lung IO even out.
It’rhe prupor~iona! band is reduced coo much. the gain
in the contrc&x will become high enough CO make the proportional action. integral action also responds 10 the
open-loop g;lirl greater <greater rhan I. lnsread of decay error. However, inregm! action is !xsed on the principle
ins. rht cycle tijr lath the mcasuremenc and rhe control- rhat the response should be proponiona! 10 both the size
Icr 0uqx1t will grow rlnrii the valve cvcles between ifs and durarion of the error.
liiiiils. ;IS iii ~rcl-c~lt’cotltrtt!. The open-loop response in Fig. 6a shows how inregrz!
For cvcry prc~css under proportional-only control, action is related co the error. Initially, while rhe error
OIIC prtict;!;tr proportional hand (i.e.. gain) creates rhe equals zero. the output remains consmm a~ a value char
ttcsirctt c!(Jw!-!wp restnjrix. The exact value wi!! de- depends on the history of the et-t-or. Errors in the mea-
!Jw~! 011 rtw orher elements in the loop. each having surement will produce the following:
indivitIu;l! ~qins. !n genera!. where process gains are low m Point A-.-I constant error appears. The integral
IWC;:LI~C of ;I WU~! ;&T, mcio. the required proportion- responds by driving rhe output at a constant rxe. pro-
a! lc~nt! will ;I!SO be low. Once wnrd. hwever.offsec will portional co rhe size of the error. as long as the error
v;lry c**ith the !(Jad on the prtwess. as in Fig. ja. remains constant.
!‘rop~rtiona! control is a major impwemcnr over on- m Point B-The siLe of the error increases. The inte-
oft’ coIicro{ l~~ausg ot’ its ability to \nhiiiLe the IcMc,~. IIS q-al responds by tlri\-inq the output I[ a faster rate.
ni:lin (!is;lc!v:mr~~e i s tt~e iwvirahlc ot’f~. lVhcre ihe l Point C-. The \ign of the error changes. ‘I’he inrr-
!0;1t!s are filirly constant anti the rquired pro!xAonal qal responds by driving rhe oucpur in the oppwire
II;& is narrow. c,ffscc will not tx I prcjhlcm. ‘The WC- direction.
pc)illr can !K. ;~djus[cc! urjtil [hc ntcasurcmcnt is ;IC the m Poirl[ D-The error returns co zero. The integral
ticSir& yJ!llc. ‘[‘lie .r~(!)oill( iS (tlcn ilo hgcr rtlc &\ircd acticln stops at the existing cmcput value.
Irl~JsL,rc(nctI~ v;iluc lruc Grn!J!y ;I rcltirellcc IiJr propor- a Point E-The error incrcascs a[ a constant MC. The
lion;ll xtioci. inrqral rqxtntls tjy clrivin# the oupur ;ic XI cvcr-ir*
crcaGn# rxe.
Integral action l Poirlt F-. rhc error rcwrns to /c’ro. ‘I’tic iritq:
action cc’;~\cs a( rtiul ouI!~ut.
‘I’ticx rc’sp(ww\ il!ii~rr;irc rllc nio\( 4grlific::trJf !I~‘o!J-
eny of integral action. i\‘hereas prc~por~ional acrion tics
the output ~0 the measurement through rhe error, inre-
gral acrion can achieve any output value-stopping only
when rhe error is zero. This.is the propcrry that enables
integral action IO eliminate offwt. Integral acrion is only
satisfied when the measuremem has rewrned 10 the
Points A e c 0 E

5e~poiix As long as an error exists. integral action will


drive the output in the direction that reduces error.
The open-loop response in Fig. 6b show how propor-
tional and integral actions are combined in a controller.
Initially. the output is constant because the error is zero.
When a step change in the error appears, a simultaneous
step change occurs in the output because of proportional
action (see Fig. 3). The size of this response depends on
the proportional band. AC the same rime. Ihe integral
action &gins to drive the ourput. as shown in Fig. 6a.
For a consent error, the adjustment to integral action
changes the rate at which the ol~tpu~ is driven. This rate
is quantified in terms of the time required for the change
in output (due to integral aaion) 10 equal or repeat the
response caused by proponional action.
Some instrument manufacturers use dimensional
units of minuteslrepeat. referred to as integral time.
Others use units of repeadminute. referred IO as inre-
gral gain. Each is simply the inverse of the ocher, as Derivative action responds
shown in the chart: to rate of change fip. 7
r
_ .

lntegnl time, I,. min/repaf Len-n as a funtion of rhe error. U’hen rhe response is
0.02 0.1 a5I 1.0 5I 10 20 50 complere. rhe bias term has increased to 75%. and the
I I I
I I 1 1 1 I 1 I proponional term has returned to zero. The i5% bias
c4 10 2 1.0 0.2 0.1 0.04 0.02
means that the proportional band has shifted so rhar rhe
Intrgral gain, I,, rcpeadmin
range of proportional action extends from 10% beloc~ CO
30% above the setpoint. Thus, integral action continu-
ously performs the manual-reset function, described
Increasing rhe inlegral time, or luweriny tile integral earlier.
gain. reduces the strength of the integral action. The ability of integral acrion LO eliminate offset is rev
ad\.anrageous. and integral action is almost alw;tys speci-
A p p l y i n g integfd action fied for feedback control. However. this acrion dtrcs haw
The combination of proponional-plus-integral action a significant tlk~dvantagc: To create its grxlu~~l rc-
can also k expressed in equation form: sponse. a capacity-like lag is builr imo the controller.
This causes 2 phase lag across llic co~itrollcr alltl Icl~~l~-
ens tie pcri(KI of oscillalioci of lhc Itn~p. as a fur~c.iic~~~ (11.
ihe relative conrribution o f propw-tiou~l ancl intc.~~~l
U’hen Eq. (1) is compared IO Eq. (3a). which describes actions.
a proportionaknly controller, rhe only difference is in T!+cally. the period of oscillation for a loop under a
the bias term. When propotional-only control is limired properly tuned proportional-plus-integral controller kill
by a fised bias, integral action (Eq. (-I)] uses the integral be 50% longer than if the controller were proponional-
of the error to adjust the bias-stopping when the error only. For relatively fast Imps such as flow conrrol. this
equz~ls zero. will not be significant. However. for slower loops, exren-
Fig. 6c is a represcnt;ttion of how in;cgrdl action sion of [he pcritd can bc a serious limitation. For loops
eliminxes offset. following a load upset. Iiiilially. al 50% where the esacl VAX of the mcasurcmcnr is not critical
load. a 50% output holds the measurcnw~~ at .sctpoint. (2s in Icwl corltrcA). the shorter period of a prc+xxrion;ll-
In rhe steady state, this is also rhc value of lhc wkble only corilrollcr cw be an atl\;lnl;ige.
bids, since the error equals tcro. 'The C~Jllil'Okr has ;1 12x2 IxoIn~rlicm;4l xlion. illre~:‘;~l 3clioll itwxxscs III<*
40% proponjonal band. The 50% bias indicates ihal 111~ K;Lin c~f’lllc corltitrllcr. ‘liw~ IIIII~I trl’ciilicr cxi ~:IIIW rlw
W% \nr-iarion in measurement over which proponional lu0p 10 cyc-Ic. In gcnc~:il. inrcxral time should 1~. prolxw-
ation will occur is cenrered around rfw scrpoint. M’hen donal to how I& lhc pruccss rcspwlds tc) conrrol :tcliolt.
the measurement begins to fall, foIlowing a load in- If the time is too sllort, it will drive 111~ ljl~;~l ~IW’KIIO~ IO
crease. proportional and integral actions return the mea- iu limit bcforc llic mcasurcmcrit is ltblc (0 rc*In~l.
suremrnt 10 the setpoint \-ia a quarter-amplitude T h e n , when TIC mcasurcnlent dws rc*sI’ol~(l, i t \\iII
damped response. o~ershuc the scrIw)int--c;:using the inrcgrlrl IO tlrivr III<
The contriburion of inlcgral is IO increase he bias operclror 10 its oplxA(c limit.
vanced in time. The size of this advance is the den’\
time. D,, min. Derivative action is sometimes erron‘ousiy
referred to as “anticipating” action. (Note: The conrrol-
Icr can only respond to a real error, and cannot antici-
pate rhr arrival of an error.) Increasing the derivative
time will gencr,~[c a dryer derivative responst th3t c\.ill
appear as a larger time difference between the w.0
In these ;ip,plications. a “st\itch” may be added to the responses in Fig. 7b.
integral circuit (c\ herher electronic or pneumatic) of the Following the techniques for proportional anti inre-
controller. This sititch has become knoi\n as a “batch ,gra.l actions, earlier controllers applied derivative xrion
switch” because the windup problem is primarily associ- co he error. However, this causes the derivative action to
ated with diw)nrinuous or batch processes. Sec\er con- respond co both measurement and setpoint changes.
trollers and tr~ntrnl algorithms are designed to avoid the Since setpoint changes are usually made stepwise. this
iIlrcg1;iI ~;ICLII~.II~IIII or i\indup problem. approach often “bumped” the process with large output
spikes, as shown in Fig. 7a.
Adding derivative action Almost universally today, controllers are designed so
I’roI)orrior~.il JIMI integral actions share one serious that the derivative-response generator looks only at the
limkirion. .\ +fiiticsnr error nlUSt b e p r e s e n t before measurement signal. Initially, only the proportion.71 and
cithcr ()I’ ~IICX ttiotles genentes a strong response. integml actions respond CO changes in the setpoint.
Dcrivutivc .icriun is based on the principle char the LL’hen derivative action is combined with proponional
controller shuuld also respond to the rate at which the and integral actions the total response is given by:
mexurcmctrt is changing--s\ en though the actual error
is still small.
The open-luop response in Fig. ia sh0FL.s how deriva-
tive response is related to measurement. (The rate of
change mav he computed z an amount of change divid- where c, a controlled variable, represents the measure-
ed by the time oter rihich the change takes place.) For menr signal.
example. in Fig. ia: Eq. (5) describes an ideal, noninceracting controller.
l Point .4--.-\ step change appears. Because the In most three-mode controllers. some interaction occurs
change takes place in zero time, its rate is infinire. and among the control modes, so that changing any on
derivative action responds tcich an output spike. The the adjusrments has some effect on all the respong
response dir&on Ibill be determined by the controller
action. Fig. ia shu\cs the response for increase-increase Applying derivative action
action. Since :he measurement is steady after the step ‘Incorporating derivative action can significantly im-
change. rhe tleritstive contribution immediately returns prove control for processes having large lags. Derivative
to zero. action is the opposite of integral action. To genemte the
m Point B-.4 second. negative step appears. The de- derivative response, the dynamic inverse of a lag (i.e., a
rivative contribution responds :\ith a negative spike. lead) is built into the controller. Although derivative
a Point C-The measurement begins increasing ar a action also increases the gain of the controller, its lead
const;utc r3te. De&arive respwtds with a constant. posi- characteristics can effectively cancel a lag elsewhere in
rive contrihurioii char is proportional CO the rate of the control loop. and therefore shorten the period of
ctl;lrlge. oscillation. This can more than cancel the increase in the
W l’uint D-~fhe change in the rate of measurement period caused by integral action, even though offset is
u~~tl~rg(~.s .,(I increase. The derivative contt-ibudon in- still eliminated.
creases propurtionacely. The main disadvanrage of derivative action is sensitiv-
w Point E-The measurement stops changing. The ity to noise. Because it reacts to the rate-of-measurement
derivative contribution returns to zero. change, even ve? low-amplitude noise can cause large
The deriv.iriw response is unrelated to the absolute variations in controller output. In effect. the derivative
wlue of the riwsurement. \t’henever the measurement tries to control the noise-an impossible task.
s t o p s changitty, (he d e r i v a t i v e c o n t r i b u t i o n r e t u r n s co Since noisy measurements are usually responsive mea-
z e r o . \Vhen it starts to change, derivarice acrion opposes surements, the reduction in the period offered by derive-
that change whcrher the measurement is moving atcay tive action will nor lx a significant benefit. Hence, deriv-
front or toward the scrpoint. ative action should not be applied to noisy Lops.
The open-loop respunse in Fig. ih shor>.s htw propor- Controlled variables char are slow enough to benefit
tional 2nd tlcrivatice actions .ire combined in a conrrol- from derivative action (e.g., temperature) are usw.lly not
ler. \rL’hen the mr~surtwc’nt starts to change, derivative noisy. One excepti(Jn is the outpur of sampling anlll~zers
xtion g~nc“;~tes an imrnctlist~ roponse proportional to such as chromatographs. This signal, which changes
i t s r:ite of cfl;utge. A s thr nw;Iurcmcnt corlrinues t o stepwise. must be filtered before it is applied to a con-
ch;itixc, rhc ~l~rpuc chxlqs txczlu~c o f prop+Jr[ioncll troller having derivative action.
;ictic,ti. l{cc;ii~~c r,f’ dcriv;itivc xtion. tllc output imrnrtli- ‘I‘he ncxr arricle in rhis CE REFRESHER will ;ippwq
atcly rc;iclic\ ;L value th;lt it ~~~,uld not have reached until the Sept. 19rh issue, and will review principles and p
sc~rllctiillc I;trcr. ccdures for tuning control Iwps.
9nm Danafor. C,h.r
Tuning recess controllers
A review of the basic principles and procedures of controller
tuning will enable engineers to tune a variety of control loops
so as to achieve stability in the loops, and thus the process.
---_._-_ _
Thomas 3. Kinney, The Foxboro Co.
- - -

q (:(~~Crc~llcr tuning is accomplished by measuring cer-


tain contrc)l-ltx)p characrerisrics. The techniques for
these initi;ll measurementi are critical buse hey are
the basis tier >ubsequent controller set&.gs.
.I genemlired runing guide for derermming controller
settings retiuires the use of a model. The mcdeI usually
chosen-representative of many processes and their
control Systems--’ IS one having a first-order lag hi&
deadrime.
The control modes that we will consider are the most
frequently encountered combinations uf proportional,
integral ur reset. 2nd derivative actions. Our discussion
will focus on procedures for determining the final set-
tings for cAch mode. based on the fundamenral process-
control characteristics of capacitance. deadrime. and nac-
Ural per&I.
T o dctermirtr conrroller settings. IWO merhods- Time, [. c-in -
open-loop jrcp response and closed-loop qcling-are
used to ntc;1>ure their ch;iracrerisrics. The former will Open-loop response is typical of
yield the capacitance. T,. and deadrime. Tag; the latter. a process with deadtime and lag fig. 1
rite natural period. q,. (See Pan 1 of rhis series* for rhe
discussion of cJpaci(ance and deadtinte.)
measurement begins to rise is the deadrime. It can be
Open-hop metfiod calculated by measuring the disrance (in.) on the than
‘I’o tlctcrnliltc wp;Gnncr and deadtime via upcn-loop and dividing it by the chart speed (infmin).
r-csI~~sc (;~lso known as rhe rcscrion method). a record- The measurement (Fig. 1) rises to a final value rhar is
ills device having a fast chart-speed (say. % infmin) is the new steady-state. which resulu from r.he step change
ccmnecrtd to the measurement signal. The rest is then made in controller output. From &is curve (approximar-
perforrncd by: ing a response having a first-order lag for a single capaci-
I. PI;&tg the recorder in the high-speed mode, with tance system), the time consrant. deadrime. and the proc-
the culltroller in rhe manual position and the mczure- ess-response rate or slope of rhe control loop can be
IIICIIC lined-our ;I[ a constJnc value. measured.
2. Al:tking a step change to rhe controller’s ~)utput at The units of measurement for calculating the slope are
some tixed v~luc, such as 3 to IO%; and. at the same the cc~nrrollrr settings-usually expressed as percent or
ritnc, m;lkirl# ;I mark on the recorder chart so [hat dead- time. The slope uf the response CUrve should be in units
tinle can he dererntincd. of pcrcenL/time. and is expresxd as:

bhere RR is response rate. I/min: 1Lf is the change in


mr%urrmenc. 5%; I is time. min: and SO is the change in
ourpur, 57(.
The controller scrtings for obuinin# a specific clc~~ri-
- Direction of paper
2cx3
-
-
-

Time-

a. Process b. Chart record

Temperature control for a heat-exchange process is analyzed via open-loop method fig. 2

Icq~ response can bc predicted from rhe results of this erhc final srcp is 10 drw a Langem IO (he masimt11n
methtd by using algorithms developed by Ziegler and rm of rise. and Iu measure the skyc of’ this line IO find
Sichols (J]. Cohen and Coon 121, Shinskey 131. et al. the response IXC. The slope is dctcrmined as f;~llows:
As an example of’ the open-loop method, let IJ~ cun- Y.YF
5idcr wnperature control cf’rhc heat-exchanger process Y-axis: - = !!3’3 change in input
XWF
shown in Fig. 25. Assume rhar rhe temperaLure of the
water leaving the esckmger is 100°F. the remperarure 023
S-axis: - = 0.33 min
transmitter has a span ranting from 0 to 200°F. and rhe 0.73
sleam pressure remains COIISLJIU. A than recorder is ar- Since rhe slope is propnrrional IO Ihc size of rltc ULII~~UI
r~hccl IO rhc measurenwlt signal. The measurement
s~cp. the units IIWSI be normnlizcd before ihe A~x is
k-fore and after rhe SIC)) ch:mge wuld appear 011 rhc coilipulcd. lo 3c~~~11111 for lhc lx’lw111 il~;iii~~~ 111;1clc iii
rcioi&r 2s 5how in Fig. 21~.
rhc nutpur. Hewc. ilie respmx raw. RR. bcccw~~:
For rhis esamplc. Ihc rccordcr has a charI speed 4’ t’,
i11./111i11. AI I’oinI A
(Fis. !!I)). a srcp change of +2tjC; is
III;I& IO rhc ouqw~ 01 rhc con~rollcr. ;wd a~ ~hc SII~IC
Iime ;I mark for the III~:I~~I'CII~CII~ signs1 is made on 111c
charI. .A( Point B. Ihc mcasurcnwnl begins 10 rise. 2nd Closed-loop cycling
rc~hcs a final value of INoF. This tempcraIurc iIlcrcJV2 The tloscd-b*q) c!t,li11g III~IIICWI i s lryul;lr !~~:11t\<
wrrrspwds to the increase in sIeam flow. The than only one lxirawiw i3 1~21si11~~l. Iis tii.\;1dv;1111;1p~ i\ 111;11
trawl through the recorder (afwr Ihe step chanqe i5 s o m e o n l i n e prcxwscs winoI IK’ nllo\\ccl 10 ct& 101.
iii~tie arid before lhc mcasurenienr rises) is 5% in. Since even a shorr pcritri 0l‘Iime. By causiiy a coriirol ltqj I O
the clwI moves aI ?G in./min. the dcadIin1e is calculnIed cycle aI a constant aInl)liIude and perwd. iIs na1111:1l lx-
fio111: riod. 9,. can be dclermined. .r\n example of a 1w.1w1~c’-
ment IhaI is cycling sinusoidallg is shown in Fig. 3.
To induce ~c~~i~~;r~~t-;lr~i~~litudc cycling in a pr(wsb-
cunrrol Itwq). il iz wwss;Irv lo:
Slr.l, 1. .\l:iLr 21irc lIi;tl Ilk I is in ;1 sublc co11diIioii.
U-t, -7. :\djw rhc i11Icgr;1l (I) and/or clrriwi~c (I))
nlcKfC~ IO i&ihww :rc.iicw il' 111r colliroiicr 1~12 ~I,OI c'
Ill:tll 01lc 11l1~lc (i.c.. lwolx~t Iio11;1l ltl115 i111cjy.J. 01' ,)t~t-
1~~11ic~fl:il l1l113 i111cy;il l1111s tlcri\;iIivc).
.W/1 ;. .\l:rL1.;1 \1vl) 1 lr;111gv ii1 Ilit* ~111fI~c~llcr’~ 3c.llwji1il.
i3lld (Jr*ct.\~. llw itwtlli11~ 111~;1’411rciii~11I 1311~.
Slf./t 4. Kctlutc Il1c l~1~ol~n~iio11.1l I~;iirtl liii~lli~i i
nic;Iw1~cntcnI ~$1 IC ~;IIII~M OIII IO :I ~IG~+-~I;~I~ t.B
and. I1Jlcw.i1ig Ii1is. 111;iLc ;iiioIlicr ili;tiigc ii1 111~ 1111111111-
Icr’5 \c.llKtiill.
Time -

Canstant-amplitude cycling is
typical for the closed-loop method Fig. 3

I .lii in.
;= = 2.2 niin sponse will produce a larger total error than Q.-D but
0.75 iii;‘uiin
may Ix acceptable. depending on the particular process
requir~me3irs.
If rhe gain of the conwolkr is incrc;wtl further. prtr
longed cycling will occur from an upset. This c1.p~ oI’
responx’is referred co as ?mclerdnmpecl.” and results in
a snlaller cle\iarion from the setpoint (see curie in Fig.
-lb). Conversely. if rhe gain is reduced, the response co 3 1 1
upset will be reduced. resulting in a large deviation from

Time -
a. Quacar-amplitude damping

Time -

Time -
b. Effscca of proponional band

How fast a loop stabilizes to an upset


depends on proportional band Fig. 4
xrp)int (see cume). and a response referred to as “over-
dompcd.”
‘I‘hc fimnul~s dcvclopcd by Zicylcr and Sichols (I] for
prvdicring controller settings to produce QAB are based
on a process model having a capaciry thar is purely inte-
graring. In the exampie of Fig. 5, the level in rhe tank
cc~rresponds to the integrated value of flow. If a change
in illflow occurs and rhe outflow remains constant. the
tank will either empty or Oveffl(Jw. The steady-state gain,
C,,. of this process is infinity. and the process IS said 10 be
non-selfreguulaiing.
If’ the outflow from the wnk is affected by changes in
the inflow, the level in the tank will likely reach a stead?
scale if the inflow upset is no~ UKI large. This type of
process response is said IO be selfregularing. Cohen and
Gun (21 developed relationships for predicting conrrol- Rario of deadrime IO capaciry. T&T,
ler settings lo account for selfregulation. a. Chart for naiural De&d. T-
However, it is recommended [hat the Ziegler and
Nichols relationships bc used rxhcr lhan those of’Cohen The natural period can be approximated for tuning !
and Gun. unless the r;itio Tar r,,7/51 becomes greater
than 0.1.
Procedures and guidelines for tuning rhe common
combinations of proportional, integral and derivative where (PB)* is the proportional-band setting that pm-
modes. along with criteria for their evaluation, will fol- duces constanr-amplitude cycling.
low. The analysis of selected settings is required to com-
pensate for errors in measuremenr and adjusrmencs. In Open loop
this respect. the procedures may be considered as an iter- 100 - 1 -
arive approach. 0)
PB - s,,?-%
Proportional-only mode \fethod: Cohen and Coon
The proportional-only controller finds application in
processes chat require a fast response and that. ar rhe i 1+f )
100
same time, can tolerare a consWnt deviation from the ser- (4)
PB= T,l-&
pclinr. The amount of this deviation is a function of prc+
ponional band and bias.
The proportional-only control!er has one adjustment If Q.io is nor desired. an increase in the proponiunxl
for tuning. Therclure. Q,a is an acceptable criterion. lxtnd c\.ill result in critical damping: a further incrc.c>c
The recommended serrings are: \\.ill produce overdamping. Decreasing rhe proportional
hand from rhe QilD setring will create undcrdamping.
.\lcthtd: Zicglcr and Nichols, and Shinskey
Closed loop Proportional-plus-integd mode
el‘\le prO~r~~ona~-~)tils-~n~~~rdt (1’1) CWltrdkr is {lnd?-
PB = 2(PB)’ (3 ably the une mu9 olicn ciicountcrcd. 11s xlv:mI~~~~ is
fast response and zero d&&ii fIXJlll rtic sct~m~iiir JI
steady smte. The tuning prcrcdure for LI PI COIII~C~~CI is
sorne\\.hat more diflicul~ 10 evaluate bcc;tusc (WI xlju~l-
menu exist, and many combinations of lhcse will pru
duce Q.-W. Therefore. orher criteria are necessary 10
evaluate the predicted controller settings.
Shinskey (31 has shown that the damped period of a
properly tuned PI controller will be approximately I.&.
For processes in which the natural period, T”, is difficult
10 determine. I~IC \ahle for r, and :l,T can be delerminrd
by rhc’opcn-lt*bp rne~htd: and the nalural frequency. T,.
apprusimarcd frow Fig. Ga.

O”lflO~

Outflow is not affqtted by level in tank Fig. 5


Time -
b. Quarter.ampiitude damping

proportional.plus-integral controller, and the response lo changes in integral action evaluated Fig. 6

Thr reconwended jetrings based on rhe measure- Open Imp


IIleflc of r,,. or r, 2nd robT, or both are:
100 0.9 0.9
Llethod: Ziegler and Sichols PB=T&g= (3.7)(0.67) = o-363
Closed loop PB = 27%
PB = 1(PB)’ 64 I = 3.33 TDo = 2.23 min
I = iJl.2 (jb) Losing the relationships of Eq. (7) yields:
where I is the reset time. min.
Open loop 100 o.gb + -$ = Wl + 0.1s) = o425
100 0.9 pB= ibr f-7 R O.tZ(3.7) '
-=
P5 r/IfR H PB = 233%
I = 3.33 iDr
S!cdl(Kt: mlrrl and Cwn = 3.3(0.65)(s) = O.-M min

IO0 il -? 3 (73)
(‘H= ;I,, R H Subsrituiing into Eq. (8) produces:
PB = 2{PB)’ = 2( l-10) = %O%
I = 0.43 7, = 0.43(2.1) = 0.95 min
The predicted Cohen-and-Coon setting results in a
higher controller gain [where G = 1001PB], because [heir
~lc~i~c~l: Sliinhkry equations contain a factor co account for selfreguiarion.
PB = 2(PB)’ w The Ziegler and Sichols methods make no provision fur
f = 0.43 i, this characteristic. Setrings predicted by the Shinskey
(Sb)
method result from a slighrly different error-anal!,is
Using the rxan~ple of the heat exchanger (Fig. 2) and approach, and are close 10 those of Ziegler 2nd Sichuls
hc rcsul~s t’roln the upen-loop and cluied-loop tees. the for the closed-loop lest. Errors in measurement &wren
whys, rGll IK ctrtem~incd hy each of rhcse mcrhw.Js for rhe opt-t-loop and closed-lclctp tests contribute to 4ightly
;I prc,pc)rtiorl.ll-plLrs-in~~~~~i wrltrollcr c* here r,,r = (J.M ditFerenr predicted settings.
min. r = 2.2 min. T, = 0.33 min. K,, = 3.i:min, and
(f’H)L 2 l40’/0. Proportional, integral and derivative modes
L’siclqL tllc Ziegler and .Vicholh rclatic~nships. Eq. (5~ The three-mode (PID) controller cannot be used on 3
31~1 (6). h2 propurG)nal Ixi11t1. PH. 2nd rcxt rime. 1. t;Jr j n&y measurement. or (Jn one thar changes srepwire.
cktd-tt~p ;d 0pcn-lt~tp rcypcJrlw$ dre calcul2rcd A: because rhe derivative conrriburion is based on the IIICI~-
I,,,.,,., = (t + Q
'I‘hc cn'cctivc dcriwkc time, D,,+, is:

&y/-d = -j+
-+-
I, a

Responses to derivative action for a proportional-


plus-integral-plus-derivative controller Fig. 7
‘1. \\'iwti I>, is larger titan I,. tire wtltributiori to C;I< ii
control action is rcwrrcd. III orhcr xwds. when setting
D, greater tflan I,. this changes the value !br I, more than
f-w Dnr,~,.
‘I‘iw rule-of-thumb is IO never adjust a controller so
th;tr dcrivatiw action is grcarer than integral action.
‘I’lje performance criteria for a PID conrrokr can be
ey;lluated by measuring the damped period. Optimum
tuniyg generally results with a Q.iD-period that is ap-
prostmatel! equal to the natural period. The damped
p&cd ~~~ili be referred to as r,.,n. and is equal to r,..
Recommendations for response settings are:
.\l~riwi: Ziegler and Nichols
Open Ioup
Ion I.2
-= ( I IA)
PB TllT R K
I = 2.0 T,,,- (I II,)
D = 0.3 T,,, (I Ic)
cIlwcf ioop
I'B = I.lx(PB)" ( IL’A)
I = 0.5 7, (I!!!)),
D = r./8 (I 2)
\tcth~ld: Cohen and Coon

(13a)

.\lctfitrl: Siiitiskcy
PB = 4.0(m)* (I-la)
I = 0.3 7, (I-lb)
. D=O.l2s, (IW
Feedback methods for ‘-
process control systems
Special feedback techniques pro\-ide stability and operability
to processes and their associated control loops whenever nonlinear
characteristics are present in the measured or sampled variables.

Thomac j. .tfyOn,Jr., Tk Foxbo-ro Co.

0 Feedback control can be implemented \ia a number


of techniques. In Part 2 of this series.’ a change in the (24
measurement value. or an error 5ignJ-l. MA shown as
being the basic input to the controller for processes hav- iDT = (in& + Af (2b)
ing reasonably linear characteristics. Here, be will ex-
plore some advanced techniques for feedback control. k here T,, is the natural per&i. min; rDr is deadtime. min:
(car)? is process deadtime, min; and Ir is the sampling
The input signal to the controller may be deritcd from
sample dau. ratio control. (Jr the nonlinear charxTeris- inrenA for the feedback measuremem. min.
tics of the process system. A description of each rech-
Sub4turing Eq. (2b) inro Eq. (1) yields:
nique will indicate its applications: 70 = #%T)p + &I (3)
a Sompk &z--The process is dominared by dead-
time: or a control variable is meljured h! 2 derice rhar [then a process has vex-y little deadtime:
supplies an intermittent outpur to the control system.
(%DT)p -4 Af (42)
e.g.. a process chromacograph.
a Rati+The process to be controlled is affected by Then: i,=-lAf (4b)
the ratio between one variable and (at least) one other
in this case. Eq. (4b) indicates that the natural period
variable. The stuond variable can be either jeparrltel!
of oscillation is dominated by the sampling time of the
controltcd ur what is rrrmcrd “wild.”
feedback measurement
l ~l’orciiwfu--’ Ihe process bus a highly nonlinear
.in inline blending prcxess is shown in Fig. 1. Here, an
charxtcrisric such thxr the prcxess gain can rignificsntl~
additive is blended with a main fluid in a liquid-full pipe-
chnngr. 3s a function of either load or setpoinr. The pH
Line. Continuous control is performed by using a dis-
process exhibits such characteristics.
continuous measurement, The analyzer has a j-min
Sample data control sampling time (A = 5) and is connected to a propor-
tional-onl~ (P) controller. For this example. it will k as-
The effect of using a sampled measuremenr habiny a
sumed that the capacitive rime consnnt is essenrjaliy
time interval, &, is to introduce anorhcr deadtime ele- zero, and rhat since (car), & II. (AD+ = 0.
ment into the control loop. Slulriple dcadrime elements Effectively. this is a pure deadrime process under P-
in a loop are additive-i.e.. five I-min clemcnu ;Ire
only crmtrol. and rhe Iwp will oscillate, sn that:
equivalent to ol,e j-min elemeltt. from the biwr[Arint of
clost+l~p &havior, an observer r\ill not h ;rhlc tf) dir i* = 27, (W
tern the individual charclctcrislics of each tltadrirne &-
menr+nly the additive rt’fect. As XI\ \hor\ n in !-‘~lrt l Since rhis process is dominated by the sampling inter-
(Chm. &q.. M;lv 30, pp. G’L-h-4). ;i tvpicA f>r’x-c” hct\ in*< val for the analyzer. Eq. (ja) can be rrrixen as:
both deadtime &d capacity will r,\;ill;ltc.. >tich [hi: r*-2A.t t-l4
changed I O’;i (i.e.. incrcascd from 50 co 60%). ihc i
Additive I ess having the lW% pruporcional-band (PB) cwcrolicr
(fig. ?a) would have a I(wp gain of I .O, and never scccle
out. L\‘ich chc controller see at ‘LDO or GO% PB (Fig. 2b
alid 2~). the concruller loop would be stable. IAIL d~c
mcasuremcnc aould settle out at 557‘ and j2.5Cc. respcc-
tivcly. A high price, at least in terms of settling-time 01 r-
sec. has been incurred to achieve stable control.
Ideally, the measurement should come co setpoint in
one sample period (At). Since measurcmenc is not eqwl
6 CO setpoim when P-only control is used (exccpc for the
I I one load condition where rhe manual bias was SCI KJ
I&------l to 2 f .--
Prozen deadrime, (opt). make the measurement equal IO the setpoint). chc addi-
tion of integral (I) control action is necessary co rcwo~c
lnline blending process is the of’fser.
under closed-loop control Fig. 1 Fig. 3 illustrates the behavior OP intcF;r.&nnly COIIII-(II
in the process of Fig. I co a setpoint change I’wr tiilI’ct.cc~t
inccgr;ll-collcrullcr scrCnK:s a s r&ted UJ ihe rtlitplillS
rime. (Rcmcmt)Cr that a pure dcadlime process undcl
zc' l-only control will oscillate, so chat 7O = -I T,,~.)

.-2? &-J .---- -t- I \$‘hen I = II. the ideal response is achieved without
the need for adding proportional action. However. the
,' dynamic characteristic of the process shown in Fig. I was
s so- idealized by eliminating capacity and real process dead-
rime. Should a prcxess exhibit the assumed characceris-
5 ----_-_--_
> Fial------ ----, tics. the besr control would be l-only. where I = AL
52 The responses in these esamples were initialed via a
$ , Initial secpoint disturbance. Had they been initiated by a posi-
E L
Live (increasing) load disturbance. the results would h,-.u
been the same. A negative (decreasing) load disrurt
Sampiing incw4aI. If, min would have caused the offset (if any) to appear or;.
a. Proponional band = 100% ocher side of the setpoinr.
A generalized reedback-control example for a process
similar to chat of Fig. I is sh0b.n in Fig. 4a. Here. a
steady-sure gain, A’,. has &en included, and the process
deadtime, (:DT)p, is significant but less than the sampling
inrenal, Al.
Fig. 4b iliuscraces the closed-loop responses or the
process. These are somewhat similar co chose illusn-aced
in Fig. 3. However. recovery in one 11 (i.e., the sampling
time) is nor realized for the condkions indicated. The
addition of process deadcime has changed the process
characteristics.
Tlte question now arises \\.hc:Iicr a proportional plus
Sampling intawal, AC. min integral (PI) cunwuller can be used co permit ~IIC I IC’;~.S-
b . Proponional band c 200% uremenc co reach secpuint arcer one A/ when additional
process deadtime. (bob)+,. chat is less than the sampling
time 11 is present. N’hen a serpoinc change is introduced
IO a process such as that shown in Fig. 4a. it is knou-n chat
an error, c. resuiu. such chat:
lit = Ar - AC (3)
ichcre r is chc sccpoinc. and c is the conrrulled variable
(mc~surcnirnc).
.Sc;irring a~ IIK corlirollcr output in Fig. 4,. it is known
ha1 a chatigc in wlrcrollcr o u t p u t . &II. p~wiuccs a
‘ch~rlgc in tl~c. nw;~surclncnt. Jr. SW h chat:
, Ar = A’, h (1;)
Sampling inrerral, 9r. min
c. Proportional band = 430% Tl~c gain oi’ rl~c dcadtimc LICK-~ is not incIutk(
Eq. (6) because the steady-state gdin of any lwrc (1
Closed-loop responses to changes in the rime clcmenr is unily.
setioint for inline blendina oroblem Fiq. 2 Fur a t~\~~-mudc. proportional + intcgrtl. rollcr~~llcr. ;I
%lvirlg Eq. (6) for h. 2nd setting the result equ~ to
Eq. (7). yields:

Eq. (8) can be further modified when the folIoking are


considered:
I. The c term in Eq. (8) is in reality L &use all of
the error occurs when the setpoint change is made. i.e..
e = Ae.
2. The LU term is (he sample rime of the analvzer. Tme, min -
However. the XUXII time is 11 less the process desdime. Responses to integralonly action
or: Lir - (rD,)p for the inline blending problem Fig. 3
Changing Eq. (Y) to reflect these adjustmems yields:

Factoring out the L terms produces:

($q+g) A.1 - (y+g)iTor)r (10)


The objective is co have Ir = Ae in one sample period.
By letting f = (T~~)~, Eq. (10) becomes:

Solving for rhe proporrional band. PB:


Af
PB = 100K,----- (12)
(?r),
and: f = (%T)p (13)
Ratio control
Ratio control is the simplest form of feedforward con-
croi in ttrAt a load variable (the wild tlow) is used IO calcu-
LICC the ~tpc~inr of awdwr cunrrol loop. For rhe most
p:lrt. ratio control is primarily concerned i*ith the ratio
of one Ilowing stream (gas. liquid ur solid, or their com-
binations) with respect co another.
Ratio control can be applied in a manual-set mrxfe
where the user fixes the rariu of one stream Ah respect
to another, or in a variable mttde where the ratio is con-
tinuously adjusted-usually via a feedback lop.
Variable-ratio conrrol i,iil not be discussed in d&l
here because it is &tter handled under feedfon~.ard con-
trol. where a knowledge of instrument scaling ic re-
quired. Variable-ratio control is applied rbhen ujrnc
Time, min -
property of the process or prcxess inputi is n~>r c~~ns~nr.
b. Clowd.looq~ raponses
In such cases, a manuA setting will give an inccJn>i\renr
ratio between the controlled ~IKJ t\ilcl tariahles. For CX-
ample. the ratio (Jf rctxJilcr heat input to coiumn Ired
flow can be ~rlanuai[y SCI. Ficjc..c\cr. if the feed cotnI)c,si-
cion changes significdrttly (;isccl~llirIK no f&l ;ir~sl: /.er is
av;Able) or fouling ;Ilrcrs the hear-trancfer chdrdcttris-
tics of the r&Jiler, the r2lic, nluht IX iricrc.J>ctl {SC tic-
L\‘irh Eq. (I 8). a nonlinear division is replaced by A
ear (conscanr gain) mulriplicaGon. Fig. 5b illu~~ra:cs rhc
preferred control arrangement.
The “R” (rario) and “FC” (flow controller) blcxks uf
Fig. 5b are normally contained in one piece of hardware
called a ratio flow controller. Typical ranges for rhc ra-
tios available in ratio controllers are: 0 to 1.0. 0 w 3.0.
and 0.6 to 1.3. The range chosen is usually based upon
the application.
There is no limit 10 the ratio range that could lx used.
In practice, it should be remembered that a ratio \-~luc is
essentially a “gain.” and that tie higher the ratio XAIC.
1 R = F,/F, the more sensitive the setpoint change becomes I O
a. Direct control of ratio changes in the flow signal of rhe wild stream.
For a given application, the principal ratio furor i\
handled at the transmiltcr level. If a icn-m-one rJ(io is
desired, rhe transmitters are sdc~rcd so as to have a WI-
to-one diffcrcncc in their flow ranges. This choice ;~llows
the signal Icvels of each transmitter 10 be about equal. as
the actual flowrates vary from 0 to 100%. The ratio set-
ting on the instrument faceplate is easily determined. as
illustrated by the following example.
Let: F.,=Oto lOgpm= lOTA (A)
FB=Ooo 100gpm=lOOF’B (B)
R = 0 10 0.1 = O.lR’ (C)
- A = F,/F,
where F’,, F’B and R’ are the percentage instrumenr-sig-
b. Referred control
nal values, expressed as decimals.
Since R = F,,/F, = 0.1, the equation co be soI\
Nonlinaar vs. linear control of ratio Fig. 5
F-* = O.lF’&q
Substituting Eq. (A) and (B) into Eq. (D) gives ihe flow
creased. depending upon the changes. In this instance, a
relationships in rerms of tie instrument signals, or:
temperature conrroller in a stripping section of the col-
umn could be used to continuously adjust (trim) the F’,, = l.OFB (E)
ratio.
Since the coeflicicnt of the Fs term is 1.0. an instru-
In ratio control. the controlled variable is in reality the
merit range would be selected to include the ratio value
ratio. R. of one variable IO another. For example:
of 1.0. Thus. any of the ratio ranges previously mcn-
R = FAIFB (14) rioned could bc used.
The range 0 tc) 1.0 might be considered if the user
where F,, is the flowrare of Irtateriai A, and F8 is the
{canred to ensure t h a t the flow. F.,, ncvcr CWX&~
flowrate of material B.
0. IF,, but could go lower. Chcw,.cillg 0 IO 3.0 pru\it!cs a
Nonlinear characteristics of ratio control wide range of ratios ah~~r the norlll;ll scrtillg of I .(I. ‘1‘11~
0.6 10 1.3 range would provide a higIl-rcsc~l~rlic~tl ;+>I-
Fig. ja illustrates a ratio flow process where the ratio,
men1 about rhe normal setting.
R, is the controlled variable. An examination of the prcx-
If rhe range of F., in Eq. (A) had bren 0 to 15 gpm.
ess-gain characrerisdcs (assuming Fd is varied to maintain
Eq. (E) would &come:
R) shows:
FA = 0.6iF’B (F)
R = F,,( l/FB) (13)
dRldF, = l/FR (16) And. the instrumen( ratio setting for Eq. (F) would then
b-e 0.67.
Changing the manipulalcd variable from F,, IU F8 .so
that FA is the wild variable in Fig. ja yields:
dRldF, = -F,,/(FB)’ (17
Eq. (16) and (17) illustrarc the higllly nonlincx n;I(urc
of Lhe ratio process when R is conlrollcd directly. In this
arrangement. rhe loops would have to be iuncd for the Nonlinear control
worst case (i.e., low flow-ares). This would rrsult in slug- Sonic i~pical pr0ccss-~4i1i cltarxtcristics ;IIc 511(1w
gish conwol and higher loads. The problem is climinared Fig. 6. In a linear prcress (Fis. Ga). 111~ g;lill is U)IISLIIIT
by rearranging Eq. (14) IO: m3wr where the collrrol p&C is set. :\.ss~inliil~ 110 (blllcr
FA=RFB (18) nonlilicar elcmtnis i n rhe conlrul Itn~ps, ;I 0~illrcJlvr

236 OIL.,IL,L I~~.l\tLYI\~: \l.J\‘~\,~,k I,. I”.,


1cGos G
.
100% u
- _.ii
Manipulared variable. m Vanipllated variable, m Manipulared variable. m
a. Constant gain b. Moderately nonlinear c. Highly nonlinear

Process gain characteristics determine whether linear or nonlinear control techniques will apply Fig. 6

tuned x one o@eracing setpoinr $41 remain stable over C. the result is a very low process gain, and a controller
the entire range of operating setpoinrs. hating a vex-y low gain will cause the process to hang ac
Fig. 6b represents a moderately nonlinear prcxess, Point .I or C. The controller output eventually adds
e.g.. the change in slope (gain) is equal co or less than A co enough reagenr co cause the measurement to ovenhooc
1 when rhe manipulared variable varies from 0 co lOOF;. Point 3. This sequence of events usually repeats &elf
The process gain is a function of the openring point. .A indefinitely; and if recorded on a chart. the area between
controller tuned at Point A would behave in a more slug- Points A and C rapidly fills with many lines.
gish manner if the serpoint were moved to Point B. Simi- A pH process described by Shinskey [i] is shown in
larly. a controiler tuned at Point B would lx more re- Fig. 7. Here, a strong-acidlstrong-base neutralization is
sponsive if the process operarion were changed 10 P&x being controlled to a setpoint for the neutral value of
A; and in the extreme could become marginally swhle pH = 7.0. Shinskey calculates the controller gain, G,. for
(sustained oscillation) or even unswhle. this process as 0.033: and the proportional band. PB =
Fig. 6b is typical of many thermal-type prrx-esies. The IOWC,. as 3.030.’
simplest way co overcome moderate nonlinearity of iuch Since mosr industrial controllers do nor have 3 propor-
processes is to include another element in the Lp. hav- tional-hand adjuxment above about 1.000. any attempt
ing characteristics opposite those of the process-gain CO tune a loop similar co the one in this example will be
characteristic. Such an element is the equal-percentage ineffecrite. In fact, it mighr make one assume that rhe
valve whose characteristic is opposite co char of rhe prc-
ess shown in Fig. 6b. The resulting combination of \aive
plus process has decreased the nonlinear characrerisric
fi)r the system. It’thc march between the vahe and prclc-
IS were pcrtiyt. the resultant chxacrerisric ~uld be
cc~w~~lctcly liilcar.
If a tinal operator having a linear inpuc’ourpuc rela-
tionship is used for a process such as char represented in
Fig. 6b. a signal characterizer (having the opposite char- OH
actcrization of the process) could be insrrllled in rhe out-
put of the conrrollcr. This would result in an overall Iin-
ear characteristic.
Fig. 6c is typical of 3 highly nonlinear prcxx\c. c.Y..
hqin change is greater than IO to I. ‘The pH pr(kc?s
&id-base neutralization) is typical. Krllux-rcmpcrxurt
Itx~ps also have 3 similar chara<tlx%c c\.hrn [he fr#iliny-

_.-
I ,11\,,< ,I ,‘.‘.!‘.ii..!‘.‘* ‘.t,:,‘,LtY :, : ..; ?‘li
For exam+. if (PU),, = 300%,. and rhc slope .
(I’tor = 3.OUtYZ. Outside of the deadband. the prupr-
tional balld of the controller is (PB),.

Gap action via a nonlinear controller


A nonlinear controller functions as a gap-acrio!l floar-
ing controller when its output has a slope of zero. In
Fig. 9. flow LO a process is supplied from two pipelines: a
constant or base load is flowing through rhe large valve.
and a manipulated amount through the small valve.
As long as the output from rhe conrroller, SIC. is b-z-
rween 30 and 50%. rhe small valve is able to cunrrol rhe
process. If the small valve tries co open more rhan 70% vr
close less than 30% the base load needs readjuwwllt.
The deadband of the nonlinear controller. C’PC. wuld
- o + be se( ar ~30% with a slope of zero. The scrpoint of \.I’(:
Error could be any value within the dcadband (say. 50’2) and
its nwasuremen( is the output of SIC. The major shorl-
coming of gagackn control is that the mcasuremcm
Characteristics of a nonlinear controller fig. 8
rends to hang at or near the edge of rhe deadband.
Orher applications of gap-action control, using a non-
linear controller. include surge-tank level control. and
controller was inoperative once the proponional-band control of processes having measurement noise. For
adjusrment had been set at its limit. surge-unk level control, the surge vessel will absorb rhe
In order to handle a process having characteristics inflow and not pass it co the downstream process as long
similar IO rhox in Fig. 6 or the pH example, a controller as rhe level is between 20 and SO% of measured hei!hr.
Grh a characterisric opposite that of the process is re- Here, a small amount of gain within the deadband might
quired. This is illustrated in Fig. 8, where the dashed be considered so as to slowly pass rhe surge to a down-
diagonal line represents a linear control characteristic. stream process.
The two adjusimenu available to the user are the In some instances. measurement noise in rhe
deadband width and the slope of the line G&in the such as rhar oused by pulsations or mixing ma,
deadband. as indicated schematically in Fig. 8. The slope escess of say ~5%. It may not be desirable to hale a ilnal
varies from 0 to 1.0. At zero slop. the line between rhe operator respond to rhis noise via rhe proportional action
breaL.poinrs is horizontal. .Ar a slope of 1 .O. the nonlinear of the conrroller. Here. a nonlinear controller Gih a
characteristic is completely remoaed, and the controller deadband equal to the noise band of rhe ineasurcment
becomes a con\endonal linear controller. as show by can be used 10 sieady the controller’s ou~pur.
the dashed diagonal line in Fig. 8.
The effecti\-e proponional band, (P&E. birhin the Coming Swn
dead band is: The nest anicle in this CE FLEFFSHER will appear in
the issue of Feb. 6. 1984. and will cover the direr s) nrhc-
(P&E = VWSlopf sis conrroller and adaptive control.
where (PB)o is tie searing for the proporrional band on Skm Donclru. Fddn
the controller dial.

- References

The author

Nonlinaar controller serves as a


WP-action floatina controller Fia. 9
Direct-synthesis
and ad tive controls
A model consisting of the steady-state gain, deadtime and lag for an
actual process is the basis for such control systems. Damping and speed
of response of the controlled variable provide the tuning adjustments.

Paul C Badavas, The Foxh Co.

0 Direct-synthesis controllers (DSC) and adaptive The flow loop usually responds much faster than the
direct-synthesis controllers (.iDSC) can be used in corn- . temperacure loop. Hence, a standard PI controller is
position. temperature and vapor-pressure loops. They sufficient.
are also effective for plug-flow processes and solids- The outlet remperarure of the exchanger responds
transportation loops that tend to be dominated by dead- slowly to changes in steam flow becauseit takes time co
time. And they can be readily used as feedback-trim overcome the lags associated with transfer of heat to the
controllers in feedforward control schemes. product scream.
Direct-synthesis controllers provide a means for quan- Also. the rate of product flow affects the residence
tifying process information in a systematic and relatively time of the Ioop because at a given flow it takes a certain
simple way. amount of time to dispIace the product volume in the
Adaptive direct-synthesis controllers provide an op- tubes of the exchanger.
portunity to improve control further by adapting rhe Before designing a direct-synthesis controller for the
model pammeters on the basis of the measured process outlet temperature, we must Iirsc obtain a process model
variables. This employs additional information to char- for the loop that quantitifies the variation in ourlet
acterize process gain and dynamics. temperature with sream Row for a given product flow
Lt’e \\ill discuss the basic design of a DSC by using a before specifying its desired response.
hear-exchange process. This rv-ill also be used CO show
how an .UN: can be designed by adapting the parame- Process model
ters of the controller from the process variables. i%‘irh the temperature controller on manual, or the
‘Ihe direct-synthesis controller is designed by devel- flow controller on local set. a srep change is made to the
aping ;I process model to achieve a desired response for flow controller. and the response of the outlet tempeta-
a collrrollcd variable. In most cases, the parameters for lure. T2. is obsened. During the response time for rhis
the process model vat-y as a function of the measured experimenr, it is assumed that the load variable remains
variables. ‘[he. latter can then he used to continuously relatively constant.
adapt the model parameters to further improve the Fig. 1 b show a typical response. The time it takes for
response of the controlled variable to load upsets. the temperature to respond after a step change is iniriat-
ed is referred to as the deadrime, car.’ of the process.
Design of direct-synthesis controller The time it takes for the temperature to reach 63.2% of
in ihe heat-exchange process of Fig. la. the objective its final value from its starting point. but excluding
is to hear the product. wliich is fk><~in< at a rare f)f tt’> deadtime. is defined as the lag of the process, T,. The
and J[ an inlet temperature CJL’T,. to a tcmneracure 7,. h! ,tcady-irate gain of the loop. K. is obtained by dividing
manipulating StL’Jm floii. It’ . .A t y p i c a l method ii - the change in temperature by the change in steam flow.
.
US

or:
k’ = ~TJAW, (1)
:% here 1 T, is the steady-state change in outlet tempera-
ture. and 111’ is the change in steam Row, as shortn in
Fig. lb.
Stead>-state yain. deadtime and lag constitute the
0 Setpoint
TC -

Product. W,, 7,
-
Heat exchanger
%, TT Time-
* w
!l {I Damping ratio affects the
I :losed-loop response Fig. 2
t Condensate
a. Proceu

process model that is needed for designing the direct-


synthesis controller for the outlet temperature.
I 1
,’ Closed-loop response
t’
I’ Desired outlet-temperature responses to setpoint
Final ,’
changes when the temperature is under closed-loop
control (i.e.. temperature controller on autom
shown in Fig. Ic. Since the deadtime of the
cannot be speeded up or overcome, deadrime
desired closed-loop response is set equal to the deadtime
of the process.
The steady-state gain of the desired closed-loop re-
sponse must equal 1. This guarantees that the tempera-
ture is regulated at the desired setpoinr. and that it
follow setpoint and, hence, load changes without of&et.
In other words. the controlled variable returns to the
serpoint in the steady state, following load upsets or
serpoint changes.
The ratio of the desired closed-loop lag to process lag.
T/T~. is used to speed up the response. AS shown in Fig.
Time - lc. the faster the desired response, the smaller is the
b . Responre to step change ratio r/t,.
Possible responses for any control loop are shown in
Fig. 2. For an overdamped response. the temperature
slowly approaches but does not exceed the setpoint. .A
critically damped response means that the temperature
approaches as quickly as possible but does not overshoot
the setpoint. .Urhough an underdamped response eshib-
its cyclic behavior whose period is TV. the magnitude of
the periodic response decreases with time and, thus, the
loop remains stable. On the other hand. the under-
damped response has a unirorm oscillation of constant
amplitude and period.
For the underdamped response in Fig. Ic. the ampli-
tude of the second peak divided by that for the first peak
is wtmed the damping ratio. 4. or:
Time -
c. R~ponra under ciosed.lwp c o n t r o l 6 = AJ.4,
The smaller the damping ratio. the more damp, ,C
I Outlet-temperature control response.
I for a heat exchariger Fig. 1 The model for the DSC is used in a complcmcrn~~
1 I I 1 I ! t L I
0 1 2 3 4 5 6 7 8
Time, min
a. Overdamping

way. For example, the controller gain, Kc, is the inverse


of the process gain, K (i.e., Kc = I/K). Increasing or I t I t ! 1 t I I

decreasing the controller gain above the l/K value pro- 0 1 2 3 4 S 6 7 8


Time, min
duces the responses shown in Fig. 2. In essence. the
desired degree of damping is achieved by multipI!ing b. Underdamping

the controller gain by a damping gain, K,. Conrroller


gain is then redefined as:
Kc = K,/K (3)
7s - ,.X0-1.0
Damping gain is used co set rhe desired damping for 1

the response. For underdamped responses. it sets the


desired damping ratio. Of course, the damping gain can
be set low so as to obtain overdamped responses that
have no overshoot.
By varying the desired closed-loop lag r. rhe desired {
period of oscillation, re, is achieved. As the ratio of tie 1 2 3 4 5 6 7 3
desired closed-loop lag co the process lag. r,, is made Time, min
smaller, the faster will be [he response and rhe smaJler I c. Variable damping
will be the period of oscillation for the response r,-,.
Responses to a process having l-mm deadtime
Example: deadtime plus lag process plus I-min tag. and a process gain of 1 Fig. 3
A USC was applied to a process having a I-min dead-
time. I-min lag. and unity gain (i.e.. car = I min. r, = 1
min. and K = 1 .O). Load upset to rhe process was added can be achieved by varying the damping gain and the
to the controller’s output. The effects of the upset riere r/r, ratio. Of course, the process model for the simula-
recorded. and the responses are shown in Fig. 3. tion was exactly known. The following questions now
Speed of response as a function of the r/r, ratio is arise: How accunte must the model be? and, What are.
shown in Fig. 33. Process lag is 1.0 min. AS this ratio is the bounds of stability as far as the model parameters are
decreased from 1 .O co 0.1. closed-loop response CO the concerned?
load upset reaches rhe setpoint of 50% faster and fajcer.
In this case, the damping gain WJS set 10 K I, = 1 .r), 2nd Parameter sensitivity
aI1 responses were overdamped. The theoretical background for rhe DSC is given in Ref.
In order ho obtain underdamped responses. damping 1. as are the stability limits-a summary of which are
gain was increased IO K ,, = 1.73. Speed c,f re\ponse and given in Table I. For a pure deadtime process (the most
period ofoscilla[ion. ;,,. 3s jhot\n in Fiy. Sb. cJrcrcd>e as difficult to control), model deadrime can be as low as
the ratio T/T, decreases. Fr,r r/r, = 1.0. r9 = 3.1 min. jfJ’,G below. and as hiqh a s 233 above. the acrual
while for T/T, = O.!!i, T,, = 2.: minY process deadtime before uniform oscillation occurs. As
Responses co vari>hlc r!amptn;l <aIn. .h’,. arc’ ,hol*n in rhe Go of the process deadtime IO lag increases, these
F i g . jc. .& A’,, irlcrc-ax% the re,pcJn,c hccamcs less trahiliry hounds are relaxed even more.
damp&, i.e.. Ihe &w~l,irl;r fhuo aI\0 incrca~rs. t\.hen rhe deadtime is knoxn exactly, uniform o\cilla-
This example 5h,,t,\ rh;lt 3 family of dcrired recpon(es rion occurs c\ hen rhe model gain is 503 that of the actual
Outler~temoerature. 3?sirsd damping
I t Conrroller tuning
T,. setooint
Darired speed adjustmems
Directq~&asis \ r of response i
1 r
controller, 15s
Outlet
temperature. l Damping gain, Ko ’ Manipulated Controlled
T, . Speed-of.responsa variable. c
_ --v-e.

Process model :
l Steady-rue gain, K
l Deadtime. rOT

a. Process b. Controller

Main elements of a process and model for the direct-synthesis controller Fig. 4

gain of rhe pure deadtime process. This implies {hat 10 be used, rhe tuning parameters for the DSC would still
reasonable estimates of rhe model parameters will pro- be desired damping and speed of response.
vide sable control. The equation for rhe conrroller [I] is:

Design summary .\I” = P.~f”_, -I- (1 - P).~f,+l + K,fc”-, cle”_, (4)


The main elements of the direct-synthesis controller w h e r e K, = (1 - /3)/(X(1 - a], a = exp(-T/r,).
for the ourlet temperature of rhe heal exchanger of Fig. 1 P = exp(-T//r), c = r - c. r = serpoint. c = conrrolled
are: (1) the process model, consisring of the steady-srate variable, M = manipulated variable. T = sampling rime,
gain K. deadtime T Dr, and the lag 7,; and (2) the desired A’ = gain of rhe process model. TV = lag of rhe p
response. as chosen by the damping gain K,. and the model, rDr = deadrime of the process model, 7
desired rario for deadrime to process lag T/T,. These for the desired closed-loop response. S = nearest mr,-
procedures are summarized in Fig. ta. ger of r,,/T. n = current sample number, n--.1.-
Fig. -lb shows the direct-synthesis controller that con- 1 = sample number (S + 1) sampIe times ago.
rains the model of tie process implicirly, i.e., rhe conrrol- >ficroprocessor-based shared controllers have a rep-
ler complements rhe process. Fig. ib also shows sche- ertoire of well-defined control algorithms (or “blocks”)
maritally the MO tuning adjustments--desired damping that can be selected and configured by rhe user. These
and desired speed of response of rhe controlled variable. do not require special progamming.
These are the only tuning adjustments. and there is no In many processes, steady-stare gain, deadtime and lag
proportional. integral or derivative acrion to tune as is vary mainly as a function of variables that are measured.
true for a PID (proportional i integral + derkarive) For such siruaGons. the measured variables can be used
controller. Even if a more complex process model were to adapt the model parameters and, thus. provide further

OutleI~temperawt
se:ooint. T2
I

Adaptive direci-synthesis
controller, rose
Desired response: .
l Damping gain, Kn

. Speed.of-response Steam flow. .


ratio. T/T, *,
Process model:
l Sleadv-state gain, K
l Dead time. roT

l lag. r,

Characle&arion for K. r,,r, T,


d ' I 1 I 1
0 20 40 60 80 100 ;
Product flow. W,. % Product flow. VJ* 1
~~~~
1 Characterization of steady-state gain produces
Fig. 5
Adaptive-direct-synthesis-controller
parameters to adapt model gain, deadtime and lag relationships Fig. 6
contr()l improvemcnr. This is consistent l,.ith the phi[os-
ophv of using AS much knor*n information ;Ibout the Properties of common control loops Table Ii
process as possible in order IO improve its control.

Adaptive direct-synthesis control


In the heat-exchange example (Fig. I). the steady-stare
gain varies inversely with product flow. For instance, if
rhe product flow is cut in half. rhe temperature will
change twice as much for che same change in steam fiotv.
Deadrime also varies inversely with product Row because
rhe volume of the exchanger tubes is constant. As prod-
uct flow increases, it takes less time to displace the liquid
in the exchanger. Heat-exchanger tests show that the lag
of the process also varies inversely with product flow [J].
.A c!picrrl plor that characterizes the steadysrate gain
a~ a function of product flow is shocvn in Fig. j (cs.here
gain is expressed in multiples of the gain x full or 100%
product Hoc<). AS the product flow approaches zero, the
gain becomes infinite. To avoid division by zero (for zero
fb9. the gain is kept at a constant value, say K,,. up to
some chosen value of flow-in this case. 10%.
Characterizations of the process. deadtime and lag are
similar CO rhe characterization for gain because they also
vary inversely with flow. Thus, the process parameters
are obtained either by using a characcerizarion curve Feedfoncard-control systems are always based on
such 3s Fig. 3 or by computing them directly from rhe steady-srate and dynamic models. They compensate for
inverse relationship. In either case. the product flow is measured load variables on rhe basis of feedfordard
used 10 adapt the process-model gain. K. rhe deadtime, computations. These systems always have a feedback
r ,,r. and lag. 7,. trim controller co control the unmeasured load variables,
Moreover. speed of the desired response is also adapc- and co correct for inaccuracies in rhe feedforward model.
ed because the desired response lag is set in ratio co the The direct-svnrhesis controller can readily be used for
process lag. Since gain for the process model is adapted such feedback trim controllers because some under-
by the product flow, the damping gain (once set) pro- standing about the process model for the direct-sbnthe-
vides the desired damping regardless of product Row. sis controller already exists in the feedfotiard model.
The adapcise direct-sknchesis controller is the same as An application of the direct-skmhesis control!er is for
the direct-synthesis controller. with the addition of the plug-flow processes such as are found in the pulp-and-
process-model characterization as a function of product paper and aluminum industries. Other applications may
flow. and is summarized in Fig. 6. be found in the minerals-processing industries.
The next anide in this CE REFRESHER Hill appear in the
Where to use DSC and ADSC issue of Apr. 16. 1984.
Deadrime makes control more difficult. In particular. keven Danatos. Edi!or

the mtio of dradtime to lag for a process is a measure of


chc control dilficulty. The higher the ratio. the cougher Reference5
rhc control problem because no feedback informarion is
supplied to rhe controller during the deadrime portion
of the response.
.A [!-pica1 PID controller will have to be deruned in
order co remain stable for processes having a ratio of
deadtime to IJg geater than, say, 0.5 The direct-s)nthe-
sis colttroller can be tuned more tight+ because ii uses
the model of the process implicitly.
Table II lists the properties of common control lW)pq. The author
Obviously, the standard P I concrolicr ic sufficient for
liquid flow, level and prcsiure loops. and rhose ILr 735
pressure. These (oops are esscnriall: \inyle capacil:. . ‘jr
arc very fast if multiple capaciry.
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 4

New Concept in Cement Plant Control


F.L.SMIDTH Plant Services
Division

INTERNATIONAL CEMENT PRODUCTION


SEMINAR

LECTURE 13.1

NEW CONCEPT FOR

CEMENT PLANT CONTROL


NEW CONCEPTS FOR CEMENT PLANT CONTROL

1. Introduction

The introduction of new and advanced process computer and control systems on a cement plant lead to
improvements in all stages of the production process from selection of raw materials in the quarry to the
storage and dispatch of cement. Besides improving operating costs as energy consumption, raw material costs,
manpower and maintenance also external factors as pollution control and competition on product quality are
greatly influenced by the cement plant control system.

The prospected goal for introducing a modern control system is to centralize, stabilize and optimize the
complete process through automation, by real-time monitoring and control of quality and operating
parameters. This will, in return, lead to:

- Quality throughout the complete process


- Optimal raw material usage
- Minimum energy consumption
- Minimum power consumption
- Less impact on the environment
- More efficient maintenance
- Less required manpower
- Improved personnel skills and knowledge

Because every stage in the process is also greatly influenced by conditions and results upstream, it is
fundamental for all information to be available and accessible on a plant-wide basis. For that reason, the
success and benefits gained through automation will greatly depend on its implementation.

This paper will describe the individual elements of a modern control system for a cement plant, and it will
present FLS's proposed solutions.

2. General plant control

Modern general plant control systems provide for centralized control of all but a few areas of a cement plant.
Detailed information on every stage of the process is presented to the operator on CRT based operator
stations, whereas the actual control is performed through a network of microprocessors, typically one or more
for each production department.

A typical cement production line may cover 4-600 process signals and control loops (analog) and 3-4000
status signals (digital).

Many plants are still operating with control systems consisting of relay panels, hard wired interlockings,
control and mimic panels. Today, those systems are technically obsolete, may not function properly and are
hard to maintain and modify. They provide the operator with poor access and display of information and do
not permit centralization of the many local control rooms. Most plants are looking at replacing such systems
within reasonable cost and down time.

The reasons for implementation of a new control system are manyfold:


- Reduction in the number of operators.
- More reliable electrical system with positive influence on machinery down time and maintenance cost.
- More flexible, expandable and easier to modify by plant personnel.
- Possibilities for redundant (fail-safe) systems.
- Improved process understanding and control by the operators.
- Positive impact on the electrical staff which improves its skills and knowledge, and on the operators daily job
and behavior.

The following will describe some of the most important considerations in the implementation of a centralized
plant control system.

2.1 Operator stations

The operator workstations are the eyes of the operator into the process as well as his arms for carrying out
control actions. it is therefore of utmost importance that the capabilities of the operator stations allow the
operator effectively to monitor and control. Operator work stations for a cement plant should as a minimum
fulfill the following:

- screen update time:

It should take no longer than 1-2 seconds for the operator to change the screen picture from one part of the
process to another. It is also preferable that all normal selections are made in one key stroke, not through a
selection hiracy.

screen complexity

The system should allow for complex screen pictures, with up to 50 - 150 dynamic points in a graphic picture.
Experience has shown that operators, given the possibility, prefer to build complex graphics with all
information on a process department in only one picture. In an upset situation the operator will identify the
problem at once more by pattern recognition than by analysis.

process response

All events in the process either process changes or response to operator actions shall be displayed within 2
seconds. Many of the processes on a cement plant are slow, however during upset situations, start-up's or
shut-down's fast and correct actions are needed.

alarm reports

The availability of correctly time-stamped alarm lists covering both motor status and process values are
important for the operators analyses and correct action in an upset situation.

2.2 Sequence and interlocking control

80 – 90% of all control signals on a cement plant are associated with the control of motors. For the ease of
operation a modern plant control system is programmed with group starts in which the motors in a process
department or part of one will start in the correct sequence. Likewise in case of failure the interlocking system
will ensure a correct shut-down.
All information shall be available for the operator on the operator workstations for remote diagnostics.

The sequence and interlocking control must react within 0.2 seconds.

An important part of the alarm treatment is that the control system can be configured only to signal the cause
of an alarm situation as well as other important alarms and not the vast amount of unimportant alarms that
automatically follows a stop situations.

2.3 Regulation

Measurements and control loops on a cement plant are rather complex. The measurements are by nature very
fluctuating/imperfect and often the control variable is calculated based on one or more measurements.

The general plant control system must be able to handle complex treatment of signals and control loops and
with a scan frequency of down to 0.1 second.

2.4 Plant data base

The installation of a new plant control system provides the plant with a large and important amount of data. It
is often difficult to manage these data and decide which are relevant to whom, and their accessibility from all
levels is not always ensured.

FLS Automation offers a process computer based system for cement plant monitoring and control, the
FLS-SDR system. In the SDR system all monitoring and control modules are based on the
SDR/PlantDataBase (Fig 1).

The PlantDataBase allows a central database to be implemented with access from all production and management
levels.

The PlantDataBase module will:

- improve the general plant information system with new data storage and reporting facilities.
- facilitate the implementation of data redundancy because of data centralization.
- improve the general performance of the different control modules because of direct access to relevant
information without relying on human input of data.
- become a powerful tool for production planning and budgeting, as well as general plant management.

2.5 Typical control system configurations

FLS Automation offers implementation of a modern plant control system as the FLS-ACE concept, Adaptable
Control Engineering. This concept is based on the use of the latest available technology in:

- Programmable Logic Controller (PLC) systems

- Operator supervision and control Workstations or


- Distributed Control Systems (DCS)

Fig. 2 shows a PLC based control system with the following characteristics:

- One PLC for each process department handling both analog and digital control
- All PLC's are interconnected by a dualized data-highway or directly to the process computers
- Different PLC types may be incorporated in the integrated control system
- The operator level consists of a dualized process computer configuration with a total of 4 operator stations
- The PlantDataBase is residing in the process computers
- Optimization control is integrated in the process computers.

Fig. 3 shows a DCS control system with the following characteristics:

- One microprocessor based controller for each process department handling both analog and digital control
- All controllers are interconnected by a dualized highway
- The 4 operator stations are connected to the highway in groups of two
- The PlantDataBase is partly residing in the DCS system, partly in a separate process computer
- Optimization control is performed in a separate process computer.

3. Plant optimization control

The optimization control may be divided into two areas:

- chemical quality control covering the production laboratory and those control tasks that are directly
associated with chemical analyses.

- process optimization covering the supervisory control of the different process departments, i.e. raw mill,
kiln, cooler and cement mill.

In both areas the control tasks can be defined as a high level automation utilizing specific cement knowledge
of the complex processes.

For chemical quality control, FLS Automation offers the FLS-QCX System, primarily based on the utilization
of X-ray fluorescence analysis.

For process optimization, FLS Automation offers the FLS-SDR/FuzzyLogic system with modules for mill,
kiln and cooler control.

3.1 Chemical quality control

In order to perform all the quality controls for raw materials, raw meal, clinker, cements and additives, an
increased number of analyses must be carried out with correct and precise procedures to ensure good and
representative results and thereby lead to:

- improved general quality controls and procedures.


- improved short and long term product quality.
- improved awareness and knowledge of laboratory staff on quality requirements and implications.
However, a thorough quality control program will increase the laboratory workload considerably, and more and
more plants implement several or all of the steps by fully automating the laboratory.

FLS Automation offers the following modules for automating the cement plant laboratory based on the
FLS-QCX System. The different modules may be used to implement stepwise automated laboratory
procedures:

- QCX/AutoSampling module for use of automatic and continuous samplers, for all materials, and
pneumatic tube transport systems to deliver samples more rapidly and consistently to the laboratory.

- QCX/ManuPrep, QCX/AutoPrep or QCX/RoboLab modules for use of semi or fully automatic sample
preparation equipment for powder and fusion methods, with manual handling or automatic handling of
samples and equipment/instruments by an industrial robot (Fig 4).

- QCX/Laboratory module for use of analytical instruments such as X-ray spectrometer, X-ray
diffractometer, particle size analyser, surface area analyser, atomic absorption analyser, CO2/CO3 analyser,
concrete testing equipment, etc, and reporting of all relevant analytical results.

3.1.1 Raw mix control

Producing good cement requires continuous quality control along the complete process. This must start in
the quarry with the selection of the correct raw materials from the fronts and the supervision of the
stock-pile/silo build-up in order to obtain the right overall chemical composition.

The proportioning control of the different raw materials fed to the raw mill is the most important control in
order to avoid fluctuations downstream in the process.

The aim of the chemical quality control of the stock-pile and raw mill is to ensure:

- uniform chemical composition of the raw meal, with less than 1% LSF variation in the kiln feed
- the best burning conditions and clinker quality, and improved kiln stability.

FLS Automation offers the:

-QCX/Quarry module for quarry planning based on drill hole samples

-QCX/Pile module for stock-pile build-up control (Fig 5)

-QCX/Proportioner module for raw mix control (Fig 6)

The Proportioner module will automatically control and optimize the raw meal chemical composition. It will
minimize the variation in composition to facilitate the task of the homogenizing system. Based on periodic
analyses of raw meal samples, corrective actions will be performed on-line by changing the stock-pile/silo
feeder setpoints. A direct-search optimizing algorithm ensures the best possible control under the constraints
of operational and economic conditions.
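The following is a minimal sketch of the kind of correction such a module performs. The two-component mix, the raw material analyses and the simple bisection search are illustrative assumptions only; the QCX/Proportioner itself optimizes all feeder setpoints with a constrained direct-search algorithm.

# Minimal sketch of two-component raw mix proportioning toward a target LSF.
# The analyses below are assumed; one commonly used form of the lime saturation
# factor (LSF) is used, and a simple bisection search stands in for the real
# multi-feeder, constrained optimization.

LIMESTONE = {"CaO": 51.0, "SiO2": 4.0, "Al2O3": 1.0, "Fe2O3": 0.5}   # assumed analyses, %
CLAY      = {"CaO": 2.0, "SiO2": 60.0, "Al2O3": 18.0, "Fe2O3": 7.0}

def lsf(ox):
    return 100.0 * ox["CaO"] / (2.8 * ox["SiO2"] + 1.18 * ox["Al2O3"] + 0.65 * ox["Fe2O3"])

def blend(x):
    """Oxide analysis of a mix with limestone mass fraction x."""
    return {k: x * LIMESTONE[k] + (1.0 - x) * CLAY[k] for k in LIMESTONE}

def limestone_fraction_for(target_lsf, lo=0.5, hi=0.99, tol=1e-4):
    # LSF rises monotonically with the limestone fraction, so bisection suffices here.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lsf(blend(mid)) < target_lsf:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    x = limestone_fraction_for(98.0)
    print(f"limestone feeder setpoint ~ {100 * x:.1f}%  (mix LSF = {lsf(blend(x)):.1f})")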
3.1.2 Cement

Different cement types are provided to the customers, each having to meet special chemical requirements. To
obtain good quality cement for the different types, the proportioning of clinker and additives must be
supervised very closely taking into consideration the use of low cost additives instead of high cost clinker.

The QCX/Proportioner module for cement will automatically control and optimize the cement chemical
composition. Based on periodical analyses of cement samples, corrective actions will be performed on-line by
changing the clinker and additives feeder setpoints. A special recipe function enables easy change from one
cement quality to another.

3.2 Process optimization

For process optimization control, FLS Automation offers the FuzzyLogic system as a module to the FLS-SDR
System.

All the FuzzyLogic II modules for automatic control are based on a second generation expert shell which makes
it possible to duplicate the human way of reasoning and to elaborate control strategies with different degrees of
complexity and priority. These modules will perform overall and consistent evaluations of process conditions,
and execute adequate control actions on a more frequent and reliable basis than human operators. They are an
open tool which allows solutions specifically tailored to the needs of each plant to be implemented by
incorporating the best available control knowledge. They will assist the operators in their control tasks on a
24-hour basis, day after day.
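To make the idea concrete, the sketch below shows a single fuzzy rule set with one input and one output. The membership functions, rules and scaling are invented for illustration and are not the FLS FuzzyLogic II rule base, which evaluates many process conditions in parallel.

# Minimal fuzzy-control sketch: one input (burning zone temperature deviation from aim)
# and one output (fuel rate change). All numbers and rules are illustrative only.

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuel_change(temp_dev):
    """temp_dev in deg C (measured minus aim); returns a fuel rate change in t/h."""
    low  = tri(temp_dev, -100, -50, 0)     # burning zone too cold
    ok   = tri(temp_dev, -25, 0, 25)       # near aim
    high = tri(temp_dev, 0, 50, 100)       # too hot
    # Sugeno-style rules: cold -> add fuel, near aim -> no change, hot -> cut fuel.
    rules = [(low, +0.5), (ok, 0.0), (high, -0.5)]
    weight = sum(w for w, _ in rules)
    return sum(w * u for w, u in rules) / weight if weight > 0 else 0.0

if __name__ == "__main__":
    for dev in (-60, -20, 0, 30):
        print(f"temperature deviation {dev:+4d} degC -> fuel change {fuel_change(dev):+.2f} t/h")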

3.2.1 Kiln operation

Even if it appears easy to understand the kiln process and to produce clinker, it takes a lot more to obtain good
quality clinker while taking into consideration kiln stability, fuel efficiency, maximum output and protection of
the mechanical equipment. Indeed, the kiln process is affected by internal and external disturbances such as
changes in kiln control by the different operators, changes in kiln feed quality, kiln feed preparation and fuel
quality, cooler instability and coating conditions, all of which contribute to making control difficult.

The FuzzyLogic II automatic kiln control module (Fig 7) is used when the kiln has reached approximately
70% of its capacity. Based on regular analysis of the clinker quality, it will perform, during stable and most
of the unstable conditions, proper on-line control actions on the fuel and feed rate, and the kiln and ID fan
speed, in order to stabilize and optimize production.

The FuzzyLogic II automatic kiln control module considerably improves kiln stability, controlling the kiln
during 80-100% of the operating time.

In return, the improved kiln stability will:

- bring savings on fuel (3-5%) and refractory (30-50%)


- give more uniform clinker quality with better cement strengths and decreased specific power consumption
in the cement mill
- increase run factor and production rate (up to 5%).
3.2.2 Kiln start

The kiln start-up is a critical phase in the kiln operation with considerable risks for damage to the kiln and the
refractory. This period often accounts for a considerable production loss as the clinker produced is not always
of good quality, and it may lead to kiln instability during the first few days.

The FuzzyLogic II automatic kiln start-up module performs automatic control of the kiln from the time the
feed has been applied and will automatically switch to the kiln control module when nominal production has
been reached.

It will perform control actions to increase fuel rate, feed rate and kiln speed according to the thermal load and
with regard to avoiding thermal stress to the refractory and the shell. This results in a fast and safe start-up
phase.

3.2.3 Cooler operation

The grate cooler behavior greatly influences the burning zone and overall kiln stability, as well as the kiln
system heat efficiency. It is therefore of the utmost importance to maintain stability of the cooler, taking into
consideration the clinker exit temperature, secondary or tertiary air temperatures, cooler efficiency and heat
losses.

The FuzzyLogic II automatic cooler control (Fig 8) will:

- result in a greater cooler stability which in turn will influence the kiln stability
- increase the cooler efficiency with stable and higher secondary and tertiary air temperatures
- optimize the amount of cooling air
- improve the mechanical protection of the cooler and its auxiliaries.

3.2.4 Cement mill control

The quality of the cement produced depends not only on its chemical composition but also on its strength
values, which are affected by the fineness of the product. As the cement mill department is the biggest
electrical consumer in the plant, the cement production cost will also be decreased by maximizing the cement
mill grinding efficiency.

The FuzzyLogic II automatic cement mill control (Fig 9) is based on the same expert shell as for the automatic
kiln control. Based on regular quality controls on the cement fineness, it will perform proper on-line control
actions on the separator speed and the mill feed to achieve correct cement quality and maximize the
production.

The Fuzzy Logic II automatic cement mill control will:

- result in improved strength of the cement produced


- result in more uniform quality of the cement produced
- increase the grinding capacity of the mill because of lower recycling rates and continual maximum feed.

3.3 Electrical energy control


Managing electrical energy is an important key to overall profitability, as the power cost in many cases
represents almost 50% of the total energy costs. The use of electricity should therefore be minimized, but it is
a difficult task in the production planning to take into account all the relevant factors to evaluate power cost,
power demand and power consumption.

FLS Automation offers an electrical energy control system as a module to the FLS-SDR system, the
SDR/PowerGuide.

The PowerGuide module is an effective power management program based on production planning/scheduling,
and supervision and control of all energy-related variables at the plant. It can perform
complex calculations on power cost, power planning and power control, and advise on or execute on-line
start/stop of secondary machinery at the most favorable periods.

Depending on the local electricity tariff structure, substantial savings may result from the implementation of a
power planning system.
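As a simple illustration of the kind of calculation involved, the sketch below runs a piece of secondary machinery in the cheapest hours of an assumed three-rate tariff; the figures are invented and this is not the PowerGuide algorithm.

# Sketch of tariff-based scheduling: run a secondary machine for a required number of
# hours in the cheapest hours of the day. Tariff and load figures are assumed.

TARIFF = [0.04] * 7 + [0.10] * 10 + [0.07] * 7     # assumed price per kWh, hours 0..23

def cheapest_hours(run_hours_needed, tariff=TARIFF):
    """Hours in which to run, chosen by ascending price."""
    ranked = sorted(range(len(tariff)), key=lambda h: tariff[h])
    return sorted(ranked[:run_hours_needed])

def daily_cost(hours, load_kw, tariff=TARIFF):
    return sum(tariff[h] * load_kw for h in hours)

if __name__ == "__main__":
    hours = cheapest_hours(14)                     # e.g. a crusher needing 14 h/day
    print("run in hours:", hours)
    print(f"cost at 2000 kW: {daily_cost(hours, 2000):.0f}"
          f" vs {daily_cost(range(14), 2000):.0f} if simply run from midnight")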

3.4 Refractory control

The kiln and the refractory represent a high cost in investment and maintenance, and it is the responsibility of
the plant personnel to ensure proper kiln shell safety and refractory durability. The kiln run factor will also be
directly influenced by the number of stops for rebricking.

FLS Automation offers a kiln refractory control system as a module to the FLS-SDR system, the
SDR/CemScanner (Fig 10).

The CemScanner module performs on-line measurements of kiln shell temperatures using high speed,
state-of-the-art infrared line scanning techniques. It provides information on the overall temperature profile, lining
wear, uniformity of protective coating, geometry of ring formation and hot spot development.

In the short-term, the CemScanner module will result in:

- better protection of the kiln shell and refractory against thermal overload
- early detection of hot spots and counteraction against their development with external cooling
- improved follow-up of ring formation in the kiln.

and in the long-term, the CemScanner module will result in:

- reduced consumption of refractory.


- reduced kiln stops because of hot spots.
- improved refractory management and reduced kiln down-time because of better planning of kiln stops and of
the area to be rebricked.

3.5 Management information and control


With a modern control system including optimization control, management has gained a powerful tool for
further optimizing overall plant performance.

Based on the SDR/PlantDataBase, reports can be accessed from all production and management levels. Advanced
data management programs allow sorting, displaying and reporting of the available information
according to plant departments and demands, and become a powerful tool for production planning, budgeting
and general plant management.

Plant optimization control enhances management's influence on the actual plant control, as consistent
control philosophies can be implemented independently of changing operators and their performance.

The integrated control system facilitates changes in operation through recipe functions by which groups of
control programs, control loops, etc., may be changed with one command. A fast and safe transition
from one mode of operation to another greatly influences the operating costs.

In the laboratory, the introduction of laboratory information management systems (LIMS) has started the
integration of all functions in the laboratory, from chemical analysis and grain size distribution to physical
strength tests.

4. Conclusion

The share of investment devoted by the cement industry to the installation of modern computer and
control systems will continue to grow in the 90's. Because the size of the investments is non-negligible
and because the profits gained can vary tremendously, the implementation of the systems must be carefully
studied to avoid "islands" of automation. A long-term, well defined and coherent strategy for a step by step
implementation will prove to bring maximum results.

The benefits gained through automation in terms of Return On Investment can sometimes be difficult to
evaluate. While it is easy to give such figures when fuel, refractory or manpower savings are involved, it is almost
impossible to put a price on total quality and customer satisfaction; but in a world of increasing competition
and import/export, can today's managers still take the risk of low quality products?

The future developments in computer and control systems, on-line instrumentation and modern technology, as
we can already see today, will tend to link all stages of production even more closely, with a more continuous
process and shorter span times, from the quarry to the cement silos. To follow this trend and guarantee optimal
results, the use of a central plant database, to which all automation systems are connected, proves to be
necessary. It will become a new, powerful and indispensable tool for tomorrow's production planning and
general plant management.

In the 80's, FLS provided modern computer and control systems to the operators. In the 90's, FLS will
bring them to their managers' desks.
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 5

Modernization of Control Systems in


Cement Plants
MODERNISATION OF CONTROL SYSTEMS IN CEMENT PLANTS

SUMMARY

1 - INTRODUCTION

2 - WHY MODERNISE

3 - WHERE DO WE WANT TO GO

4 - HOW TO DO IT

5 - EXAMPLES

6 - FUTURE STEPS

7 - CONCLUSION
1 INTRODUCTION

The modern cement producing industry faces increasing demands on product quality and pollution control, as
well as for lower operating costs. It is of the utmost importance to minimize operating costs such as manpower,
maintenance and consumption of fuel and power.

Therefore we today see a trend in the cement industry towards modernization of existing control systems. In
this presentation we will discuss: why that is, what the goal is, how we actually do it, examples of
modernization, and control systems in the cement industry in the 90's.

2 WHY MODERNIZE

Many plants are still operating with old control systems consisting of relay panels, hard wired interlockings,
control and mimic panels. Today those systems are obsolete, do not function properly and are hard to modify.
They provide the operator with poor access and display of information and do not permit centralization of the
many control rooms. Therefore many plants are looking at replacing such systems within reasonable cost and
downtime.

The reasons for implementation of a new centralized control system are:

- Replace local control with central control, thereby reducing the number of operators.

- More reliable hardware with positive influence on machinery downtime and maintenance cost.

- Obtain a programmable control system, which can easily be adapted to changes in process and mechanical
equipment

- Improved process understanding and control by the operators

- Possibility for redundant (fail safe) systems

- Positive impact on the electrical staff, which improves its skills and knowledge, and on the operators' daily
job and behavior.

3 WHERE DO WE WANT TO GO

Implement a modern general plant control system, in FLS Automation called the FLS-ACE concept. This
concept is based on the use of the latest technology in:

- Programmable controller PLC/microprocessor system


- Operator supervision and control Workstations

The characteristics of such a system are:

- One PLC for each department handling both analog and digital control
- All PLC's interconnected by a dualized data-highway
- The operator level consists of a dualized process computer configuration with a total of 4 operator stations

Options to the system are:

- Process optimization control


- Chemical quality control
- Electrical energy control
- Management information and control

The performance of the blocks in the system should be:

Operator stations

The operator workstations are the eyes of the operator into the process as well as his arms for carrying
out control actions. It is therefore of the utmost importance that the capabilities of the operator stations
allow the operator to monitor and control effectively. Operator workstations for a cement plant should fulfill
the following:

Screen update time


It should take no longer than 1-2 seconds for the operator to change the screen picture from one part of the
process to another. It is also preferable that all normal selections are made in one keystroke, not through a
selection hierarchy.

Screen complexity
The system should allow for very complex screen pictures, with up to 50-150 dynamic points in a graphic
picture. Experience has shown that operators, given the possibility, prefer to build complex graphics with all
information on a process department in only one picture. In an upset situation the operator will identify the
problem at once, more by pattern recognition than by analysis.

Process response
All events in the process, whether process changes or responses to operator actions, shall be displayed within
2 seconds. Many of the processes in a cement plant are slow; however, during upset situations, start-ups or
shut-downs, fast and correct actions are needed.

Alarm reports
The availability of correctly time-stamped alarm lists covering both motor status and process variables is
important for the analysis and correction of upset situations.

Programmable logic controllers (PLC's)

Sequence and interlocking control


80-90% of all control signals in a cement works are associated with the control of motors. For ease of
operation, a modern general control system is programmed with group starts in which the motors in a process
department, or part of one, will start in the correct sequence. Likewise, in case of failure the interlocking system
will ensure a correct shut-down.
All information shall be available for the operator on the Workstations for remote diagnostics.

The sequence and interlocking control must react within 0.2 seconds.

Analog measurement and Loop control


Measurement and control loops on a cement plant can be complex. The measurements are by nature very
fluctuating and imperfect, and often the control variable is calculated based on one or more measurements.

The general plant control system must be able to handle complex treatment of signals and control loops, with
a scan cycle of not more than 0.5 seconds.

Plant Database
The installation of a new general plant control system provides the plant with a large and important amount of
data. It is often difficult to manage these data and decide which are relevant to whom, and their accessibility
from all levels is not always assured.

FLS Automation offers a process computer based system for cement plant monitoring and control, the
FLS-SDR system. In the SDR system all monitoring and control modules are based on the SDR/Plant Data
Base.

The Plant Data Base allows a central database to be implemented with access from all production and management
levels.

The Plant DataBase module will:


- improve the general plant information system with new data storage and reporting facilities.

- facilitate the implementation of data redundancy because of data centralization.

- improve the general performance of the different control modules because of direct access to relevant
information without relying on human input of data.

- become a powerful tool for production planning and budgeting as well as general plant management.

4 HOW TO DO IT

A budgetary cost for a modernization can be established on the basis of a flowsheet and discussions with the
plant, but in order to make a precise quotation some of the following steps have to be executed.

Consulting
An analysis of the existing control system

Project engineering
Basic engineering for the new control system
Integration
Implementation of basic engineering on the control system

Field services
Education, Installation, Commissioning

In the execution of a control system modernization it is an advantage for the customer and the project that the
customer participates in the different phases of the project, the level of customer involvement being decided by
the customer's organization and available resources.

Some examples of customer involvement in FLS executed modernization projects will be given.

It is, however, important to keep in mind that the work described in the following has to be done either by the
customer or by the supplier.

Consultancy

Plant visit
An experienced FLS control system engineer will, during the plant visit, go through the existing installations
and, through discussions with the client, decide which parts of the control system, the power system and the field
instrumentation should be modified or replaced.

Plant report
Based on the information gathered during the plant visit, FLS will produce a plant visit report with details of
the necessary scope of supply for a new control system. This report can be used as a tender basis for obtaining
quotations from both FLS and other suppliers of control systems.

System evaluation
The plant report will also contain a description of a proposed configuration for a new process control system.
The configuration will of course reflect the configuration which the client and FLS find most suitable for
complying with the requirements of the modified system.

Project planning
Further to the above, the plant report will contain a planning schedule for executing the engineering and
installation of the new control system. The proposed plan for the project will have been discussed with the
client during the plant visit.

Quotation
FLS will, if so desired, work out a proposal for the control system. The proposal may very well be divided
into steps, so that the new total control system can be purchased and installed over, say, 3 years.

As FLS Automation is a hardware independent company, the proposal will be based on the hardware preferred
by the client.

Project engineering

Project engineering is the engineering and information basis for configuration and programming of the control
system.
Program engineering
Program engineering describes operating procedures, start sequences, stop sequences, alarm philosophy and the
build-up of color displays, all discussed and agreed upon with the client.

Instrumentation/control loops.
Based on existing plant documentation and mainly information gathered during plant visits, FLS will produce
instrumentation lists and control loop specifications.

Motor control/signals
Based on existing plant documentation and mainly information gathered during plant visits, FLS will produce
signal lists and interlocking information.

Integration

Operator communication
Operator communication (man machine interface), which is the way the operator controls and monitors events
in the plant, is agreed upon with the client, taking into account the possibilities in the chosen hardware.

System configuration
The chosen hardware is configured in such a way that an optimum split-up of hardware and software is
achieved, taking into account plant layout and signal grouping.

PLC programming
The PLC's are programmed on the basis of the project engineering, which consists of:
- Signal lists (analog and digital)
- Interlocking specifications
- Control loop specifications
- Program engineering

System test
A full scale test of the control system is performed, enabling the engineers to de-bug the system
before installation in order to make downtime in the plant as short as possible. Client participation in this is
very valuable.

Control room layout


If desired, FLS will, based on FLS experience, work out a detailed proposed layout for the control room,
taking into account location of color monitors, printers and, if installed, back up instrumentation, in order to
create a suitable working environment for the operators.

Field service

Unit and loop diagrams


Unit and loop diagrams show all external connections such as motors, monitors, valves, transducers and
instruments. Each drawing treats one single unit or loop and forms the basis for the electrical erection work and
fault finding external to the control system itself.
Commissioning
FLS engineers who have participated in the engineering and testing of the control system will assist the client
in commissioning the control system.

Training
FLS will train the client's maintenance staff and operators in engineering, maintenance and operation of the
system. Part of the training can be "on the job" where the client's personnel participate in engineering and
testing of the control system.

Erection - supervision
FLS can place an engineer at the client's disposal for supervising the erection of the control system as well as
erection of motors and MCC's, if any.

Turn key
FLS can also undertake turn key responsibility, consisting of a total supply of:
- hardware, software
- installation, erection
- commissioning, training

5 EXAMPLES

5 examples of modernization projects will be discussed.

6 FUTURE STEPS

The next step could be to implement modules for:

- Process automation control


- Chemical quality control
- Refractory control
- Electrical energy control
- Management information and control

Process automation control
This control task could be defined as high level automation utilizing specific cement knowledge of the complex
processes. For process optimization control, FLS Automation offers the FuzzyLogic system as a module to the
SDR system.

The FuzzyLogic modules for automatic control are able to duplicate the human way of reasoning and to elaborate
control strategies with different degrees of complexity and priority.

It is an open tool which allows solutions specially tailored to the needs of each plant to be implemented by
incorporating the best available control knowledge. It will assist the operators in their control tasks on a 24 hour
basis, day after day.
FuzzyLogic systems are implemented for:
- Kiln operation
- Cooler operation
- Cement mill control

Chemical quality control

In order to perform all the quality controls for raw materials, raw meal, clinker, cements and additives, an
increased number of analyses must be carried out with correct and precise procedures to ensure good and
representative results and thereby lead to:

- Improved general quality controls and procedures.


- Improved short and long term product quality
- Improved quality of raw mix and cement

To achieve this FLS Automation offers the following modules based on the FLS QCX system:

- QCX/AutoSampling
- QCX/ManuPrep, QCX/AutoPrep or QCX/RoboLab
- QCX/Proportioner for raw mix and cement

Refractory control
The kiln and the refractory represent a high cost in investment and maintenance, and it is the responsibility
of the plant personnel to ensure proper kiln shell safety and refractory durability. The kiln run factor
will also be directly influenced by the number of stops for rebricking.

FLS Automation offers a kiln refractory control system as a module to the FLS-SDR system, the
SDR/CemScanner.

The CemScanner module performs on-line measurements of kiln shell temperatures using high speed,
state-of-the-art infrared line scanning techniques. It provides information on the overall temperature profile,
lining wear, uniformity of protective coating, geometry of ring formation and hot spot development.

Electrical energy control

Managing electrical energy is an important key to overall profitability, as the power cost in many cases
represents almost 50% of the total energy costs. The use of electricity should therefore be minimized, but it is
a difficult task in the production planning to take into account all the relevant factors to evaluate power cost,
power demand and power consumption.

FLS Automation offers an electrical energy control system as a module to the FLS-SDR system, the
SDR/PowerGuide.

The PowerGuide module is an effective power management program based on production planning/scheduling,
and supervision and control of all energy-related variables at the plant. It can perform
complex calculations on power cost, power planning and power control, and advise on or execute on-line
start/stop of secondary machinery at the most favorable periods.

Depending on the local electricity tariff structure, substantial savings may result from the implementation of a
power planning system.

Management information and control

With a modern control system including optimization control, management has gained a powerful tool for
further optimizing the overall performance.

Based on the SDR/PlantDataBase, reports can be accessed from all production and management levels. Advanced
data management programs allow sorting, displaying and reporting of the available information
according to plant departments and demands, and become a powerful tool for production planning, budgeting
and general plant management.

7 CONCLUSION

The share of investment devoted by the cement industry to the installation of modern computer and control
systems will continue to grow in the 90's. Because the size of the investment is non-negligible and
because the profits gained can vary tremendously, the implementation of the systems must be carefully
studied to avoid "islands" of automation. A long-term, well defined and coherent strategy for a step by step
implementation will prove to bring maximum results.

The benefits gained through automation in terms of Return On Investment can sometimes be difficult to
evaluate. While it is easy to give such figures when fuel, refractory or manpower savings are involved, it is almost
impossible to put a price on total quality and customer satisfaction; but in a world of increasing competition
and import/export, can today's managers still take the risk of low quality products?

The future developments in computer and control systems, on-line instrumentation and modern technology, as
we can already see today, will tend to link all stages of production even more closely, with a more continuous
process and shorter span times, from the quarry to the cement silos. To follow this trend and guarantee optimal
results, the use of a central plant database, to which all automation systems are connected, proves to be
necessary. It will become a new, powerful and indispensable tool for tomorrow's production planning and
general plant management.

In the 80's, FLS provided modern computer and control systems to the operators. In the 90's, FLS will
bring them to their managers' desks.
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 6

Basic Concepts for Feedback Control


BASIC CONCEPTS FOR FEEDBACK CONTROL
(Feedback Control in the Time Domain)

Much of our analysis will be in the time domain (as opposed to the frequency domain). While frequency
domain techniques have their various methods, e.g. Bode diagrams, transfer functions, Nyquist plots, etc., to
be able to implement them one should in most cases have a working knowledge of Laplace transforms. Even
so, these frequency domain tactics may not always be appropriate to the process control problem at hand.

In the time domain, we encounter the same type of equations describing our system as we did in the frequency
domain. However, an advantage of time domain methods is that they may be somewhat more easily
implemented in the real process, i.e. one can get a "feel" for time domain phenomena by observing them on some
type of display device.

Let's consider a process which may be manufacturing some product. Attached to this process is some final
actuator which controls the flow of mass or energy into the process.

The question is: why do we need process control? For indeed, once we set the position of the final actuator to
give us the desired product out, there really is no reason why the final actuator should be repositioned once it
is initially set (this might be referred to as open loop process control). This statement is true for the process
shown; however, the process shown may not be indicative of a real process. If it were, there would be no need
for any type of control applied to this process. A more realistic example should be considered:

Shown in Figure 2 is a process which is subject to load upsets (q). A load upset, sometimes referred to as a
transient upset, may be defined as any upset (except an input mass or energy upset) which may change the
quality of the product from the desired value.
This load or transient upset is contrasted with the supply upset, which is an upset to the input mass or energy to
the process. We might make a general statement concerning the load upset. In many cases, though not all,
we don't know how severe the transient upset will be, or its time of occurrence. All we know is that it will occur.
Due to the random nature of the load upset, it may not be convenient to station an operator to reposition the
final actuator to bring the quality of the product back to the desired value each time a load upset occurs; thus
the need for some type of closed loop Automatic Process Control.

Perhaps the simplest and most widely used method of process control is the Feedback Control Loop. It is
shown below:

Note that we still have the process and the final actuator as before; however, we have now added two other
components to the process to form the feedback control loop. With the addition of the transmitter and
controller we now hope to maintain the quality of the product (sometimes referred to as the dynamic variable)
constant in the presence of load upsets. This is the reason for the feedback control loop: TO MAINTAIN THE VALUE OF
THE DYNAMIC VARIABLE (product) AT A DESIRED VALUE IN THE PRESENCE OF CERTAIN
LOAD UPSETS. The dynamic variable can more fully be defined as a variable (such as temperature, flow,
composition, level, etc.) whose value may vary due to upsets occurring and affecting the process.

With the feedback control loop, we desire to maintain this dynamic variable at a desired value (the set point,
shown as r). The set point is an adjustment on the controller which represents the desired value of the dynamic
variable. Because the dynamic variable is different with each process, the reference, r, or set point value may
represent a desired temperature in degrees, a flow in gallons per minute, a level in feet or inches, or a variety of
other variables.

The value of the dynamic variable needs to be sensed with some primary sensor at the process and then to be
transmitted in the correct format to the controller which may be in another part of the plant. This job is
accomplished by the transmitter. The transmitter receives some type of signal from the primary sensor and
translates it to a language which the controller can understand. The output of the transmitter is sometimes
called the controlled variable (shown as c in the figure), or the measurement.
The measurement is received by the controller. The controller takes the value of the measurement, compares it
to the set point, r, in the section of the controller called the summer, and sends the summer output (called the
error, e = r - c) as an input to the control algorithm. Depending on the type of algorithm in the controller, the
sign (+ or -) and the magnitude of the error, the controller produces an output, m (sometimes referred to as the
manipulated variable), which goes as an input to the final actuator.

If a load upset has occurred such that the error is not zero, i.e. the measurement doesn't equal the set point, the
output of the controller, m, repositions the final actuator to alter the flow of mass or energy into the process,
which then returns the value of the dynamic variable back to the desired value, r.
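The following is a minimal discrete-time sketch of the sequence just described, using an assumed first-order process and a proportional-only control algorithm; the gains, bias and upset size are invented for illustration.

# Minimal sketch of a feedback loop: the summer forms e = r - c and the controller
# output m repositions the final actuator to counter a load upset. The process model
# (first-order, with an assumed steady-state relation) is for illustration only.

def simulate(steps=100, dt=1.0):
    r = 50.0                 # set point
    c = 50.0                 # dynamic (controlled) variable, starts at the set point
    m0, Kc = 25.0, 2.0       # controller bias and proportional gain
    tau = 10.0               # process time constant
    q = 0.0                  # load upset
    for k in range(steps):
        if k == 20:
            q = -5.0                         # load upset enters here
        e = r - c                            # summer: error = set point - measurement
        m = m0 + Kc * e                      # proportional control algorithm
        c_target = 25.0 + m + q              # assumed steady-state process relation
        c += (dt / tau) * (c_target - c)     # first-order (capacity-type) response
        if k % 20 == 0:
            print(f"step {k:3d}  c = {c:6.2f}  e = {e:+6.2f}  m = {m:6.2f}")

if __name__ == "__main__":
    simulate()

Note that the proportional-only controller leaves a small permanent offset after the upset; adding integral action removes it.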

Now that we have somewhat superficially examined the operation of a feedback control loop, we might
formulate several steps which define the operation of such a loop:

1. Observation of a desired parameter

2. Comparison with a desired value

3. Performance of the necessary action to bring the parameter to the desired value

The above steps define the operation of a feedback control loop, regardless of whether the controller is a
person or a machine.

Now we might consider how the controller returns the measurement back to the set point (step 3, above). This
requires us to investigate the dynamic response of the loop. Dynamic response can be defined as a measure of
the performance of the dynamic variable's return to the set point following an upset.

Up until now we have only been concerned with load upsets. There are several other types of upsets which
may concern us: the supply upset, mentioned earlier, which will be dealt with when considering cascaded systems,
and the set point change. A set point change represents a change in the desired value of the dynamic variable
and can be viewed as an upset. Whether the upset is a load or set point change, we can investigate both, e.g.

As observed above, numerous responses are possible in returning the measurement to the set point. C1 in each
case above would be considered a case of overdamping, i.e. a slow, sluggish return to set point. C2 is a case of
critical damping, i.e. the fastest return to set point without oscillation. C3 is a case where there is damped
oscillation, C4 is uniform (sustained) oscillation, and C5 is instability.

It is possible to adjust the feedback control loop to give any of the above responses. The type of response
desired depends on the process being controlled. For the most part, responses C1 – C3 would probably be
desired since each results in stability of the dynamic variable. C4 is useful in some cases for adjusting the
controller. Several methods developed for adjusting controllers for a given response depend on information
gained from the uniformly oscillating loops. C5 results in instability and is not desirable for control.

The type of response we obtain in our control loop depends on several factors we will now consider.
Type of feedback

Up until now we have been calling our control loop a feedback control loop. We learn now that there are two
types of feedback with which we will concern ourselves, positive and negative feedback.

Positive feedback can be defined as that control action which reinforces the movement of the dynamic
variable. For example, consider an air conditioned room with the thermostat set at some given temperature.
Whenever the temperature of the room drops below the thermostat setting, the cooling turns on;
whenever the room temperature exceeds the thermostat setting, the cooling turns off. This example of
positive feedback is clearly undesirable, resulting in saturation at a very high or low value of the dynamic
variable, and in most cases is not suitable for control.

Negative feedback is defined as that control action which does not reinforce the movement of the dynamic
variable. Considering again our previous example, whenever the room temperature rises above the thermostat
setting the cooling unit turns on, and it turns off whenever the room temperature drops below the setting. This case
of negative feedback is the desired action, resulting in control of the dynamic variable.
Negative and positive feedback are sometimes referred to as Increase/Decrease and Increase/Increase action
respectively. This arises from the consideration that with negative feedback, whenever there is an Increase in
the measurement there is a corresponding Decrease in the input to the process (or vice versa), resulting in
stable control. With positive feedback, an Increase in the measurement causes a corresponding Increase in
input to the process, resulting in saturation of the measurement and no control. Increase/Decrease action is
sometimes referred to as Reverse Acting, and Increase/Increase action is sometimes referred to as Direct
Acting.

This idea of Direct and Reverse action in a loop is not our only consideration. Every component in our control
loop has either Increase/Increase or Increase/Decrease action.

Every component in this loop, i.e. the transmitter, the controller, the final actuator, and the process, is either
direct or reverse acting.

As a general statement, first of all, we might say that for overall negative feedback action, we require that the
entire loop be Increase/Decrease or reverse acting.

Now let's consider a general component having I/I action; we get:

Note, disregarding relative amplitudes between input and output, we see that if we have an increasing or
decreasing step input we will get the corresponding increasing or decreasing step output.
Consider a general I/D component:

Note that if we put in an increasing or decreasing step, we get a resulting decreasing or increasing step output
from this Increase/Decrease Component.

Let's consider some of these components in series:

If we have several I/I components in series the resulting action is also I/I.

Consider:

Note however if we place one I/D component anywhere in the series string we see
Consider also:

Note here that with two I/D blocks in series there is an overall I/I action. It can further be shown that whenever
there is an even number of I/D blocks in series the overall effect is I/I; likewise, whenever there is an odd number
of I/D blocks, the overall action is I/D.
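This bookkeeping can be summarised in a few lines: represent I/I (direct) action as +1 and I/D (reverse) action as -1, and multiply the signs of the components in series.

# Overall action of components in series: the product of the signs. An even number of
# I/D (reverse) components gives I/I overall; an odd number gives I/D.

II, ID = +1, -1   # direct acting, reverse acting

def overall_action(components):
    sign = 1
    for action in components:
        sign *= action
    return "I/I (direct)" if sign > 0 else "I/D (reverse)"

if __name__ == "__main__":
    print(overall_action([II, II, II]))   # all direct             -> I/I
    print(overall_action([II, ID, II]))   # one reverse component  -> I/D
    print(overall_action([ID, II, ID]))   # two reverse components -> I/I

For negative feedback the complete loop, controller included, must multiply out to I/D (reverse) overall.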

Let's now investigate the action of each component to see if we can determine whether its action is I/I or I/D.

Transmitter:
For its effective use in a control loop, the signal input to a transmitter must be faithfully
reproduced and sent to the controller. For the majority of uses, an increasing input gives an increasing output;
therefore the transmitter is I/I. (Note: there are some special cases where a transmitter may be reverse acting,
but this is generally not the case; even if it were, we will see as we continue how this would be handled.)

Process: usually I/I or direct acting.

Let's consider two major types of processes and the effect of the process input on its output.

Mass flow process: e.g. a level tank

We can see that an increase in the input (Fi) causes an increase in the level (output).
Energy Flow Process

Note that with increasing fuel to gas jets, we get an increasing temperature, ∴I/I.

However, a process might also be I/D or reverse acting. e.g.

Here we have a reactor where some feed and catalyst are mixed together. In their chemical reaction, they
generate heat. If we consider the rising temperature from this generated heat as the process output and the cold
water flow to the reactor jacket as the process input, we have a reverse acting process, since, if we increase the
cold water flow, we get a decrease in reaction temperature.

Final Actuator: A final actuator may be almost anything which can control the flow of mass or energy into a
process. It may be a motor speed control, a shutter on a bin, etc. However, about 90% of all final actuators are
valves. We should spend some time considering the action of a control valve.

This is a manual control valve; as the position of the valve is increased, the flow through the valve also
increases. This is I/I action.
This is a valve with an air-to-open actuator. Since many valve actuators in process control applications may be
pneumatically activated, air-to-open means that with an increasing air signal to the actuator the stroke and
therefore the flow through the valve increases. This is considered I/I or direct acting also.

This is a valve with an air-to-close actuator. This means that with an increasing air signal to the actuator the
valve closes and flow decreases. This is considered to be I/D or reverse acting. Once the actuator for a
particular application is chosen, it usually remains fixed. Therefore, in the case of the final actuator being a
valve, it may have either I/I or I/D action depending on the actuator chosen for the valve.

Controller: The controller has an adjustment on it which will allow either I/I or I/D action. This is important as
we will now see.

Given the loop:

We assign actions as discussed previously to all components except the controller, assuming an air-to-open
actuator. We see that the valve, process, and transmitter are all direct acting; therefore, from our previous
discussion, we recognize that in order to get negative feedback action (I/D loop action) the controller must be
set to I/D.

Consider now an air-to-close actuator

Here we require I/I action in the controller to get negative feedback.

Gain
Once we ensure that we have correct feedback action in our loop, we will discover that the type of dynamic
response we obtain is solely dependent on loop gain and loop phase shift. Let's define gain first.
G = Δout / Δin

Where the gain, G is defined as a change in output amplitude divided by a change in input amplitude of a
device.

We will be able to define a gain for each component of our loop.

Phase Shift (φ)

Phase shift or phase angle, φ, is the difference in phase between the input signal and the output signal.
Depending on the device, this angle may be a phase lead (positive angle) or a phase lag (negative angle).
This is a phase lag, i.e. the output has a negative phase angle with respect to the input.

e.g.

This is a phase lead, where the output has a positive phase angle with respect to the input.

The positive phase angle, or phase lead is most commonly encountered in electronic circuits. With one or two
exceptions, we will be dealing entirely with phase lags, or negative phase angles.

Dynamic Response Based on the Gain Function

If every component in our control loop is defined by a particular gain, we can now define a gain for the entire
loop. This gain is called the loop gain (short for open loop gain); it is simply defined as the product of the
component gains.

i.e. GL = GT × GC × GV × GP

where GL = gain of the loop


GT = gain of the transmitter
GC = gain of the controller
GV = gain of the valve
GP = gain of the process
and if each of these gains is of the form

GX = G ∠φ

Then the loop gain may be rewritten as

GL = GL ∠φL = (GT ∠φT) × (GC ∠φC) × (GV ∠φV) × (GP ∠φP)

or

GL ∠φL = (GT GC GV GP) ∠(φT + φC + φV + φP)
We can say that the loop can have an oscillatory response if φL = n × (-360°), i.e. if the phase shift around the
loop is -360° or some integral multiple of -360°; then, if the gain GL is appropriate, we can have dynamic
responses as shown earlier. If the phase shift around the loop is not -360° or a multiple, then there is no
possibility of oscillation or oscillatory response. Unfortunately, every real process is capable of oscillatory
response. If this were not true, then the instability problem would not be a consideration. This is all
we'll say about phase shift, for every process we encounter will have sufficient phase shift to make our loop
potentially unstable, and our primary concern will be to maintain our loop gain, GL, at a value less than 1 for
stability.
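The sketch below evaluates this product for an assumed set of component magnitudes and phase angles; the values themselves are illustrative.

# Loop gain from component gains: multiply the magnitudes and add the phase angles.
# The component values (at some frequency of interest) are assumed for illustration;
# the controller entry includes the summer's -180 degrees.

components = {                    # (magnitude, phase shift in degrees)
    "transmitter": (1.0,  -10.0),
    "controller":  (2.0, -180.0),
    "valve":       (0.8,  -40.0),
    "process":     (0.5, -130.0),
}

GL, phiL = 1.0, 0.0
for name, (g, phi) in components.items():
    GL *= g
    phiL += phi

print(f"GL = {GL:.2f} at a loop phase shift of {phiL:.0f} degrees")
if abs(phiL) >= 360.0:            # the -360 degree condition for oscillatory response
    print("stable" if GL < 1.0 else "sustained oscillation or instability")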

Refer back to Fig. 4. In each case here φL = -360°, since each is oscillatory in nature. C1 is overdamped and its
period is long. Investigating the loop gains for each case, we will determine later that for C1, GL << 1 ∠-360°;
for C2, GL < 1 ∠-360°; for C3, GL = 0.5 ∠-360°; for C4, GL = 1 ∠-360°; and for C5, GL > 1 ∠-360°.

We can see here that as long as GL < 1 we will eventually reach a steady state response. If GL = 1 we have a
special case of uniform oscillation, and if GL > 1 we have instability. The rate of negative or positive damping,
shown in C3 and C5 respectively, depends on where GL is in relation to 1. If GL = 1- (i.e. just a bit smaller
than 1), the loop will eventually stabilize but may take a long period of time to do so. If GL = 1+, then if we wait
long enough, the loop will eventually become unstable.

Taking a qualitative look at the loop, we can see how loop gain and phase combine to give us the various
dynamic responses. It may be instructive to point out here the physical importance of phase shift. Phase shift is
the point in time where the response of the loop is reinforced; e.g. when bouncing a ball, the ball is hit at the
top of its bounce, when velocity = 0, and thus its downward motion is reinforced. If the ball were hit at another
time, i.e. when it was halfway up, the bouncing would eventually dampen out. Phase shift is this reinforcement
of the oscillatory response in the control loop, at the correct time. We see from the example that if the
oscillation were not reinforced, we wouldn't need to concern ourselves with loop instability.

e.g.
Shown above is a control loop with the action of each component indicated. If we assume GL = 1 ∠-360°,
regardless of the individual gains and phase shifts, then if an upset q causes a change in the value of the dynamic
variable c, with negative feedback the loop will try to move c in the opposite direction by the same amount.

e.g. Initially, we get a decrease in load q (1); this causes an increase in the dynamic variable (2), which causes an
increase in transmitter output (3), which causes a decrease in controller output (4), which causes a decrease in
flow through the valve (5), which causes a decrease in c (6). If we follow this sequence around for several cycles
we see that an oscillation develops. This is greatly simplified, however.

Let's consider now the case of GL = 0.5 ∠-360°.

Following through the scenario again, we see that a load upset q (1) causes an increase in c (2), causing an
increase in transmitter output (3), and so forth around the loop. In this case, however, each time the signal
propagates around the loop it comes back at an amplitude of 0.5 times what it was when it went out. This
eventually leads to a damping out of the measurement.

We can further see that if GL = 2 ∠-360° the signal would double each time it propagated around the loop and
would therefore soon become unstable.
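The per-cycle picture can be reproduced directly: each trip around a loop with -360 degrees of phase shift multiplies the disturbance amplitude by GL, as in the sketch below.

# Amplitude of a disturbance on successive trips around the loop: multiplied by GL
# each time, so GL < 1 damps out, GL = 1 oscillates uniformly, GL > 1 grows.

def amplitudes(loop_gain, start=1.0, cycles=6):
    a, out = start, []
    for _ in range(cycles):
        out.append(round(a, 3))
        a *= loop_gain
    return out

if __name__ == "__main__":
    for GL in (0.5, 1.0, 2.0):
        print(f"GL = {GL}: {amplitudes(GL)}")
    # GL = 0.5: 1.0, 0.5, 0.25, ...   damped
    # GL = 1.0: 1.0, 1.0, 1.0, ...    uniform oscillation
    # GL = 2.0: 1.0, 2.0, 4.0, ...    unstable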

Dynamic Response as a Function of Loop Phase Shift.

We discovered earlier that for oscillatory response to take place, the phase shift around the loop must be
-360°, and each component in the loop makes some phase contribution to this -360°. We might go further in
stating that if φL never reaches -360° there is no danger of oscillation, and our loop gain can be as great as we
desire, i.e. GL >> 1. Indeed, we will learn later that the greater the loop gain, the tighter the control. The only
limiting factor is that in a real process there is always the necessary φL for potential instability if GL > 1.
Let us for a moment consider the controller

Depending on the desired action, the control algorithm may vary from controller to controller. Phase
contribution by the control algorithm depends upon the algorithm employed.

Let's consider the summer for a moment. This portion of the controller is present regardless of the control
algorithm employed.

Considering the output of the summer to be the error signal e, and the input to be the measurement c we can
investigate the phase shift across the summer. Assume some set point, r, and I/D action. (e = r-c)

If the input c varies as shown, we see the error varying also. However, notice that the phase difference
between the error and the measurement c is 180°, i.e. there is a 180° phase lag between the input and output of
the summer. Since the summer will always be present, we can see that -180° of the necessary -360° for
oscillatory response has, unfortunately, already been supplied by the summer. This is true if the controller is
I/D; if it is I/I, this -180° is supplied by some other loop component.
Phase contributions by the remaining components, i.e. the final actuator, process, transmitter and control
algorithm, will each in turn contribute sufficient phase shift to supply us with a loop phase shift of -360° and,
under the right circumstances, cause the loop to become unstable.
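The 180 degree contribution of the summer can be seen numerically: for a measurement oscillating about the set point, the error e = r - c is simply the deviation inverted, as the short sketch below shows (the signal values are arbitrary).

import math

# The summer's 180 degree phase lag: for a sinusoidal measurement c about the set
# point r, the error e = r - c is the deviation of c from r inverted.

r = 50.0
for k in range(0, 360, 45):
    c = 50.0 + 10.0 * math.sin(math.radians(k))   # measurement oscillating about r
    e = r - c                                      # summer output (I/D action)
    print(f"angle {k:3d}  (c - r) = {c - r:+6.2f}   e = {e:+6.2f}")
# (c - r) and e are always equal and opposite: a 180 degree phase difference.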

Physical Characteristics of the Control Loop

We have considered the conditions of the control loop which will determine the type of response. We should
now investigate the characteristics of each component which will contribute to the gain function of each.

Every part of the control loop will have these characteristics to a greater or lesser degree, but they will all be
present to some extent.

Dead Time

Dead time is the property of a physical system by which the response to an applied forcing function is delayed
in its effect. It is the interval after the application of a force during which no response is observable. Dead time
is also referred to as pure-delay, transport lag, or distance-velocity lag.

e.g.

Suppose we had a device which was a pure dead time having a delay τDt. If we apply a step input of amplitude
A at time t1, there would be no response until time t2, when we would get a step of amplitude A out,
delayed by the dead time, τDt.

As an example of a dead time dominant process we have the following:


We have here a conveyor belt, l ft long, moving at some velocity, v. If the valve is opened by some amount to
increase the material on the belt, there will be a delay of

τDt = l / v    [ft ÷ (ft/min) = min]

minutes before the increased weight is sensed at the weight transmitter.

Another example:

If we had liquid flow of velocity v, through a pipe of length l, we can see an analogous situation exists. If we
were to follow a slug of liquid through the pipe at the instant the valve is opened, we would see it takes an
amount τ Dt for the slug to go from one end of the pipe to the other. The delay times in these two cases would
not be nearly the same, but the delay effect is similar.

Properties of Dead Time


Assume we have a pure dead time block:

and further suppose we supply a step input to this block. We see that if the magnitude of the input step was A,
the magnitude of the output step would also be A, except displaced in time by an amount τ_Dt.

The gain would therefore be:  G_ss = A / A = 1

and this is true for a pure dead time, for we see from the previous examples of dead time that if we had an
increase of 10 lbs. of material on the conveyor belt it would show up as an increase of 10 lbs. at the weight
sensor:

G_ss = 10 lb / 10 lb = 1

we see here that not only is the G ss = 1, but it is also dimensionless, since
the units of the input and output cancel.

Similarly we can show the same is true for the pipe.

In each of these cases we regard the process to be either the conveyor belt or the pipe, a pure dead time.
Suppose however, we consider the following:

i.e. the input to the valve is the input to the system and the output from the system is flow through the pipe. If
we increase the opening of the valve by some A% we would have an increase in flow of some B ft 3/sec.
delayed by an amount τ Dt where τ Dt is the time it takes to see an increase in flow upon a change in valve
position.
G_ss = Δout / Δin = (B ft³/sec) / (A %)

We see in this case that not only are there units but also the ratio of B to A is not necessarily 1.

It should be kept in mind that for a pure dead time, i.e. the pipe alone, G_ss = 1. But for the case where another
component is involved along with the dead time, G_ss, as we will see, serves to supply units to the gain function.
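
As a numerical illustration (a minimal Python sketch, not part of the original notes), a pure dead time behaves like a first-in, first-out buffer: whatever goes in comes out unchanged, τ_Dt later, so its steady state gain is 1 and dimensionless. The conveyor-belt numbers below are purely illustrative.

```python
from collections import deque

def simulate_dead_time(inputs, dead_time_steps):
    """Pure dead time: the output equals the input, delayed by dead_time_steps samples.
    The steady-state gain is 1 and dimensionless, as discussed above."""
    # Buffer pre-filled with the initial value so the output is defined from t = 0.
    buffer = deque([inputs[0]] * dead_time_steps, maxlen=dead_time_steps)
    outputs = []
    for u in inputs:
        outputs.append(buffer[0])   # oldest sample leaves the belt / pipe
        buffer.append(u)            # newest sample enters
    return outputs

# Step of amplitude A = 10 (e.g. 10 lb more material on the belt) applied at t = 5,
# with a dead time of 3 sample intervals.
u = [0.0] * 5 + [10.0] * 10
y = simulate_dead_time(u, dead_time_steps=3)
print(y)  # the 10 lb step appears 3 samples later, unchanged in size (G_ss = 10/10 = 1)
```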
Capacity

Capacity, for our interest, may be defined in one of two ways. As we shall see, both of these definitions are
equivalent.

Capacity is a volume where mass or energy are stored. This is a straightforward description of what capacity
is, but for our use it may not be complete. Capacity may also be defined as the opposition of a system to the
change of mass or energy stored in it. This definition is perhaps of greater importance in process control, for,
if we look back at dead time, we recall that the output was displaced in time only from the input, but then
instantaneously was identical to the input. The idea of capacity implies that perhaps this is not the case here
and that the output cannot change instantaneously. This proves to be true as we will see later. First, let's
consider a physical system which is capacity dominant.

e.g. a level tank

This is an example of a capacity, i.e. a volume where a mass of liquid is stored. Consider what would happen
to the level in the tank (h) if the inflow, Fi, were increased. We would certainly expect the level to also
increase, but if Fi were increased by 10% the level wouldn't instantaneously also increase by 10%. It would
eventually reach a level 10% higher, but the capacity of the tank is opposing the change in level and therefore
it will take time to reach a 10% level increase.

Another example of a capacity, but in this instance one which stores energy, is given by:

This is an oven which is storing heat to maintain a particular temperature, T. The gas jets create a flow of heat
in (Q in), and Q out is the escape of the heat to ambient.
We see that if we increase or decrease the position of the valve, the temperature would correspondingly
increase or decrease. It would not, however, change instantaneously with a change in valve position. The
capacity effect here is completely analogous to the level tank, only the form of that which is stored differs.

Capacity may be present not only in our process but in any one of our loop components as well.

In general, suppose we have a component which is characterized by a capacity:

Notice that if a step input is applied to the capacity, the output begins to change immediately but doesn't
reach its steady state value for a period of time. This is true of anything which is capacitive in nature. It takes
approximately 5τ or 5 time constants for the output of the capacity to reach its final, steady state value. A time
constant, τ, is defined as the amount of time it takes the output of the system to reach approximately 63.2% of
its steady state value. τ is a function of the physical system in general:

τ = RC

Where R is the resistance in the system and C is its capacity with the units of each being appropriate for the
system in question to make the time constant come out in time units, i.e. seconds, minutes, etc.

Some examples:

A. A mercury filled thermometer


i.e. the thermometer time constant, τ t , is the product of the resistance of the glass and the capacity of the
mercury. It is a measure of how fast the mercury will rise when subjected to a change in temperature.

B. A liquid level tank:

and this is a measure of how quickly the level will change with a change in inflow, Fi.

C. An electric circuit:

τ = RC and is a measure of how quickly the voltage across the capacitor will reach the battery voltage, E,
when the switch is closed.

This capacity phenomenon may be found to a greater or lesser extent in almost all components of our loop.
Note that it will always take 5τ for the output of a system to reach its steady state value regardless of the size
of the input. Also remember that regardless of the nature of the system, its output will always change along the
capacity curve, so that its output may always be predicted at any time before it reaches its final steady state
value.
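
The capacity (first-order lag) behaviour described above can be checked numerically. The following is a minimal sketch, assuming a simple Euler integration of τ·dy/dt + y = u with an illustrative time constant; it confirms the 63.2% point at one time constant and the approach to steady state after about 5τ.

```python
import math

def first_order_response(step_size, tau, dt, n_steps):
    """Euler simulation of a capacity (first-order lag): tau * dy/dt + y = u.
    The output moves along the familiar capacity curve toward its steady state."""
    y = 0.0
    trace = []
    for _ in range(n_steps):
        y += dt * (step_size - y) / tau
        trace.append(y)
    return trace

tau = 10.0                       # time constant, in minutes (illustrative)
trace = first_order_response(step_size=100.0, tau=tau, dt=0.01, n_steps=int(5 * tau / 0.01))

one_tau = trace[int(tau / 0.01) - 1]
five_tau = trace[-1]
print(f"after 1 tau : {one_tau:5.1f}%  (theory: {100 * (1 - math.exp(-1)):.1f}%)")
print(f"after 5 tau : {five_tau:5.1f}%  (theory: {100 * (1 - math.exp(-5)):.1f}%)")
```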

In this section we will investigate the various control modes used in automatic feedback control.

ON-OFF Control

The most rudimentary form of regulatory control is On-Off Control. It is primarily intended for use with final
actuators which are non-throttling in nature, i.e. some type of switch as opposed to a valve. An excellent
example of On-Off Control is the heating system in our homes. Whenever the temperature goes above the set
point, the heating plant is off. Whenever the temperature drops below the set point, the heating plant turns on.
i.e. we can say

m = 0% when c>r
m = 100% when c<r

The controller output = 0% or off, whenever the measurement exceeds the set point. The controller output =
100% or on, whenever the measurement is below the set point. The most useful type of process where On-Off
Control can be successfully applied is a large capacity as in our example above.

In this example our capacity is a room or even an entire house.

A large capacity is important due to the nature of the action of the controller.

Notice that @ t = 0, c < r and m = 100%. When the measurement, c, crosses the set point, r, m goes to 0%;
due to the dead time and capacity of the heating system and heat transfer to the ambient, when the controller
turns off, the temperature rises somewhat above the set point. When the temperature drops below the set point,
the controller turns on, and again, due to system dynamics, the temperature drops somewhat below the set point
before the effect of the heat is felt and c turns around and begins to rise. This action continues due to the
on-off nature of the controller.

Since the controller cannot throttle the final actuator, but only turn it on or off, the primary characteristic of
On-Off Control is that the measurement is always cycling about the set point. The rate at which the
measurement cycles and the deviation of the measurement from the set point are a function of the dead time and
capacity in the system: the longer the lag time, the slower the cycling will be, but also the greater the deviation
from set point. This can better be shown by choosing an On-Off Controller with a differential
gap or dead band. This is usually the way most On-Off Controllers are built. They have an adjustable
differential gap or dead band, inside which, no control action takes place. The intent of this differential gap is
to minimize cycling of the controller output and thus the measurement. If the lag in the system is large
enough, deviation from set point will still be tolerable.
Notice here, that the controller switches off when the measurement exits the dead band on the high side and
doesn't turn on again until the measurement is outside the dead band on the low side. The frequency of cycling
is reduced, but the deviation from the set point is increased. If the dead band is reduced the frequency of
cycling is increased but deviation from set point is decreased.

Typically, the dead band is adjusted as a percentage of the measurement span. e.g. Suppose we had a
temperature control system whose measurement range was from 20° to 120°. Then if we set our dead band at
10%, the dead band in degrees would be 0.1 × 100° = 10°, and if the set point were 75°, then the upper edge of
the dead band would be 75° + 5° = 80° and the lower edge of the dead band would be 75° − 5° = 70°, for a
dead band width of 80° − 70° = 10°.

We need to remember, however, that with an On-Off Controller we can’t eliminate cycling. With a large lag in
our process, the deviation from set point may not be perceptible, and if this is sufficient, an On-Off Controller
may be used. In order to totally eliminate cycling, however, we need to go to another control mode rather than
On-Off Control.
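
A rough numerical sketch of the dead-band trade-off just described is given below. The room model, time constant and temperatures are hypothetical; the point is only that widening the dead band reduces the number of switch actions while letting the temperature wander further from the set point.

```python
def on_off_with_deadband(setpoint, deadband, hours=6.0, dt=0.01):
    """On-Off control of a simple heated room (first-order model, illustrative numbers).
    The heater switches off above the upper edge of the dead band and on below the
    lower edge; inside the band no control action takes place."""
    temp = 60.0            # starting temperature, deg F
    ambient = 40.0
    tau = 0.5              # room time constant, hours
    heater_rise = 40.0     # steady-state rise above ambient with the heater full on
    heater_on = True
    upper = setpoint + deadband / 2.0
    lower = setpoint - deadband / 2.0
    switches = 0
    t = 0.0
    while t < hours:
        if heater_on and temp > upper:
            heater_on, switches = False, switches + 1
        elif not heater_on and temp < lower:
            heater_on, switches = True, switches + 1
        target = ambient + (heater_rise if heater_on else 0.0)
        temp += dt * (target - temp) / tau     # the capacity opposes the change
        t += dt
    return switches

# A wider dead band cycles less often but lets the temperature deviate further.
for db in (1.0, 5.0, 10.0):
    print(f"dead band {db:4.1f} deg -> {on_off_with_deadband(70.0, db)} switch actions in 6 h")
```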

Proportional Control

The proportional controller is the minimum controller configuration which will tend to damp out oscillations
in the loop; that is its primary job. As we will see, it will stop the measurement from cycling, but not
necessarily return it to the set point.

e.g. Suppose we have a liquid level we desire to control only to the extent that we don't want the tank to
overflow or run dry. If Fi = Fo then the level, as seen in the sight glass, remains constant. Suppose we watch
the sight glass for any changes in level. If the outflow, Fo, increases such that

Fo > Fi

then the level will begin to drop. In order to stop the level from dropping, we need to increase Fi, such that Fi
= Fo. As the level drops, we increase Fi, watching the sight glass, all the while increasing the inflow. When Fi
= Fo the level stops dropping, but it is no longer at the initial level; it has dropped. The amount it dropped
depends on how much we opened the inflow valve to make Fi = Fo. A similar situation would occur if Fo <
Fi, only in this case the level would rise until we readjusted the inflow to equal the outflow. What has just
been described is proportional action. It is exactly what a Proportional Controller would do if it were
connected to the liquid level tank.

In general we can say that the output of a proportional controller is proportional to the error (i.e. the deviation
of the measurement from the set point):

m ∝ e

or removing the proportionality

m = Ke

where K is called the controller gain.

It should be noticed that the proportional controller is nothing but an amplifier. i.e. Its output is the error
multiplied by a gain, K.
e.g. Let's apply this controller to our process:

Suppose now we were to place the controller in manual and manually adjust the level in the tank to equal to
the set point. With Fi = Fo the level should stay at the set point. Also, suppose Fo = 50% = Fi and c = r = 50%.
Suppose also we adjust K = 2. Now if the controller is placed in auto, what will its output be?

Well, at the instant the controller is placed in auto, the error = 0 since c = r and therefore the controller output
would be
m = 2(50-50) = 0

If the controller output is 0, what will the level do?

It will begin to go down. How can we stop it from going down? We need to make Fi = Fo = 50% again. How
can we do this? Well, assuming we have a linear relationship between controller output and inflow, if we want
Fi = 50%, then m = 50%

and since m = Ke = 2e = 2(r-c) = 2(50-c)

we require that for m = 50%, e = 25%

then m = 2(50-25) = 50%

i.e. The controller output will go to 50% when the measurement drops by 25%, creating a 25% error, and the
Fi = Fo = 50%.

So, for this case, in order to stop the level from dropping, it had to drop by 25% to create a large enough error
so the controller could make Fi = Fo.
Now suppose we adjust K = 4 so now

m = 4e

and the error now would only need to be +12.5% for

m = 4(12.5) = 50%

It seems that the larger we make the controller gain, the smaller will be the error, so if K is very large, the
error will be very small.

The fallacy in our thinking here is that as we make K large to make the error small, the gain of the controller,
K, is multiplied in with the gains of the other components, and if K becomes large enough, the loop gain, G_L,
will be greater than 1 and the loop will become unstable. So, because of this, we can't just arbitrarily increase K
to minimize the error, e.

There is, however, another way, under certain circumstances where we might be able to make the error zero.
Suppose we add another term to our control equation. Let's call this term the bias.

i.e. m = Ke + b

Where b is the bias and it is simply defined as the output of the controller when the error is zero. Suppose K =
2 and we manually adjust c = r = 50% and Fi = Fo = 50%. Also let's adjust b = 50%.

Now when we put the controller in auto what will happen?

Well, since c = r, then e = 0 and 2(e) = 2(0) = 0; there will be no proportional contribution to the output and the
output, m = b = 50%, and since Fo = 50% and m = Fi = 50%, the level will stay right where it is. Where
previously when b = 0, we ended up with e = 25%, now with b = 50%, we have e = 0. In general, if the bias
equals the load (b = Fo in this case), the error will always be zero. Suppose now Fo goes to 75%; in order to
stop the level from dropping, m = Fi = 75%. Now since

m = 2(e) + 50% = 2(50-c) + 50%

c must drop to 37.5%, then

m = 2(50-37.5) +50 = 2(12.5) + 50 = 75%

and the level would stop dropping @ c = 37.5%.

This would also work if Fo was decreased. Suppose Fo = 25% then the level would rise until m = Fi = 25%
this would require

m = 2(50-62.5) + 50 = 2(-12.5) + 50 = 25%

and the level would stop rising @ c = 62.5%.


Note that we could make the error smaller by increasing K but we need to be careful that we don't increase K
so much that it makes the loop unstable.
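
The arithmetic of the last few paragraphs can be collected into a short sketch (illustrative only, using the same K = 2, b = 50% example): solving m = K(r − c) + b together with m = Fi = Fo gives the level at which the tank rebalances, and hence the sustained offset for each load.

```python
def p_only_output(c, r=50.0, K=2.0, b=50.0):
    """Proportional-only controller, increase/decrease action: m = K*(r - c) + b."""
    return K * (r - c) + b

def level_at_balance(load, r=50.0, K=2.0, b=50.0):
    """Measurement at which the level stops moving, i.e. where m = Fi = Fo = load.
    Solving load = K*(r - c) + b for c gives the sustained offset directly."""
    return r - (load - b) / K

for load in (50.0, 75.0, 25.0):
    c = level_at_balance(load)
    print(f"Fo = {load:5.1f}%  ->  level settles at c = {c:5.1f}%, "
          f"controller output m = {p_only_output(c):5.1f}%, offset = {50.0 - c:+5.1f}%")
```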

One point we ought to notice is how fast the controller output is changing to stop the measurement.

It is changing as fast as the error is changing. The error is changing as a function of the rate of change of the
measurement, which is a function of the size of the tank, among other things.

Therefore if we made K such a value that the loop gain was equal to 1, the loop would oscillate at a period
which was a function of the natural characteristics of the process. This is called the natural period. The natural
period is defined as the period of oscillation under proportional only control.

If we were to adjust K such that the loop gain was equal to .5 and then made a change in Fo, i.e.

We would see the measurement quarter amplitude damp out with a period approximately equal to the natural
period and stop with an offset which was a function of both the gain K and the bias. This is the type of
response we can expect from a loop under proportional control.

Now, let's look at the equation for a P-only controller again

m = Ke + b

recall that the gain of a device is defined as:

G = Δoutput / Δinput
Now looking at the block diagram of a proportional controller:

recall also that there is a 1:1 relationship between c and e, with only a −180° phase difference if the controller is in
I/D mode.

So we can say the gain of the proportional controller is

G = K = Δm / Δe
i.e. the gain is the ratio of the change in controller-output to a change in error, but we can also say:

K = Δm / Δc    since Δe = Δc

or the gain is also defined as a change in controller output to a change in measurement.

Now, assume that we have a linear relationship between c and m, i.e.:


so we can say  K = Δm / Δc = 100% / Δc

i.e. the gain K is set by Δc, the amount that the measurement must change to make the controller output change by 100%.

As you may recall, the gain of a transmitter is given by

G = Δout / Δin = 100% / span

i.e. the input of the transmitter must change by the amount of the input span
(span = upper range value − lower range value) to make the transmitter output change by 100%.

In the case of the controller we have a similar situation, but instead of calling ∆c the span as in the case of the
transmitter, we call it the proportional band. In other words we can define the proportional band as that change
in measurement which will cause the output of the controller to change by 100%

i.e.  K = Δm / Δc = 100% / PB%

e.g. if we adjusted our PB setting on our controller to PB = 40%, this means that as the output of the
transmitter (which is the measurement to the controller) changes over 40% of its output span, the output of the
controller would change by 100%, or the gain, K, would be:

K = 100% / 40% = 2.5

Some manufacturers have a gain adjustment, some have a proportional band adjustment; remember only that

K ∝ 1 / PB

or as the PB gets larger, the gain gets smaller and vice versa.

We can now write the proportional controller equation as:

 100 
m= (e ) + b e = r – c (I/D)
 PB 
e = c – r (I/I)

We can also solve this equation for the error and this will give us an idea of where we might apply a
proportional controller.

e = (PB/100)·(m − b)
This equation gives us the error as a function of PB, m, and b.

In order to make the error = 0, we can:

1. set PB = 0 (K = ∞)
2. set b = m

Either one of these steps will make the above equation go to zero.

The first step, however, we saw earlier was not plausible, since as PB → 0, K → ∞ and the loop becomes
unstable. Furthermore, it's not possible to set PB = 0 on many controllers; the minimum setting is usually 2% -
5%. However, consider that if PB were very small (e.g. PB = 2%) the error would certainly be minimized
under these conditions if the loop was stable.

Consider: if

G_V × G_P × G_T < 1/50

then the loop would be stable even with a controller gain of K = 50 (PB = 2%), since we require

G_L < 1 for stability

i.e. If we had a process which had a very low gain, we could have a higher gain (smaller PB) in our controller
and thus minimize our error. One type of process where this is true is a very large capacity. e.g. A large liquid
level tank. Due to its low gain, we can successfully use a P - only controller.

Also, some controllers have an adjustable bias. If we were to adjust b = m in the equation

e = (PB/100)·(m − b)
then the error would go to zero. This would certainly be possible to do on any process, but preferably one
which has few load upsets, since we would have to readjust the bias each time there was a new load upset
(recall that there would be no error as long as the bias was equal to the load). So if we had a process with
infrequent load upsets, which allowed us to readjust the bias for zero error, we would be able to make good
use of a P - only controller.

In general, a Proportional Controller gives us fast response ( τ n ) as compared to other controllers we will
investigate but a sustained error is its primary characteristic.

If we desire to eliminate any error which might exist, we need to investigate a different control mode.

Integral Control

The action of the integral control mode is to remove any error which may exist. i.e. As long as there is an error
present, the output of this controller continues to move in a direction to eliminate this error. The equation for
an Integral Controller is:

1
M =   ∫ e dt + m o
I

Where mo is the controller output before integration on a given error begins; mo = 0 when power is first
applied to the controller.

We should now investigate the action of the above algorithm for a given error, (assume I/D action):
Suppose we had our controller sitting by itself on a bench in a test setup:

As shown on the previous page, if the measurement were increased in a step-wise fashion @ t = t1 and then
returned to the set point @ t = t2, we would see the output ramp over the interval t1 < t < t2, since it is in effect
integrating the step input. When the measurement is returned to r @ t = t2, the output would hold the value it
had integrated to, since it would think that was the correct value to bring the measurement to the set point.

The rate at which the controller output ramps is a function of 2 things; the Integral time, I, and the magnitude
of the error. Realize that the controller output, m, would ramp in the opposite direction if the measurement had
been moved below the set point.

The Integral time, I, is defined as the amount of time it takes the controller output to change the amount of the
error, i.e. the amount of time required to "repeat" the error. Thus I is sometimes measured in "minutes per
repeat". However, notice that the equation for controller output is

1
M =   ∫ e dt
I

Because of this, some manufacturers measure I in "repeats/minute", since 1/I = 1/(min/rep) = rep/min.

Because of this reciprocal relationship we should recognize that if our controller is adjustable in min/rep, then
increasing the adjustment gives us less integral action, whereas in rep/min, increasing the number gives us
greater integral action.
e.g. for I in min/rep

where I1 < I 2 < I 3

Usually, we will treat I as a gain adjustment. We just have to remember whether increasing the value of I will
give us greater gain (rep/min) or less gain (min/rep).
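
The bench test described above can be sketched numerically as follows (a minimal illustration; the 10% measurement step, I = 2 min/rep and the scan interval are all assumed values). The output ramps at e/I percent per minute while the error exists, repeats the error once every I minutes, and holds its value when the measurement returns to the set point.

```python
def integral_controller_bench_test(I=2.0, dt=0.01):
    """Bench test of an I-only controller, m = (1/I) * integral(e dt) + m0, I/D action.
    The measurement is stepped 10% below the set point at t = 1 min and returned at
    t = 5 min; the output ramps while the error exists and then holds its value."""
    r, m = 50.0, 0.0
    samples = []
    t = 0.0
    while t <= 8.0:
        c = 40.0 if 1.0 <= t < 5.0 else 50.0    # step the measurement on the bench
        e = r - c                               # I/D action
        m += (1.0 / I) * e * dt                 # discrete form of (1/I) * integral(e dt)
        samples.append((round(t, 2), c, round(m, 2)))
        t += dt
    return samples

for t, c, m in integral_controller_bench_test()[::100]:   # print roughly once per minute
    print(f"t = {t:4.1f} min   c = {c:4.1f}%   m = {m:6.2f}%")
```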

Another consideration is that for a fixed I, the rate of change of m will also depend on the magnitude of e:
So we see that the rate of change of controller output is a function of both I and e. When adjusting an Integral
Controller for optimum response, we adjust I in very much the same way we adjusted the PB for the
Proportional Controller.

We should now, however, consider the difference in response time of the integral and proportional controller.
We mentioned earlier that the output of the P-only controller changed as quickly as the measurement changed:

So that if the measurement changes as a step, the controller output would also change as a step in an amount
depending on the gain, K = 100/PB. Recall, however, that with a step input to an Integral Controller, the output
doesn't change instantaneously but at a rate which is affected by I and e, as we have seen.

Putting these 2 types of controllers in a loop to control a process, provides different types of responses. While
the Integral Controller will provide the mechanism to return the measurement back to the set point, due to the
additional lag introduced by this mechanism, the overall response of this loop will be much slower than that
under proportional control. So the trade off we make here is that if we require a return to set point and use the
Integral Controller, we must be satisfied with a slow period of response.
The period of response for the measurement under integral control ( C I ) can be about 10 τ n .

If we require a return to set point (i.e. no sustained error) and would like a faster response time, we need to
investigate a control mode which is composed of both proportional and integral action.
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 7

Selective Control Systems


SELECTIVE CONTROL SYSTEMS

Frequently a situation is encountered where 2 or more variables must not be allowed to pass specified limits
for reasons of economy, efficiency or safety. If the number of controlled variables is greater than the number
of manipulated variables, whichever of the measurements is most in need of control must logically be selected
for control. In this section we will investigate specific examples of such selectors. It must be kept in mind,
however, that these are only a few examples and by no means limit the use of these auto selectors.

The two basic building blocks for selector systems are the high selector and the low selector.
i.e.

This high selector will pass the highest value of n inputs to the output while dead ending all other inputs
(comparison is within ± 5%).

This low selector will choose the lowest of n inputs to pass through to the output while dead ending all other
inputs.

These selectors are available in both electronic and pneumatic versions and operate similarly. The only
difference being the number of inputs a particular hardware implementation may be able to handle.

By using combinations of these basic building blocks we may build other types of selectors

e.g. the median selector:


This selector will pass through to the output, the signal which falls between the highest and lowest input.
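
In software the two building blocks reduce to max and min operations, and the median selector can be built from them exactly as in the diagram. The sketch below is illustrative only; real implementations are analog relays or function blocks, but the logic is the same.

```python
def high_selector(*signals):
    """Passes the highest of its inputs to the output, dead-ending the others."""
    return max(signals)

def low_selector(*signals):
    """Passes the lowest of its inputs to the output, dead-ending the others."""
    return min(signals)

def median_selector(a, b, c):
    """Median of three signals built only from high and low selectors: the lowest of
    the three pairwise highs is the median, so the highest and lowest inputs are
    effectively discarded."""
    return low_selector(high_selector(a, b),
                        high_selector(b, c),
                        high_selector(a, c))

print(high_selector(42.0, 55.0, 47.0))    # 55.0
print(low_selector(42.0, 55.0, 47.0))     # 42.0
print(median_selector(42.0, 55.0, 47.0))  # 47.0, so a failed-high or failed-low analyzer is ignored
```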

Let's investigate some typical applications of these selectors in 4 areas:

1. Protection of equipment

2. Auctioneering (choosing from several signals.)

3. Redundant instrumentation (used commonly with process analytical equipment.)

4. Artificial measurements (establishing artificial limits.)

Protection of Equipment
For this pump system we require:

1. Surge Protection - When Pin drops below a certain minimum value, close the valve.

2. Overtemp - When the temperature of the pump exceeds a certain maximum temperature, close the valve.

3. Excessive downstream pressure - When P exceeds a certain maximum P, close the valve (assume Po < P shut off).

Here we have:
- multiple measurements
- multiple controllers
- 1 manipulated variable

Case 1:
Surge protection; as Pin begins to drop, the output m1 will also decrease (note Increase/Increase action on the
pressure controller). m1 will be selected by the first and second low selector and will be passed through as the
manipulated variable m to close the valve.

Cases 2&3:
As either the pump temperature or outlet pressure begins to increase the outputs m 2 and m 3 begin to decrease
(note Increase/Decrease action on both of these controllers.) The smallest value will be chosen and passed thru
to manipulate the valve.

In general the smallest output from any of the 3 controllers will always be operating the valve. The external
reset line is implemented so that when 2 of the controllers are not selected their outputs are dead ended and
there must be some provision to prevent reset windup. The external reset line, which is connected to m, will
always ensure that the unselected controllers won't wind up.

Auctioneering

We desire to protect against the highest temperature sensed by one of 4 temperature transmitters. We have
here:

-1 controller
- multiple transmitters
- 1 final actuator
The highest temperature will be selected by the high selectors and will be passed through as the measurement to
control the fuel to the oven. Note: no danger of reset windup here because there are no dead-ended controllers.

Redundant Instrumentation - Possible plant protection.

If we have an exothermic reactor where too much catalyst might prove disastrous, we might implement the
following failsafe scheme with 2 analytic transmitters (analyzers) and a high selector.

This is a failsafe installation with the measurement from the highest reading analyzer being utilized by the
analytic controller to control catalyst flow.

1. Down scale failure of analyzer - If one analyzer fails to zero, the other will be selected to control catalyst
input - Production not interrupted.

2. Upscale failure of analyzer - If one analyzer fails to full scale it will get selected and the catalyst shut off.
Production stopped, but a possible hazardous situation avoided.

Here is an alternate method using 3 analyzers and a median selector which will keep the process going
regardless of the mode of failure of one of the analyzers.

The measurement to the controller will always be the median transmitter output. If one of the analyzers fails,
either up or downscale, the selector will still choose the median value and it won't be the failed unit.

ARTIFICIAL MEASUREMENTS - Artificial limits chosen as a possible operating condition.

e.g. Suppose we have a distillation column whose FEED VS. STEAM characteristic is as shown:
Suppose further that we felt that if the FEED rate dropped to zero we didn't want the steam flow also to go to 0%, but
to some minimum, e.g. 10%. Further, at maximum feed we desire to hold our steam flow to 90% as a
high limit. We have thus created a non-linear operating characteristic as shown above. Let's see how this might
possibly be implemented:

Assume our feed was within the safe range. The signal from the multiplier would be passed through the high
selector (since it was higher than the low limit) and through the low selector (since it was lower than the high
limit) to the set point input of the steam flow loop. If the feed signal fell below the low limit or above the high
limit, the proper limit would be selected and that limit would be a constant high or low signal to the steam flow
loop.

An alternative to this setup is that of replacing the high and low selectors with a median selector, because this
is what we have now. This perhaps would require more hardware if this were an analog loop, so it wouldn't be
practical, but with computer control there may already be a software median selector available which could be
put to use.
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 8

Proportional Plus Integral Control


Proportional Plus Integral Control

Using a P+I Controller will give us a return to set point at a response period which is longer than that of a P-only
Controller but much shorter than that of an I-only Controller.

The response period of a measurement under P+I Control (C_P+I) is approximately 50% longer than the
P-only response period (about 1.5 τ_n). Because this response is much faster than I-only and only somewhat longer
than P-only control, the majority of controllers found in the plant will be P+I Controllers.

The equation for a P+I Controller is given by:

m = (100/PB)·[ e + (1/I)·∫ e dt ]  =  (100/PB)·(e) + (100/PB)·(1/I)·∫ e dt
Notice here that the proportional gain has an effect not only on the error, but also on the integral action as
well.

Compare this above equation to that for a Proportional Controller:

 100 
m= (e ) + b
 PB 

And we recognize that the bias term in the Proportional Controller has been replaced by the integral term in
the P+I Controller. i.e. in fact

 100  1 
b=  ∫ e dt 
 PB  I 

And recall that one way of eliminating offset in the proportional controller was to manually adjust the bias to
equal the load. In this case, the integral action is providing us with a bias which is automatically being
adjusted to eliminate any error which exists.
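
A discrete sketch of the P+I algorithm makes the "automatically adjusted bias" idea concrete. The controller below follows the equation given above; the self-regulating process it is attached to, and all of the numbers, are assumptions chosen only to show the measurement being pulled back to the set point after a load change.

```python
class PIController:
    """Discrete P+I controller, m = (100/PB)*(e + (1/I)*integral(e dt)), I/D action.
    The integral sum plays the role of the bias: it keeps re-adjusting itself until
    the error is removed.  All numbers here are illustrative."""

    def __init__(self, PB=200.0, I=1.0, setpoint=50.0):
        self.K = 100.0 / PB          # proportional gain
        self.I = I                   # integral time, minutes per repeat
        self.r = setpoint
        self.integral = 0.0          # running value of integral(e dt)

    def update(self, c, dt):
        e = self.r - c                          # I/D action
        self.integral += e * dt
        return self.K * (e + self.integral / self.I)

# Hypothetical self-regulating process: at steady state c = m - load + 50.
ctl = PIController()
ctl.integral = 100.0        # pre-set so the loop starts balanced at load = 50%
c, load, dt, tau = 50.0, 75.0, 0.01, 0.5        # the load steps from 50% to 75% at t = 0
for _ in range(3000):                            # 30 minutes of simulated time
    m = ctl.update(c, dt)
    c += dt * ((m - load + 50.0) - c) / tau      # first-order process, tau = 0.5 min
print(f"measurement returns to c = {c:.1f}% while m settles at the new load, {m:.1f}%")
```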

Let's now investigate why the P+I Controller is faster in response than the I-only controller. As it turns out, it is
due to the addition of the proportional action:

Where previously it took I minutes for the output of the I-only controller to repeat the error, as we see above,
due to proportional action we immediately get a proportional step = (100/PB)·e and then the integral action;
due to the proportional effect on the integral time, we define the integral time, I, here to be the amount of time it
takes for the integral portion of the controller to repeat the proportional action. When the measurement is
returned to the set point, we lose the proportional action (since e = 0) and the controller output is held solely
by the integral circuit.

We can treat both PB and I simply as gains which vary the overall controller gain and are used to adjust the
controller gain to give us the P+I loop response. Note that in the equation for the P+I Controller,

 100   100  1 
m= (e ) +   ∫ e dt 
 PB   PB  I 

100
We have a sum of 2 component gains, i.e. the proportional gain, K = , and the effective integral gain,
PB
 100  1 
G=    .
 PB  I 

The overall controller gain is the sum of these 2 gains. It is not a straight arithmetic sum, however: there is a
phase difference between the proportional and integral action, and therefore the gain sum is a vector sum, where
φ_P+I is the phase angle of the controller which contributes to the overall loop phase shift.

In adjusting a controller to give us a quarter amplitude damped loop response, we want to select a value of PB
and I which will give us a suitable G P +I for the desired response.
Looking at the vector diagram we can see that almost any values of PB and I will give us a useable G P +I. If
we arbitrarily choose a PB, we can then select an I which will make G P +I sufficient to give us quarter
amplitude damping, but at varying phase angles, φ P +I.

The important thing to remember here is that as the phase angle, φ P +I changes, while the damping may be
made to remain constant, the period of response also varies.

i.e. Suppose we set I = ∞; this would make G_I = 0, regardless of the setting of PB, and then K = G_P+I and we
would have a proportional controller with φ P +I = 0o. In effect our response period would be that of a P-only
controller and equal to τ n with a sustained error. While we can't set I = ∞, we could set I to a very large number
in min/rep and therefore minimize integral action.

On the other hand, suppose we set I very small, then G P +I would approach G I , since G I >> K and φ P +I
would approach –90o. The control action in the loop would now be that of Integral only control, i.e. a return to
set point with a long response period.

These are the 2 extremes. Somewhere in between 0° and −90° there is a phase angle which will give us a return
to set point with a period of response equal to 1.5 τ_n. This angle is about −30°. We will say more about this
when we consider tuning controllers.

In general, if we start at φ P +I = 0o or proportional action, as we add more integral action, the measurement
begins returning to the set point. We only want enough integral gain to get us back to the set point, since a
phase angle φ_P+I greater than this will only serve to slow down our response period. Remember also that as we
add more integral gain by reducing I in min/rep we need to compensate for this added gain by reducing our
proportional gain by widening our Proportional Band.

We ought to remember that the value of φ P +I has an effect on our response period while G P +I has an effect
on our damping.

Recognize also that adjusting I will have an effect on G_I above and thus affect both G_P+I and φ_P+I, and these
in turn affect both damping and period of response. Adjusting PB affects both G_I and K equally; thus PB only
has an effect on G_P+I, which affects the damping and not the period of response. We will consider this again
when talking about controller tuning.

Although the period of response of a loop under P+I Control is only 50% longer than a loop under P-only
Control, this may in fact be too long if τ n is 3 or 4 hours. In order to increase the speed of response (decrease
response period) of our loop we need to investigate another control mode.

Derivative Action

While we may sometimes run into an I-only Controller, it is not very often used due to the large increase in
response period it produces. A derivative-only controller doesn't even exist. Its minimum configuration is
along with proportional action, but before we go further we should investigate what derivative action is.
Here we have a derivative block. Its output is some gain factor, D (called the derivative time), multiplied by the
derivative, or rate of change, of the input.

Let's investigate how the output from this derivative block would look for different inputs and a fixed value of
D.

Note that as the rate of change of the input gets greater, the output gets larger. Since the slope of each of
those input signals is constant, the output for each constant rate input will be constant. Notice, however, what
happens as the slope approaches infinity (i.e. a step which rises in zero time): we theoretically would get a
pulse out that was 0 time long and of infinite amplitude. We wouldn't ever physically have an output like this,
since a perfect step with zero rise time is physically unrealizable, but we might get a signal which has short
rise and fall times, and therefore the output from our derivative block would be a series of positive and
negative pulses trying to drive the final actuator, and this would result in accelerated wear on the valve.

e.g. Consider a temperature measurement with small amplitude, fast rise and fall time noise riding on it.
We might think that since the noise is of such a small amplitude in comparison with the temperature signal, it
wouldn't even be noticed by a controller. This is true if the controller doesn't have derivative action in it; then
there would be no problem. However, if the controller contains derivative action, we must
remember that derivative doesn't look at the magnitude of the measurement but rather at the rate of change of
the measurement. Since the rise and fall time of the noise is very short, the temperature signal would be
totally masked by the noise into the derivative circuit of the controller, and the controller output would be a
series of large amplitude pulses, totally masking any output contributed by the other control modes.
Fortunately, in a case such as this, the noise is either easily filtered out or may be eliminated if the installation
of the primary sensor is incorrect and is modified.

There are cases, however, where noise is inherent to the measurement and the rise and fall times of the noise are
of the same magnitude as those of the measurement itself. In a case such as this, filtering would only serve to
degrade the accuracy of the measurement as well as filter the noise. A good example of a situation like this is
flow control. Flow measurement by its very nature is noisy; therefore whenever we encounter a noisy
measurement such as this, we cannot usefully apply derivative action, and it is recommended that we don't
attempt to apply a controller containing derivative action to this situation. We will see later that in many cases
where we aren't advised to use derivative action in a loop, it really wouldn't help us if we could apply it
anyway.

Let's now investigate the minimum configuration controller containing derivative action. This is the
Proportional plus Derivative controller. It is not very often used (primarily applied in batch pH control loops),
but it will help us to define the derivative time, D, mentioned earlier.

The equation for the P+D controller is given as:

m = (100/PB)·( e + D·de/dt ) + b

Notice in this equation there is a bias. There will be a bias in any control algorithm which doesn't have integral
action, since integral action is in effect an automatically adjusting bias. Note also that the proportional gain
acts on the error as well as on the derivative time, D, in a very similar manner to that seen in the P+I controller.

Let’s consider this controller and what its output would look like if we applied some test signals to it while it
was on the bench:
m_P is the proportional portion of the output, m_D is the derivative portion. The measurement changes with a
fixed rate of change; therefore the derivative portion of the output is constant, depending on the rate of change
of measurement, dc/dt, and the derivative time, D, as well as the proportional gain. The proportional output is
also a ramp whose slope is a function of the proportional gain.

Now let's superimpose m_P and m_D to get the actual output due to both modes.
Notice that for a ramp input it takes some period of time for the proportional action to reach the amount of the
derivative action. This period of time is called the derivative time, D, measured in minutes. Increasing the
derivative time, D, increases m D , so because of this, we can simply think of D as a gain factor.

Another consideration is that in the equation for the P+D controller:

m = (100/PB)·( e + D·de/dt ) + b

Notice that the derivative action is on the error and since

e = r - c for I/D,

de/dt may be a function of both dr/dt and dc/dt,

i.e.  de/dt = dr/dt − dc/dt

If we get a load upset to our process, this in turn causes the measurement to change at some rate, dc/dt. This in
turn gives us de/dt = −dc/dt, since we have no set point change, so dr/dt = 0. Now if we make a set point change of even
a few percent and the set point is changed quickly, then dr/dt can become very large and a large pulse could be
generated at the output of the controller. To overcome this possible problem many controllers don't recognize
a set point change;

i.e.  de/dt = dr/dt − dc/dt

so    de/dt = −dc/dt

and   m = (100/PB)·( e − D·dc/dt ) + b

That is, we will get no derivative action on a set point change, only proportional action. On a load upset we
will get both proportional and derivative action. This is the way many controllers having derivative action
work.
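
The size of the problem is easy to see numerically. The sketch below (illustrative values, ignoring the 100/PB factor) compares the derivative contribution for a quick 5% set point change when the derivative acts on the error with the contribution when it acts on the measurement only.

```python
D, dt = 0.5, 0.01              # derivative time (min) and one controller scan (min)
r_old, r_new, c = 50.0, 55.0, 50.0          # a quick 5% set point change; c unchanged

# Derivative acting on the error e = r - c: the set point step shows up in de/dt.
de_dt = ((r_new - c) - (r_old - c)) / dt
print(f"derivative on error      : contribution = {D * de_dt:6.1f}%  (a large output pulse)")

# Derivative acting on the measurement only: dc/dt is unaffected by the set point step.
dc_dt = (c - c) / dt
print(f"derivative on measurement: contribution = {D * dc_dt:6.1f}%  (no pulse at all)")
```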

Now, let's compare the response of a control loop to a load upset both under P-only and P+D Control.
The response of the measurement under P+D control, ( C P +D), is faster and ends up with a smaller offset than
the loop under P-only control. This faster response is due to the nature of the derivative action.

We can also again add the proportional and derivative gains together to get the total gain of the controller
similarly to the way we did in the P+I case. Once again it is a vector sum except that the derivative gain is at a
+90o phase angle from the proportional gain.

As earlier, we can see that the derivative time, D, has an effect on both the damping and the response period, since
it in turn affects the resultant vector, G_P+D, and the phase angle, φ_P+D, while proportional band has an
effect only on damping since it affects only the length of the resultant.

As the phase angle, φ_P+D, gets larger, the response period gets shorter; however, as we make the gain more
and more derivative in nature (i.e. larger φ_P+D), the controller becomes hypersensitive to noise generated in
its own circuits and control is lost.
Remember also that derivative action is made up of the derivative time, D, and dc/dt.

In the P+I Controller, if we wanted to minimize integral action we would set I to a large number of
minutes/repeat. This would not make the integral gain vector, G_I, go to zero, but it would be a very small value
and the controller would be essentially P-only. In the P+D Controller, if we set D to a very small value (we
can't set it to zero), there is a possibility that we might still get a sizable derivative contribution if we get a
noisy input (so that dc/dt is large).

On electronic controllers we can turn derivative action off and derivative is effectively eliminated. In a
pneumatic controller we can't turn the derivative off, only down to a certain minimum value (approximately .01
minutes), so if we attempted to use this controller on a flow loop we could still get considerable derivative
action due to the noisy flow measurement. It is therefore important, when applying a pneumatic controller to a
noisy loop such as a flow loop, to make certain the controller contains no derivative circuitry.

The reason we are interested in derivative action is so that we can combine it with proportional and integral
action to get a 3 mode, PID controller.

Proportional Integral Derivative Control

The PID 3 mode controller is used to provide us with a response period the same as with proportional control
but with a return to set point. The derivative action adds the additional speed required to overcome the
slowing down of the response resulting when integral action was added to remove the offset caused by
proportional control.

The equation for the 3 mode PID controller is given by:

m = (100/PB)·( e + (1/I)·∫ e dt − D·dc/dt )

This is a combination of the 3 control actions we have studied. The total gain of this controller is the vector
sum of the 3 gains:
where  G_PID = √( K² + (G_D − G_I)² )
And depending on which is larger, G I or G D , this resultant may fall in either the first ( G D > G I ) or fourth
quadrants ( G I > G D ). When the controller is adjusted correctly, G I = G D and the resultant falls on the x axis
with the phase φ PID = 0 o and this will give us the speed of proportional response with a return to set point.

Let's compare the various responses to a load upset:

The addition of the derivative mode has once more given us the response of P-only with the return to set point
provided by integral action.

Adjusting the controller will be covered when we consider controller tuning for optimum response.
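
For completeness, a discrete sketch of the 3-mode algorithm is given below, with the derivative taken on the measurement as discussed in the previous section. The process model and all tuning values are assumptions for illustration; output limits and anti-windup are ignored here and are taken up in Section 9.

```python
class PIDController:
    """Discrete form of m = (100/PB)*(e + (1/I)*integral(e dt) - D*dc/dt), I/D action,
    with the derivative taken on the measurement.  All numbers are illustrative."""

    def __init__(self, PB, I, D, setpoint):
        self.K = 100.0 / PB          # proportional gain
        self.I = I                   # integral time, min/repeat
        self.D = D                   # derivative time, min
        self.r = setpoint
        self.integral = 0.0
        self.last_c = None

    def update(self, c, dt):
        e = self.r - c
        self.integral += e * dt
        dc_dt = 0.0 if self.last_c is None else (c - self.last_c) / dt
        self.last_c = c
        return self.K * (e + self.integral / self.I - self.D * dc_dt)

# Hypothetical self-regulating process (steady state c = m - load + 50) with a load
# step from 50% to 75%; the measurement is pulled back to the set point.
pid = PIDController(PB=150.0, I=0.8, D=0.2, setpoint=50.0)
c, load, dt, tau = 50.0, 75.0, 0.01, 0.5
for _ in range(3000):                      # 30 minutes of simulated time
    m = pid.update(c, dt)
    c += dt * ((m - load + 50.0) - c) / tau
print(f"c = {c:.1f}%   m = {m:.1f}%")      # c returns to 50% while m settles at the load
```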

Choosing the Correct Controller

Now that we have investigated the various control modes, it might be appropriate for us to be able to choose a
particular control mode for our process. Refer to the flow diagram on the following page.

Starting at the top, we come to a decision block which asks the question, "Can offset be tolerated?". If we
answer yes, we can use a P-only controller here. If the answer is no, the next block asks if there is noise
present. If there is noise, we are required to use P + I control; if no noise, proceed to the next block. We come
to a block which asks if dead time is excessive. If the ratio of the dead time to the capacity time constant of our
process is greater than .5 we can assume the process to be dead time dominant and need to use a P + I
controller, since derivative action is intended primarily to cancel out the effect of lags, not the slow response due to dead
time. If our process doesn't have excessive dead time, then the next block asks if the capacity is extremely
small. If it is, then use a P + I controller, for if we have short dead time and small capacity we don't require
derivative action to speed up our response; it is already fast enough. In this instance, for example a flow loop,
we might even consider an I-only controller since the loop is so fast; slowing down the response through use
of integral-only action will still provide fast enough response for the majority of applications.

Finally if our capacity is large, we can put a PID controller to good use.

Recall that earlier it was mentioned that the P + I controller is the most common controller found in the plant.
Looking at this decision diagram we can see why there are three possible ways to get to P +I, while we are
required to proceed through four decision blocks before we get to PID.

We should remember, that while PID action seems to be the most versatile, it is not always required and we
shouldn't try to apply it where it can't effectively be used. For example, some people may say they have
applied PID control successfully to a flow loop. What they have probably done is this:
i.e. They have probably added derivative action, but in order to make the controller stable, they have added
twice as much integral gain to swamp out the derivative gain, and the resultant is the same phase angle and
length as if they had no derivative gain and half the integral gain. Remember, if someone is applying a
controller in an unorthodox manner they are probably trading off somewhere else, to the point where response is
not improved over what it would be if they had applied it according to our decision chart.

BASIC CONTROLLERS

The following list contains some of the features and options that should be considered when selecting a
controller.

1. Do you have the proper control mode selected for your application? (P, P+I, PID)

2. Have the proper input and output ranges been selected? (3-15 psi, 4-20 mA dc, etc.)

3. If you have an electronic controller, are you aware of the output load resistance range?

4. Have you considered emergency service provisions? ex: (manual control units)

5. What provisions are there relating to maintenance accessibility and convenience?

6. What are the specifications for panel readability and accuracy?

7. What are the specifications relating to control repeatability and accuracy?

8. Are you aware of controller tuning ranges and resolution? ex: PB 3-300%, 5-500%, integral adjustment
ranges 0-60 min./repeat.

9. If your application involves operating in a hazardous environment, what is the electrical classification for
your equipment?

10. What are the power requirements and is there any need for regulation?

11. Is mounting flexibility and density for mounting devices a consideration? ex: field mounted local
controllers; panel mounted for centralized control area.

12. Does your controller have switches for local/remote setpoint, direct/reverse action, auto/manual operation
etc.?
13. What type of balancing procedures and accuracies are available on your controller? i.e. bumpless,
balanceless transfer from auto/man.

14. Do you have the capability of incorporating alarm modules or lights with your device?

15. Do you have output limits available on your controller?

16. Are there any anti-wind-up features available in your controller?

17. What options are available for specialized control considerations? ex: external reset, external bias,
ratio-control, etc.

18. How well does the controller adapt to computer operation? (computer compatibility)

Tuning Feedback Controllers

There is no right or wrong way to tune a controller. The settings of the controller depend on the error criterion
which is used as a basis for tuning. A controller would be tuned differently for stability due to setpoint
changes (no derivative action in a PID controller) or for transient upsets. Most important for process control
systems are transient upsets, and many of the methods we will study are optimized for this type of upset.

Depending on the process to be controlled, the first consideration might be to decide what type of response is
optimum. e.g.

Four possible responses are:

1. overdamped - slow response with no oscillation

2. critically damped - fastest response without oscillation

3. underdamped - fast return to setpoint but with considerable oscillation


4. Quarter amplitude damping - lies between critical and underdamping. This is a tradeoff between
minimum deviation from setpoint with an upset, and fastest return to setpoint. The penalty is some
oscillation.

If the need for stability is paramount, use critical damping. This, however, is not minimum integrated error.
The fastest return to setpoint, but with considerable oscillation, is the underdamped case. This method gives
minimum integrated error.

Integrated error

For optimum response, the tuning criterion selected should address itself to minimizing the area
under the above curve. This curve represents the response of a loop due to an upset. It shows how the
measurement responds in returning to the setpoint, r. The integrated error is the area under this curve.
Practically speaking, it represents off-spec. product. It would be in our best interests to attempt to minimize
this area, i.e. the integrated error.

It may not always be possible to minimize this criterion without paying a penalty in some respect. (Re: the
underdamped case produces minimum area under the response curve but with considerable oscillation.)

Consider a process which is controlled by a proportional only controller. We have only one adjustment on this
controller, the Proportional Band setting. With only one adjustment, it may not be able to satisfy all response
criteria, i.e. fast, stable return to setpoint with no oscillation. Indeed, we find that Quarter amplitude damping
(QAD) is the optimum case for a process under P-only control. While QAD doesn't minimize integrated error,
it is a worthwhile compromise as mentioned earlier.

But quarter amplitude damping is only one criterion for tuning which may be chosen; there are other methods
which have been developed to minimize various error criteria.

1. Integrated error - this has already been mentioned, but let's state it in a more analytical form:

IE = ∫ e dt    (integrated from t = 0 to ∞)
This is simply attempting to minimize the error integrated over time. This method may not be 100% reliable if
there is no averaging elsewhere in the process. For if there is a sinusoidal oscillation about the setpoint, the
positive and negative areas tend to cancel each other out and present a misleading conclusion.

i.e.

However, barring this situation it may be a perfectly adequate error criteria.

2. Integrated Absolute error - essentially takes the absolute value of the error and removes the objection to
IE, above.

IAE = ∫ |e| dt

3. Integrated Squared error - larger errors are penalized more heavily than smaller errors and this gives a more
conservative response (i.e. returns to setpoint faster).

ISE = ∫ e² dt

4. Integrated Time Absolute error - errors existing over time are penalized even though they may be small,
which gives a more heavily damped response.

ITAE = ∫ |e|·t dt
Compared below are the various responses if loops are tuned to these various criteria. Note that they are not
greatly different from each other or from quarter amplitude damping.
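
These criteria are easy to evaluate from a recorded response. The sketch below (illustrative error trace and sample interval) approximates each integral by a simple sum; note how IE nearly cancels for an oscillating error while IAE does not, which is the objection raised above.

```python
import math

def error_criteria(errors, dt):
    """Numerical (rectangular) approximations of the four tuning criteria above,
    evaluated from a recorded error trace e(t) sampled every dt minutes."""
    IE   = sum(e * dt for e in errors)
    IAE  = sum(abs(e) * dt for e in errors)
    ISE  = sum(e * e * dt for e in errors)
    ITAE = sum(abs(e) * (i * dt) * dt for i, e in enumerate(errors))
    return IE, IAE, ISE, ITAE

# A decaying, oscillating error trace (illustrative): the positive and negative
# areas nearly cancel in IE, while IAE keeps them all.
dt = 0.1
trace = [10.0 * math.exp(-0.3 * i * dt) * math.cos(2.0 * i * dt) for i in range(400)]
IE, IAE, ISE, ITAE = error_criteria(trace, dt)
print(f"IE = {IE:6.2f}   IAE = {IAE:6.2f}   ISE = {ISE:6.2f}   ITAE = {ITAE:6.2f}")
```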

Before we investigate controller settings to give us these various responses, we first ought to review the effect
Proportional Band, Integral time, and Derivative time have on our loop response.
If our gain vector, G_PID, lies along the positive real axis (case 1), the response period will be identical to τ_n. If,
however, the G_PID vector is either in the 4th or 1st quadrant (cases 2 and 3 respectively), our response will
be > τ_n or < τ_n, respectively.

What is more important, however, is to recognize what settings should be changed, if the response we have is
not what we desire. Recognize that the major effect of changing the proportional gain g P will be to change the
damping of the response. Changing either g d or g I , will change both the damping as well as the period of the
response.

For example, suppose we find our damping (QAD or otherwise) to be nearly what we need, but the period of
response, τ o is too long. What we need to do is to maintain our loop gain constant, but either increase
derivative action or decrease integral action, but as stated earlier, changing either one alone will change τ o ,
but will also change the gain vector which will in turn affect loop gain. The correct procedure in this case
would be to increase the derivative gain, G_D, by increasing the derivative time D while at the same time
decreasing the integral gain, G_I, by increasing the integral time I. This will tend to increase derivative action while
maintaining the length of the G_PID vector constant. As a result, damping will remain unchanged while the response
period τ_o is decreased. A similar process can be followed for any other undesired response as long as we
realize the effects the controller tuning settings have on our loop response.

OPTIMUM SETTINGS FOR AUTOMATIC CONTROLLERS

Process Reaction Method

I. Generation of Reaction Curve:

1. open loop so no control action occurs

2. Introduce a small disturbance

3. Record the reaction of the measurement (c) of the process


L - Lag time in minutes

T - Minutes required for the line drawn tangent to the measurement to change by ΔC_p = ΔC

P - Initial disturbance in %

N = ΔC_p / T ;  Reaction Rate (%/min)

R = N·L / ΔC_p ;  Lag Ratio

II. Optimum Controller Settings for QAD, Ziegler-Nichols:

P only    K_P = P/(N·L)

P&I       K_P = 0.9·P/(N·L)      T_I = 3.33·L

PID       K_P = 1.2·P/(N·L)      T_I = 2·L      T_D = 0.5·L

III. Optimum Controller Settings for QAD, Cohen-Coon:

P only    K_P = (P/(N·L))·[ 1 + (1/3)·(N·L/ΔC_p) ]

P&I       K_P = (P/(N·L))·( 0.9 + R/12 )      T_I = L·(30 + 3R)/(9 + 20R)

PID       K_P = (P/(N·L))·( 1.33 + R/4 )      T_I = L·(32 + 6R)/(13 + 8R)      T_D = L·4/(11 + 2R)
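
The Ziegler-Nichols relations above translate directly into a small helper. The reaction-curve numbers in the example are hypothetical; the Cohen-Coon set could be coded in the same way from the formulas listed.

```python
def ziegler_nichols_open_loop(P, N, L):
    """Quarter-amplitude-damping settings from the process reaction curve, using the
    Ziegler-Nichols relations listed above.  P is the disturbance in %, N the reaction
    rate in %/min, L the lag time in min.  Returns the gain K_P (convert to PB = 100/K_P),
    the integral time T_I in min/repeat and the derivative time T_D in min where applicable."""
    base = P / (N * L)
    return {
        "P only": {"Kp": base},
        "P+I":    {"Kp": 0.9 * base, "TI": 3.33 * L},
        "PID":    {"Kp": 1.2 * base, "TI": 2.0 * L, "TD": 0.5 * L},
    }

# Illustrative reaction-curve results: a 5% output step, reaction rate 2%/min, lag 1.5 min.
for mode, settings in ziegler_nichols_open_loop(P=5.0, N=2.0, L=1.5).items():
    readable = ", ".join(f"{k} = {v:.2f}" for k, v in settings.items())
    print(f"{mode:6s}: {readable}  (PB = {100.0 / settings['Kp']:.0f}%)")
```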

CONSTANT CYCLING METHOD I

P-ONLY CONTROLLER

1. Place controller in manual.

2. Increase proportional band to some high value.

3. Place controller in automatic.

4. Reduce PB until Constant amplitude cycling occurs.

5. Double PB for QAD. Controller is tuned.

P+I CONTROLLER

1. Increase I-time to maximum.

2. Tune as a P-Only Controller.

3. Decrease I-time until constant amplitude cycling occurs.

4. Double I-time for QAD. Controller is tuned.


CONSTANT CYCLING METHOD I - PID CONTROLLER

1. Adjust the integral time and proportional band to high values.

2. Adjust derivative time to a very low value.

3. Reduce PB until constant amplitude cycling just occurs.

4. Double PB for quarter amplitude cycling.

5. Controller is now tuned as P-Only.

6. Increase derivative time until constant amplitude cycling occurs.

7. Cut derivative time by 1/2. (QAD)

8. Set integral time equal to derivative time.

9. Readjust slightly PB, I, and D as required to get QAD.


Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 9

Integral Windup and the Batch Switch


Integral windup and the batch switch

Integral windup may be defined as that portion of a controller output which has no effect on the movement of
the final actuator, i.e. a controller output, m, which is less than 0% or greater than 100%. Suppose we have a
pneumatic control system which operates on a signal range of 3-15 psi or an electronic system which operates
between 4-20 ma. If the output of a controller containing integral action goes below 3 psi or above 15 psi in
the pneumatic system, or below 4 ma or above 20 ma in the electronic system, due to a sustained error, the
controller may be said to have wound up. In this section we want to investigate the causes of integral windup
and what the effects are on the control loop.

Consider the equation for a P-only controller:

 100 
m= e + b
 PB 

Compare this with the equation for a P+I controller:

 100   100  1 
m= e +   ∫ e dt 
 PB   PB  I 

Notice that the integral portion of the P+I controller has replaced the bias term in the proportional controller.
Now we recognize that in a P-only controller the actual output, m, is a function of the error, e, combined
with whatever the value of the bias, b, happens to be. The output of the P+I controller also depends upon the
error and bias; however, in this case the bias term is constantly changing if an error exists. We can see,
therefore, that if we get an error, e, regardless of its magnitude, and that error persists, the output of the controller
may well keep moving the final actuator fully open or fully closed depending on the sign of the error. We
further recognize that this movement of the final actuator under a constant error condition is due to the integral
action in the control algorithm.

As long as the error exists, the controller will be trying to move the measurement back to the set point at a rate
defined by the size of the error, the proportional band and the integral time settings. Further, upon reaching 0% or
100% output, nothing tells the controller to stop integrating. It will therefore continue integrating
beyond 0% or 100% toward the supply pressure or current; i.e. the output of a pneumatic controller may continue
integrating to 0 psi or to the supply pressure, and the electronic controller output may continue to 0 mA or to
approximately 25-30 mA. Any of these values are outside the signal span the final actuator is prepared to see.
Once the signal moves below 3 psi or above 15 psi, the final actuator is either fully open or fully closed, and further
movement of the controller output will have no effect. Consider what happens now if the error begins to
diminish. For the controller to move out of this wound up state it must do so by either proportional or integral
action. If the measurement is within the proportional band of the controller, it will immediately apply
proportional action to move the final actuator; however, the final actuator won't move until the controller
output gets back into its signal span of 3-15 psi. If the measurement is outside the proportional band of the control-
ler, no proportional action will occur until the measurement enters the proportional band. As we will discover
later, this is an all too common condition when a controller winds up. In any event, integral action won't begin
until the measurement crosses the set point to change the sign of the error and begin integrating,
moving the controller output in the opposite direction at a rate defined by the size of the error, the PB and the
integral time. This movement certainly can't be as fast as proportional action because of the nature of
integral action and, remember, it still must move the output of the controller into a range recognized by the
final actuator. The outcome of all this is that there may be an intolerable overshoot or undershoot away from
the set point while the controller is winding down into the control region.
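The arithmetic of winding up is easy to demonstrate with the P+I equation above. In the Python sketch below the 0-100% output span stands in for the 3-15 psi or 4-20 mA range, and the 10% error and tuning values are invented; the point is simply that the internal output integrates far beyond anything the final actuator can follow.

    def pi_output(errors, PB=100.0, I=2.0, dt=0.1, bias=50.0):
        """Positional P+I algorithm: m = (100/PB)*e + (100/PB)*(1/I)*integral(e dt) + bias."""
        e = errors[-1]
        integral = sum(errors) * dt
        return (100.0 / PB) * e + (100.0 / PB) * integral / I + bias

    errors = []
    for _ in range(600):                     # a 10% error that persists for 60 "minutes"
        errors.append(10.0)
    m = pi_output(errors)
    m_at_actuator = min(max(m, 0.0), 100.0)  # the actuator only recognizes 0-100%
    print(f"internal controller output = {m:.0f}%, actuator sees {m_at_actuator:.0f}%")
    # Everything above 100% is windup: the error must change sign and be integrated
    # back out before the actuator will move again.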

It may be instructive to ask at this point, what can be the cause of this integral windup condition? We saw
earlier that windup occurs whenever an error persists and cannot be eliminated by the movement of the final
actuator. This may occur in one of two ways. First, the integral time in the controller may be set too short for
the lag characteristics of the process under control; i.e. the controller is moving the final actuator faster than its
effects can be seen at the measurement. If a load upset occurs to drive the measurement away from the set
point, it causes an error which is perceived by the controller to be constant because, as the controller quickly
moves the final actuator to eliminate the error, there is no immediate change in the error: it takes time, depending on
the dead time and capacity of the process, for the actuator's effects to be seen at the measurement. The controller
quickly moves the final actuator as far as it will go and the controller winds up until there is a change in the
error due to the final actuator position, and then the controller winds down as outlined earlier. This is
simply a case of excessive gain in the controller due to short integral time settings. The controller in this case
first winds up in one direction and then in the other; the effect is oscillation of the entire loop. This problem
is a simple one to solve. All that is required is to tune the controller correctly, i.e. set the controller
adjustments consistent with the characteristics of the process. Remember, the integral action of the controller
can't attempt to move the measurement any faster than it is capable of moving due to process dead time and
capacity. Any faster and integral windup occurs.

The second case where integral windup may occur is in a process which enters some non standard state, as in a
batch process.

Consider the following batch process:


Here we have a batch process which is cooking a product. The temperature sensor reads the internal
temperature and adjusts steam flow to the jacket to maintain the cooking temperature at the set point. This is a
relatively simple process since there aren't any load upsets to speak of, and any supply upsets which might
occur could be handled by a flow loop on the steam line. However consider what happens when a batch is
dumped before a new batch is placed in the kettle with the temperature controller left in automatic.

Note that while the product was cooking, the temperature was holding at the set point with the required
amount of steam shown by the controller output, m. At the end of the batch, the kettle is empty and now due
to the different heat transfer characteristic of the empty kettle, the temperature begins to drop. The controller
begins to increase the steam in an attempt to return the temperature to the set point. The temperature has
dropped to a value consistent with the contents of the kettle. Once this has happened, a constant error exists
and the controller will continue to drive its output to 100% and then go into Integral windup. On another type
of process the controller may wind up in the opposite direction. The effects of windup will most clearly be
seen once a new batch is put into the kettle.

Before we investigate this, let's consider the following. Suppose we consider a P-only controller with a
manually adjustable bias. i.e.

 100 
m= e + b
 PB 

Suppose further we set PB = 40%, b = 50%, and r = 50%. Consider what we have if c = 50%.
Note that the action of the controller may be described by the action of a see-saw with the fulcrum at a point
which will allow the output of the controller, m, to move 2.5 times the movement of the input, c
(since G C = 100/40 = 2.5).

Now, if c = 50% = r, then the output of the controller is the bias, b = 50%. Now, to drive the output of the
controller from 50% to 100% we require (assuming increase/decrease action in the controller) the
measurement to move from 50% to 30% (-2.5x(-20%) = + 50%) and conversely to drive the controller output
from 50% to 0%, we require the measurement to move from 50% to 70% (-2.5x(+20) = -50%). We see that if
the measurement moves from 30% to 70% (which is the 40% PB) the output of the controller will move from
100% to 0%.

Now consider what happens if we manually adjust the controller bias, b = 75%. Going through the same
analysis as above we find the following:

Notice now that, while the width of the PB is still 40%, because the bias = 75% the output can only go
+25% to 100% and -75% to 0%. This means the measurement can only go down to 40% and up to 80% to
drive the controller output over 0% to 100%.

Consider now if we readjust the bias = 100%.


Note now that, since the output = 100% for e = 0, the measurement cannot drop at all, since this would require
the output to increase beyond 100% and it can't do this. However, the measurement can rise by 40% to
drive the output to 0% (-2.5 x (+40%) = -100%). Note also that the width of the PB is still 40%. Consider now what has
happened to the position of the PB with an increase in bias.

Recognize that, as the bias was adjusted upward, the proportional band also shifted upward. This means that if
the measurement drops below the set point to some error, in a controller having integral action, the bias is
automatically adjusted upward to increase the controller output, as we saw previously. With this automatically
increasing bias due to integral action, the PB is shifted in the same direction. This means that if the
measurement had dropped to 30% during the end of the batch, and the controller had been left in auto, then due to the
constant error the PB would have been shifted upwards to be consistent with a bias of 100% (or greater when
the controller winds up). In order to get any proportional action, the measurement would have to rise at least
to 50% to get into the PB. Remember also that integral action wouldn't occur (i.e. the PB wouldn't begin
shifting downwards) until the measurement crosses the set point.
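The see-saw arithmetic above is easy to tabulate. The Python sketch below reproduces the example (reverse-acting P-only controller, PB = 40%, set point 50%; the function name is our own) and prints, for each bias value, the band of measurement values that spans 0-100% output, showing the band sliding upward as the bias rises.

    def pb_window(PB=40.0, r=50.0, bias=50.0):
        """Measurements at which a reverse-acting P-only controller hits 100% and 0% output."""
        gain = 100.0 / PB
        c_at_100 = r - (100.0 - bias) / gain   # measurement low enough to drive m to 100%
        c_at_0 = r + bias / gain               # measurement high enough to drive m to 0%
        return c_at_100, c_at_0

    for b in (50.0, 75.0, 100.0):
        lo, hi = pb_window(bias=b)
        print(f"bias = {b:5.1f}%  ->  proportional band covers measurements {lo:.0f}% to {hi:.0f}%")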
In light of what we've just learned let's investigate what happens in our batch process.

While the action of the controller may not always be identical depending on PB and integral settings, the
results may be more or less the same, that is, there will be a large overshoot due to integral windup upon
initiation of the new batch.

The consideration now may be, what can be done to prevent the windup condition and thus the large
overshoot? Perhaps the simplest solution might be to place the controller in manual at the end of the batch and
manually return the temperature to its set point upon initiation of the new batch before placing the controller
in auto. This is a viable solution and is sometimes done. The disadvantage to this is that it relies on operator
intervention which may not always be faithfully executed.

Let's consider the following functional block diagram of a P+I controller:


We recognize on the front end the summer which generates the error, e; this is then input to the proportional
gain, 100/PB, and is further input to another summer. Now assume we had an output, m = 50%, based on
previous errors. Notice that the output is brought back to the summer through a block which is the integral time
setting. If (100/PB) e = 0 then the output will remain at 50%. Suppose, however, that (100/PB) e = 1; then this will be
continuously added to the output and m will begin to increase at a rate governed by the integral
time setting. Conversely, we can see that if (100/PB) e = -1 then the output would decrease in the same manner.

This is the action of a P+I controller. Suppose we now modify the controller as follows:

This may now be referred to as a P+I controller with external reset, or sometimes called external bias. Note first
of all, that if the connection is not made between the output m and the external bias there will be no integral
action, since integral action depends on this connection. Consider what happens if this connection is broken in
a P+I controller. Although the integral circuit is still present there will be no Integral action. In effect the
controller will be proportional only, and in the case of a constant error, with no integral action, the controller
output will not ramp up or down but will remain at a fixed value based on the PB and size of the error
(recognize that in order to have P+I action this connection must be made externally). Although the controller
output may be high due to a large error and/or small PB, the controller will not be wound up in the traditional
sense.
Recognize also that, by inputting an external signal to the external reset port, we can adjust the output of the
controller to any value we desire, and in turn control the amount of shift of the proportional band and thereby
prevent integral windup. This is the nature of the batch switch. Although we can get a P+I controller with
external reset to use as outlined above, we can also get a controller having a batch switch connected between
the output, m, and what would normally be the external reset input to use in a batch process as outlined
previously.

The batch controller, in addition to the normal P+I or PID adjustments also has two others, the batch trip point
and preload adjustments. Normally when buying a batch controller we would be required to specify a "high”
or “low” batch trip, depending on whether the output of the controller would windup to 100% or wind down
to 0% at batch end (although some electronic controllers may be used as either high or low batch). This trip in
effect is defining to the batch switch, the controller output which would be considered abnormal operation and
end of batch. When the controller reaches the batch trip point the batch switch artificially begins moving the
bias, and possibly the controller output in the opposite direction from the way they were moving prior to batch
trip. The preload adjustment controls the amount the bias is moved after batch trip. Once the preload value is
reached normal Integral action is restored to the controller.
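The trip/preload idea can be caricatured in a few lines of Python. The sketch below is a toy of our own, not any manufacturer's batch algorithm: a "high" batch switch which, whenever the output reaches the trip point, discharges the integral term (the moving bias) to the preload value instead of letting it wind up. All tuning, trip and preload numbers are invented.

    class BatchPI:
        """Toy P+I controller with a 'high' batch switch and preload."""
        def __init__(self, PB=20.0, I=2.0, dt=0.1, trip=100.0, preload=60.0):
            self.gain = 100.0 / PB
            self.I, self.dt = I, dt
            self.trip, self.preload = trip, preload
            self.bias = 50.0                     # the integral term acts as a moving bias

        def update(self, setpoint, measurement):
            e = setpoint - measurement
            m = self.gain * e + self.bias
            if m >= self.trip:
                self.bias = self.preload         # batch trip: hold the bias at the preload
            else:
                self.bias += self.gain * e * self.dt / self.I   # normal integral action
            return max(0.0, min(100.0, m))

    ctrl = BatchPI()
    for c in (50, 48, 40, 30, 30, 30, 45, 50):   # temperature falls at batch end, then recovers
        m = ctrl.update(50.0, c)
        print(f"measurement {c:3d}%   output {m:6.1f}%   bias {ctrl.bias:6.1f}%")

Because the bias is parked at the preload rather than being wound up, proportional and integral action resume the moment the new batch brings the measurement back into the proportional band.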

The following is a block diagram showing the position of the batch switch.

Consider the action of the controller now with the batch switch in our loop:
Notice that with the batch switch we get immediate control action and the large overshoot is avoided. When
the batch trip is reached, although the PB is shifted down (to whatever the preload value is), the controller
output may very well sit at 100%. The output is held at this value due only to proportional action. Considering
the nature of this process, i.e. a single large capacity, the PB will be set relatively narrow. This provides a high
proportional gain and thus even a modest error will drive the output to 100% or greater. The advantage here is
that once the new batch is initiated, there will be immediate proportional and integral action, since the
measurement is in the proportional band and the integral circuit has been discharged and restored.

The setting of the preload will affect the batch action described above. Recognize that the setting of the
preload defines how far the bias, and therefore the proportional band, is shifted. If the preload is adjusted such
that the bottom of the proportional band is coincident with the measurement, the measurement will hold the
output of the controller at 100%. If the preload is set to some other value, the measurement will end up either
outside the proportional band or somewhere in it, depending on the preload setting. If the measurement is
somewhere in the PB, the controller output will most likely change due to this proportional action; in this case,
if the output drops, the return to set point will not be as rapid as previously.
The setting of the preload is best accomplished empirically to arrive at the desired response back to
set point.

CASCADE CONTROL, BOOSTERS AND VALVE-POSITIONERS

Let's consider the following control loop:


This is a heat exchanger where we apply steam, FS, to heat an entering fluid (e.g. water), FW, from a
temperature T1 to an outlet temperature, T2.

If our controller were properly adjusted, and we were to get a change in FW the change in T2 would be sensed
and sent to the controller. The temperature controller would then change its output to reposition the valve to
bring the outlet temperature back to the setpoint. This is the reason we have the control loop applied to our
heat exchanger, i.e. to guard against load upsets.

We should now consider another possible type of upset, that is, a supply upset. If we were getting our
steam, FS , from a supply header which was also servicing other users there is a possibility that as the other
user's needs varied, this would cause pressure upsets and therefore changes in FS in our own supply line.
Suppose that another user demanded more steam. This might cause a pressure drop in our line and an
attendant drop in steam flow to our process. The only way this drop in FS could be measured, would be as a
drop in T2 , this deviation from setpoint would be sensed and compensated for as explained earlier for the load
upset. i.e. now we have two situations:

In both cases the measurement damps out with a period τ n , but for the supply upset situation, we can do better
by considering cascade control. Consider in the block diagram of our loop what the controller output is
defining to the valve. If m = 50%, it means that the valve should open to 50% and its corresponding flow. If
the valve, when open to 50% supplies the needed amount of steam the outlet temperature is at the setpoint.
The flow through the valve however, is a function of the pressure drop across it, therefore if we get a decrease
in inlet pressure, even though the valve is open 50%, the flow will decrease and we'll have to wait through the
dead time and capacitive lag of the heat exchanger before it shows up as an outlet temperature deviation and
necessary control action is taken to bring it back to the setpoint. It will damp at a period τ n back to the setpoint.
If τ n were a minute or two or even longer, the process might be in a constant state of upset and never settle T2 at
the setpoint.

The problem here lies in the fact that the controller output is defining valve opening rather than supply
requirement. Now, with a constant pressure drop across the valve, the relationship between valve position and
steam flow is constant, but if the pressure drop changes this relationship changes and we might be better off to
try to define steam requirement rather than valve position. For, as long as the valve can supply what we want,
we really don't care how far it's open, only in how much steam it's delivering.
Let's consider a steam flow loop:

Now if we have a setpoint, r = 50%, what is this saying to the flow controller? It is telling the flow controller
to make the steam flow measurement 50%. If we change the setpoint to another value we are in fact defining
to our flow control the amount of steam flow we require.

Also, suppose we have a change in supply pressure for a constant setpoint. The change in steam flow will be
sensed and applied to the controller. The controller will in turn reposition the valve to bring the steam flow to
the setpoint. The steam header pressure change is considered to be a load upset to the flow loop. How fast
will all this happen? Well the flow process (which is essentially a piece of pipe) is very fast responding since
it has very little dead time and very small capacity. It will respond essentially instantaneously to either a
steam flow upset or setpoint change.

Now suppose we take and install our steam flow loop along with our temperature loop on the heat exchanger
in a cascade configuration. In this example we are cascading temperature on to flow.

What we are expecting to happen here is that in this cascade configuration, the output of the temperature
controller is now defining to the flow controller the amount of steam required to hold the measurement at the
setpoint, and if a steam supply upset occurs, the flow loop will readjust the valve very quickly to maintain the
supply constant so that the temperature loop will never even see the supply upset has occurred. Hardwarewise
we must have a flow controller that is capable of accepting a remote setpoint from the temperature controller.

If we make a setpoint change in our temperature controller, or if a load upset ( FW ) occurs, the output of the
temperature controller will change the steam flow setpoint. But the flow loop operates so much faster than the
temperature loop that the temperature controller doesn't in fact know whether its output is going directly to a
valve or as a setpoint to another controller.

In general, the control loop closest to the controlled variable (the temperature loop in this case), is called the
primary loop. The control loop closest to the supply to the process (our flow loop.), is called the secondary
loop. Both the primary and secondary loops have their own response period, independent of whether they are
in a cascade configuration or not. We can call the response period of the primary loop τ o1 and that of the
secondary loop τ o 2 . In order for cascade control to work in minimizing supply upsets to our process, we must
be certain that

τ o1 > 4τ o 2 at least

This means that the primary loop should be at least four (4) times slower than the secondary loop. Ideally:

τ o1 ≈ 10 to 20 × τ o 2

What this really means is that the primary loop should never know that there is a secondary loop, since the
secondary loop should be able to respond as quickly as a final actuator itself. If this rule is followed then there
will be no interaction between the two loops and everything will function as intended.
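To make the arrangement concrete, the toy Python simulation below wires a slow primary (temperature) PI controller to a fast secondary (flow) PI controller, with the primary output acting as the secondary's remote set point. The controller class, the one-line first-order process stand-ins and every number are invented purely for illustration; the point is that a 20% supply-pressure upset is corrected by the fast flow loop before the outlet temperature moves appreciably.

    class PI:
        """Bare-bones positional PI controller with a 50% bias, output clamped 0-100%."""
        def __init__(self, Kp, Ti, dt):
            self.Kp, self.Ti, self.dt, self.integral = Kp, Ti, dt, 0.0
        def update(self, sp, pv):
            e = sp - pv
            self.integral += e * self.dt / self.Ti
            return max(0.0, min(100.0, self.Kp * (e + self.integral) + 50.0))

    dt = 0.01                                   # minutes
    primary = PI(Kp=2.0, Ti=5.0, dt=dt)         # slow temperature loop
    secondary = PI(Kp=1.5, Ti=0.2, dt=dt)       # fast flow loop, many times faster

    T2, flow, pressure = 50.0, 50.0, 1.0        # start at steady state (all in %)
    worst = 0.0
    for step in range(4000):                    # 40 minutes
        if step == 500:
            pressure = 0.8                      # supply upset: header pressure drops 20%
        flow_sp = primary.update(50.0, T2)      # primary output is the flow set point
        valve = secondary.update(flow_sp, flow) # secondary repositions the valve
        flow += dt * (pressure * valve - flow) / 0.1       # fast flow "process"
        T2 += dt * (0.2 * flow + 40.0 - T2) / 5.0          # slow temperature "process"
        worst = max(worst, abs(T2 - 50.0))

    print(f"largest temperature deviation seen after the supply upset: {worst:.3f}%")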

A minor change in the cascade configuration will help to prevent integral windup in the primary controller:

If our primary controller has an external reset port we can tie this to the flow measurement. In the event we
lose our steam supply, the flow measurement drops to zero or, in general, is not able to follow the flow
controller setpoint. This in effect breaks the integral circuit in the primary controller and guards against the
primary controller going into integral saturation. When everything is working normally, i.e. the steam flow is
responding to the flow setpoint, normal integral action is present in the primary controller.

Starting up a Cascade System

To put a cascade system into operation:


1. Either place the primary controller in manual, or switch the secondary controller to local set point. This will
break the cascade and allow us to tune the secondary controller.

2. Tune the secondary controller as if it were the only control loop present.

3. Return the secondary controller to remote setpoint and/or place the primary controller in auto.

4. Now tune the primary loop normally. If the system begins to oscillate when the primary controller is
placed in auto, reduce the primary controller gain. Remember, when tuning the primary controller there
should be no interaction between the primary and secondary loops. If there is, it means that the primary
loop is not slow enough in comparison to the secondary. The primary loop shouldn't even know there is a
secondary loop.

One of the most common forms of cascade is the output of a primary controller going as a setpoint to a valve
positioner. Let's investigate this situation:

A valve positioner behaves like a controller. It is primarily used to reduce hysteresis caused by frictional
effects in a valve. Excessive hysteresis in a valve may cause limit cycling when used in a loop which has
integral action in the controller. Limit cycling is characterized by a constant amplitude oscillation of a few
percent in the controller output. It appears that the loop gain is too high, but upon lowering controller gain, we
only succeed in changing the period of the limit cycle. Sometimes, if the process is dominated by a single
large capacity, the limit cycle won't even be seen on the measurement due to the low gain of the process. The
only way to eliminate a limit cycle is to reduce hysteresis in the valve with a valve positioner.

If we send the output of a controller to a valve positioner, this signal is the desired value of the valve position.
It is like the set point to a controller. The valve positioner output goes to the valve actuator in response to the
input from the controller. If the valve doesn't move to the position specified by the controller, a mechanical
linkage connected to the valve stem, in effect provides a measurement of the stem position to the valve
positioner. The positioner will then draw on its own air supply (assuming a pneumatic actuator) to its output to
move the valve until the stem position is the same as the desired valve position signal from the controller.
Realize now that the V/P has a setpoint, manipulated variable, and measurement just like a controller; it is in fact
a valve position controller.
Valve positioners may sometimes serve to fulfill several functions. They may in fact, also be a current to
pneumatic converter as well as a positioner. i.e. they may receive a 4-20 mA signal and output a 3-15 psi
signal to the valve actuator.

Sometimes they also act as boosters or 1:1 repeaters. i.e. they provide additional air volume to improve
response of valves with large actuators.

Consider the following system:

Suppose we have a long pneumatic line driving a large valve actuator. Remember that the actuator is a large
capacity and that with each capacity is associated a time constant:

τ = RC

Where R is the resistance of the pneumatic line and is a function of the line length (l) and the 4th power of its
diameter (d^4), and C is the capacity of the valve actuator. With a long line and a large capacity we have a long
τ . To reduce this response time, one thing we might do is increase the line diameter which will reduce R and
decrease τ assuming the length and actuator size are fixed. What in fact really has to be done is to pump more
air through the long line. If we can't do this we might consider using one of two devices, either a valve
positioner or a booster.
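A rough feel for the trade-off can be had from τ = RC itself, taking R, as stated above, to vary as line length divided by the 4th power of diameter. The Python sketch below uses arbitrary relative units and invented dimensions; only the ratios are meaningful.

    def relative_tau(length_m, diameter_mm, actuator_capacity):
        """Relative tau = R * C, with R taken proportional to length / diameter**4."""
        R = length_m / diameter_mm ** 4
        return R * actuator_capacity

    base = relative_tau(100.0, 6.0, 10.0)               # long 6 mm line, large actuator
    fatter_line = relative_tau(100.0, 10.0, 10.0)       # same line length, bigger bore
    with_positioner = relative_tau(100.0, 6.0, 0.2)     # line terminates in a small bellows

    print(f"fatter line:      tau reduced to {fatter_line / base:.1%} of the original")
    print(f"valve positioner: tau reduced to {with_positioner / base:.1%} of the original")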

Using a valve-positioner, we would have:

Now we still have the same length line and same size actuator, but now the termination of the line is a small
bellows with very small capacity inside the positioner. The time constant τ = RC is now considerably smaller
since C is much reduced. The valve positioner then uses its own air supply to drive the valve actuator. The
positioner output is driving the actuator but following the input signal. The valve positioner in this case is
serving a two-fold purpose, acting as a booster while minimizing hysteresis. There are some applications
however where a valve positioner cannot be applied (e.g. a flow or liquid pressure control loop) but we still
require better dynamic response due to long lines and large actuators. In this case we can apply a booster
successfully:

A booster is nothing but an amplifier with a gain of 1. It works to improve dynamic response in the way the
V/P does, but it doesn't sense valve position and therefore does nothing to minimize any hysteresis which
might be present. A booster should be considered when desiring to improve dynamic response of a valve with
a large actuator where a V/P cannot be applied. The booster, as well as the V/P, is usually mounted very close to
or on the valve itself.

The valve positioner may sometimes provide a secondary benefit in a control loop: to offset non-linearities
introduced into the loop due to an equal percentage valve being used under constant pressure drop. Under
these conditions, the gain of the valve is a function of its stem position:

The equal percentage valve has low gain at low flow, with progressively higher gain at higher flows. While
many valve positioners have constant gain over their input range, there are some whose gain can be
characterized to offset the non-linear response of the equal percentage valve and give a response which is
essentially linear.
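The rising gain of the equal percentage valve can be seen directly from its usual characteristic f(x) = R^(x-1), whose slope is ln(R)·f(x), i.e. proportional to the flow itself. The Python sketch below assumes a rangeability R of 50, an illustrative figure not taken from the text, and a constant pressure drop.

    import math

    R = 50.0                                      # illustrative rangeability

    def fraction_of_max_flow(stem_position):      # stem_position in 0..1, constant delta-P
        return R ** (stem_position - 1.0)

    def valve_gain(stem_position):                # d(flow)/d(stem position), relative units
        return math.log(R) * fraction_of_max_flow(stem_position)

    for x in (0.25, 0.50, 0.75, 1.00):
        print(f"stem {x:.2f}: flow {fraction_of_max_flow(x):.2f} of max, gain {valve_gain(x):.2f}")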
We will investigate later in greater depth what types of loop a V/P may be effectively used in and how to get
around any possible restrictions which dictate not to use it.
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 10

LINKman Computer Based Kiln


Control
COMPUTER BASED KILN CONTROL
LINKman

CONTENTS

1. INTRODUCTION

2. CONVENTIONAL KILN CONTROL

3. COMPUTER BASED CONTROL SYSTEMS

4. DEVELOPMENT OF LINKMAN

5. SYSTEM COMPONENTS

6. LESSONS LEARNT DURING EXPERT SYSTEM DEVELOPMENT

6.1 Correct Motivation of Workforce


6.2 Drawing Out Local ‘Secrets’
6.3 Initial Installation

7. POTENTIAL BENEFITS

8. KILN CONTROL

8.1 Overview
8.2 Requirements of a Control Strategy

8.2.1 Plant Signals


8.2.2 Signal Processing
8.2.3 Assessment of Kiln Conditions
8.2.4 Use of Rule Blocks

8.3 Control Strategy Development


1. INTRODUCTION

From the original development of the cement making process until relatively recent
times, the control of the various unit operations involved in the manufacture of this
product has been considered to be an art rather than a science. This has been because
the condition of the process has been assessed by the eye and experience of the
individual operator. Only since roughly the late 1960s has the availability of improved
levels of instrumentation increased the proportion of science rather than art that
is applied on a continuous basis.

There is constant emphasis within the cement manufacturing processes for the final
product to be of improved and more consistent quality at a lower overall cost. One of
the tools by which this could potentially be achieved is increased levels of computer
based control.

Blue Circle were one of the leaders in the initial application of intelligent computer
based control systems and, in co-operation with SIRA (Scientific and Instrument
Research Association), were responsible for the development of one of the major
systems currently available to the cement industry. This system is now marketed under
the trade name of LINKman by the ABB Group.

The system has been successfully applied to all the major cement clinker making
processes:-

Slurry feed (wet)


Filter cake feed (semi-wet) to long chained kiln
LEPOL process
Dry powder feed to long chained kiln
Suspension preheater
Precalciner plus preheater (tertiary air duct)

Outside of the cement industry the principles have been applied to lubricating oil and
glass manufacturing plants and to rotary kilns used for tioxide production.

Within the cement industry the majority of the intelligent systems currently installed
have been applied to kiln operation and benefits have been identified as primarily
stemming from a more stable kiln operation produced by the constant monitoring and
consequently earlier and smaller adjustments to the kiln control parameters. More
recently, systems have been extended to cement mill and clinker cooler operation.

This paper discusses the need for computer based control systems within the cement
industry, reviews the available systems and describes the historical development of the
LINKman system. The minimum hardware components of any system are detailed and
then the basic kiln control philosophy of the LINKman system is discussed as an example
of what is required within an expert system. Finally, the practical benefits are
quantified and examples are given from Blue Circle operations.
2. CONVENTIONAL KILN CONTROL

Conventional cement kiln control has required an operator to assess the internal
condition of the kiln and adjust the process inputs of raw meal (or slurry), fuel and air
in order to maintain the overall conditions inside the kiln within relatively narrow bands.
Development of instrumentation technology has allowed the improved control of these
inputs around selected set points and provided more information on what is happening
within the process, but the operator has still been required to use his experience to
change these primary set points whenever he considers such action necessary, in order
to control kiln conditions. Even where these loops are fully applied, it is frequently
apparent that almost every operator has individual ideas concerning the most suitable
targets for operational conditions and so a kiln would frequently be controlled
differently from one shift to the next. Similar comments are valid - possibly to a lesser
degree - for the other major unit operations required for the manufacture of cement.

Optimal control of the kiln requires that the system be operated at the minimum
temperature that is consistent with the production of good quality clinker (Figure 1).
Such an operation will result in minimum fuel requirements (Figure 2) and later yield
power savings in the cement milling operations (Figure 3). However, when the kiln is
operated at this point, the system will be less stable and so will require much closer
monitoring and more frequent adjustment to the operating conditions. Due to the use
of naturally occurring raw materials and limitations imposed by the large scale of
operations and changing physical properties of the materials across the temperature
range that is experienced by the kiln and also caused by internal dust and chemical
cycles, the optimum heat requirement will be changing continuously. The kiln is also
a form of heat sink and when there is too little heat input to the kiln this heat sink
becomes depleted and then the kiln will cool significantly. It then becomes necessary
to operate at a reduced output for a period of time whilst the system is heated up
again. All of these factors combine to create a situation where the operator has to
react frequently to the changing conditions within the kiln.

Conversely, if the kiln is operated with a higher than optimum heat input, the operator
will have a safety margin for operation before the process becomes unstable. In
consequence, less attention has to be paid to the kiln, which makes the operator's task
much easier. The penalty is, of course, increased production costs which, although
important to the business, are of minimal concern to the operator in the short term.
Also an operator is only human and so cannot give each unit operation 100% attention,
even if only supervising one or two systems. Consequently, the more systems he is
responsible for, the more significant will be the safety margin that he targets to develop
in order to minimise the likelihood of significant operating problems developing. Hence,
there was a need to design a computer based system to relieve the operator of the short
term supervisory and reactive tasks and let the plant be operated as close to optimum
conditions as possible in terms of production costs and consistent product quality
(Figure 4).

Before 1982 Blue Circle, in common with many other producers and equipment suppliers,
had spent much effort on trying to produce a mathematical model of a kiln in order to
bring the clinkering process under computer control. However, the nature of the kiln
process is such that it proved impossible to develop a sufficiently accurate model for this purpose.
[Figure 1: Clinker quality (% free lime) as a function of kiln burning zone temperature for raw meal 'A' (difficult burning) and raw meal 'B' (easy burning), indicating the regions of stable and increasingly unstable kiln operation.]
[Figure: Blue Circle Experimental Works kiln trial, May 1981 - typical NOx / free lime relationship plotted against time.]
[Figure 2: Kiln fuel consumption versus kiln daily average back end NOx level (ppm), Hope 1986, showing the normal manual operating range, the computer target range and a fuel difference of about 7.5%.]
[Figure 3: Effect of softer burnt clinker on cement quality and grinding energy requirements, Hope Works, November 1986. Soft burnt clinker (NOx 1350 ppm, free lime 1.7%) compared with hard burnt clinker (NOx 1800 ppm, free lime 0.6%): A - at constant milling energy, soft burnt gives +10% surface area, a theoretical +3.5% and actual +7% increase in 28 day strengths; B - at constant surface area, soft burnt requires 15% less grinding energy; C - at constant 28 day strength, soft burnt requires less than 70% of the grinding energy of the normal clinker (x-axis: kWh/tonne).]
[Figure 4: The aim of high level control - the manual cycle of burning zone temperature (BZT) is first brought under better control (1) and then controlled down to a lower, final set point (2), the process being unstable below this level.]
In the mid 1970’s Professor L A Zadeh suggested that where it is impossible to study a process,
an alternative approach would be to observe the actions of the experienced specialist - the kiln
operator in our case - and attempt to mimic his responses using a less precise or “Fuzzy” logic.
This is the principle on which expert systems operate.

3. COMPUTER BASED CONTROL SYSTEMS

The computer based control systems that are currently on the market or have been marketed in
recent years are summarised in Table 1 together with an indication of the depth of known
experience, based on total system sales up to 1993 and the number of installations that the
supplier anticipates having commissioned by the end of 1995. This is based on data supplied to
the CETIC organisation in late 1993. This data clearly suggests that the two major suppliers of
Expert systems are currently FLS and ABB LINKman. By 1995, at least three major cement
producers, BCI, Holderbank and Votarantin (of Brazil) had adopted LINKman as the standard
computer control package for their plants. ABB LINKman are gaining the majority of the retrofit
market, whilst the main suppliers (FLS, Polysius, KHD) install their proprietary systems as part
of an overall plant package. The following should also be noted:-

a) In 1994, FLS bought up Toptools and it is possible that this product will be
phased out from the market place.

b) The initial systems offered by both Polysius and KHD were simple improved
versions of extended loop control systems rather than true expert systems. Both
companies brought real expert systems onto the market during 1993 and true
expert system installations up to the end of 1993 for these companies are very
limited and are indicated in brackets in Table 1.

FLS and ABB LINKman now both have second generation control systems on the market. These
primarily use larger computer systems and utilise the extra capacity to make the system more user
friendly. FLS refer to the second generation as FUZZY TWO and this has replaced FUZZY in
the market place. ABB LINKman continue to market both versions of their system at present;
the original version is called CLASSIC and the new version is called GRAPHIC. Both LINKman
systems control the plant using identical strategies, but are programmed in different ways:
CLASSIC uses a standard programming language whilst GRAPHIC has a flow chart/pictorial
approach.
TABLE 1

AVAILABLE COMPUTER CONTROL SYSTEMS

SYSTEM                      SUPPLIER                UNITS INSTALLED / ORDERED BY
                                                    1990        1993        1995
Fuzzy / Fuzzy 2             FLS                       86         118         138
Toptools                    Technodes /
                            Ciment Francais            6          37          40
KCS                         Polysius                  21          37          40
KCES                        Polysius                             (0)         (6)
Pyroexpert                  KHD                        7          10          17
                                                                 (0)         (2)
Comdale                     Comdale                    0           1
Scap                        SCAP SA                    2          13          13
Lisa/Lucie                  LaFarge                    0          10
LINKman Classic & Graphic   ABB LINKman               24          59         110
                            Nihon Cement            No Data
                            FCB                     No Data

(Bracketed figures are true expert system installations - see note b) in the text.)

Data supplied in late 1993.


1995 data refers to anticipated installations at end 1995 based on orders to end 1993.

The Lafarge LISA system is an internally developed expert system which is
currently being applied within Lafarge sites and is not available outside the
company at present.

Few details are available concerning the Comdale or Nihon systems, although the
Comdale system is known to be operating on an 800 TPD long dry process kiln in
North America.

Scap, like the original Polysius and Humboldt systems, is not a true expert system,
but rather uses the maximum power of modern process controllers in association
with extra computing power to give improved process control with a significant
degree of automatic optimisation/self learning.
In addition to LINKman, BCC have also developed a further ‘homemade’ control system
at Cauldon - precalciner kiln - where the spare computing power available within the
DCS (distributed control system) supplied by H&B is used to give a control system.
Many of the basic strategies developed for LINKman are also applied in this system.
At Cauldon, the system works well, but attempts to replicate it on the H&B system at
Dunbar have met with only limited success.

4. DEVELOPMENT OF LINKMAN

In 1981/2 Blue Circle carried out a review of the clinker making process in order to
establish the potential benefits of achieving effective automatic kiln control and to
identify the best method of pursuing these potential benefits, if a ‘best method’ did
indeed exist. By comparing ‘the best achieved performance’ of its kilns with the
‘normal’ performance and by assessing the alternative methods of control available to
diminish the difference between these two levels of operation, Blue Circle identified
the following relevant factors:-

a) The potential savings were sufficiently large to justify a substantially


increased resource allocation to the purpose.

b) The system most likely to improve kiln control to the desired level would
be a real time expert system using a rule based control strategy.

c) Since a fully suitable system did not exist at that time, Blue Circle would
have to develop its own.

d) Because of the energy saving potential, financial support might be
available - and subsequently was obtained - from the UK Department of
Energy.

Based on the above factors, Blue Circle divided the development programme into three
areas:-

i) The use of novel instrumentation.

ii) The production of a kiln control strategy.

iii) The selection of a user friendly computer system.

In the instrumentation field three items were selected as the signals most likely to give
significant benefits in terms of process control. These were:-

- NOx analysis of kiln gases to indicate the conditions within the kiln.

- SO2 analysis of kiln gases to indicate combustion conditions.

- On line free lime analysis to indicate product quality.

- On line particle size analysis for use in cement mill optimisation.

The first of these was quickly shown to be a valuable indication of the internal condition
within the kiln and is now accepted as a major control signal within the cement industry
as a whole. Whilst some effort was also put into effective analysis of SO2 at the kiln
back-end and use of this signal as a kiln control parameter, at that time it was
considered that the available gas sampling systems were not suitable for reliable
monitoring of this highly reactive gas component. Whilst various possibilities were
investigated in a bid to realise successful application of the other two items, no reliable
results were obtained at that time. A number of equipment suppliers have continued
with development since that time and it is hoped that suitable units with long term
reliability will become available in the near future.

The production of the kiln control strategy took place internally within Blue Circle
Research Division, initially with the co-operation of Hope Works. The basic strategy
will be discussed more fully in Section 8, but the initial decision was that it should be
based on a rule based expert system structure. The best publication available
concerning kiln burning is ‘The Rotary Cement Kiln’ by Peray. This includes, as an
Appendix, a listing of what he considers to be the 27 major rules of kiln burning. As
will be realised later by those who have seen this book, these rules bear a significant
resemblance to the rules which apply within any kiln burning expert system.

The selection of a suitable and user friendly computer system was probably the most
difficult part of the development programme, as this was outside the area of expertise
of the BCI support staff. Therefore, this part of the project was contracted out to an
external group: The Scientific Instrument Research Association (SIRA). This group was
given the brief to develop a computer system which:-

a) Would stand up to the environmental conditions within a typical cement


industry control room.

b) At the surface level, would be user friendly.

c) Would have sufficient computing power to perform the tasks deemed


necessary to control the cement kiln in real time.

d) Could also perform a number of data processing and/or logging tasks.

e) Would allow control strategy changes with minimum, and preferably zero,
interference with its operation in controlling a kiln.
It was accepted that this would require SIRA to prepare the system programmes in an
accepted programming language (such as C). Also a surface level programming language
would have to be developed as a user friendly interface for the process engineer to
prepare the actual kiln control programmes, with a further simple interface for the
operator. The most suitable hardware could not be selected until these systems had
been fully developed and so a development programme of two to three years was
anticipated for the full, user friendly system. Within this timescale, it was anticipated
that the control strategies would have been developed and become available for testing
and so SIRA were also commissioned to prepare development programmes from proposed
strategies for testing on less user friendly, but available computer systems within
proprietary DCS units.

Once this work programme had been defined, BCI applied for, and received, grant
assistance from the UK Department of Energy for two projects based on the energy
saving potential of each. The first project involved the testing and development of
novel instrumentation for the cement industry (NOx, free lime and psd monitors) whilst
the second project covered the development of the computer control system. Both
grants were awarded because of the potential for energy saving, together with the novel nature and
hence risk element of the projects.

The initial work was the development of an understanding of the potential use of the
NOx signal in kiln control and the production of control strategies. This took place
between 1982 and 1985, the practical studies mainly taking place at Hope, but with
some work involving the kilns at Barnstone and Cauldon. In early 1985, the initial
installation of a kiln control strategy took place on No. 2 kiln at Hope using the
computing power available within a Kent Systems P4000 distributed control system.
This exercise confirmed that the BCI approach had major potential, but emphasised the
lack of user friendliness of the system, especially for use in a development project.
Nevertheless, this system remained on line for over a year, achieving approximately
80% runtime overall. The operating results of this system were assessed by an
independent organisation appointed by the DOE (W S Atkins) and the conclusions of this
audit are presented in Table 2.
TABLE 2

RESULTS OF WS ATKINS AUDIT OF HLC AT HOPE

A) Long Term Data (+ Six Months)


B) Medium Term Data (+ Two Weeks)


                               On Control            Off Control
                               mv        sd           mv        sd
Feed Rate                     139.1      8.3         136.6     12.7
Coal Feed Rate                 46.35     2.4          48.12     3.07
NOx Level                    1391      354          1529      467
BE O2                           2.35     0.39          2.23     0.51
Drive Amps                     48.9      6.5          57.4      7.8
Cooler Exhaust Temperature    244.6     42.4         249.2     56.4

(mv = mean value, sd = standard deviation)

At the end of 1985, the first real LINKman system, although this name was not to be
established for a further two years, was installed on No. 6 kiln at Aberthaw; again a
suspension preheater kiln. In early 1986, the first non-dry process application was
installed at Northfleet on No. 2 kiln initially using the Kent P4000 DCS, similar to that
at Hope which was available in the control room. However, the longer time delays
inherent in the long chained kiln served to emphasise how difficult the Kent based
system was to use. Consequently, in the second half of the year both Hope and
Northfleet were converted to full LINKman type systems. Between 1986 and 1991
LINKman was installed on the majority of the BCC plants. In 1990 the first BCI
overseas installation took place at Ravena in the USA, with Atlanta following almost
immediately. In the following years, BCI installations took place at Lichtenburg (South
Africa) and at all the other Blue Circle Inc plants in the USA. The BCI installation
dates are presented in tabular form in Table 3.

Although the early development work progressed within Blue Circle, simply
concentrating on this does not present the full picture. Much interest was shown in the
BCI work by CETIC, the French based European cement manufacturers technical forum.
As a consequence of this, a system was installed at Obourg, in Belgium, in 1987 and
Ciment Francais and Ciment Lafarge - the latter following a LINKman installation at
their Le Havre Works - were encouraged to develop their own systems (later to become
TABLE 3

DEVELOPMENT OF BCI LINKMAN SYSTEMS

1982 Research Project


1985 Hope (Kent Based)
Aberthaw
1986 Northfleet (Kent Based)
Hope
Northfleet
Westbury
1987 Cookstown
1988 Masons

1989 Westbury
Aberthaw
1990 Swanscombe
Ravena
Atlanta
1991 Plymstock
Lichtenburg (SP)
1992 Lichtenburg (Precal)
Harleyville
1993 Roberta
1994 Tulsa

Toptools and LISA respectively). Although BCI were considering marketing the system
it became apparent that there would be significant resistance to a system marketed by
another producer. As a consequence of this and the UK recession of the late 1980’s, it
was decided that LINKman had gone as far as it was likely to go within the Blue Circle
group. Consequently, LINKman was moved outside the Blue Circle organisation with
marketing rights passing to SIRA. From 1989 to 1991, the BCI based installations
continued to expand as previously indicated, whilst a limited number of other systems
were completed as a result of a licensing agreement with the Fuller Company. This
agreement was terminated in mid 1991 when Fuller were taken over by FLS.

At this point there were two established systems on the market (Fuzzy logic by FLS and
LINKman, with Toptools just appearing). The Holderbank group were reviewing these
and other experimental systems with the intention of recommending one system to their associated
companies world wide. Soon after the termination of the licensing agreement between LINKman
and Fullers, ABB took out a licensing agreement with LINKman and LINKman Systems Ltd
began to formulate plans for the development of the second generation of LINKman control
systems. The intention behind the proposed second generation system was to maintain the
existing control philosophy, but to improve further the degree of user-friendliness of the system.
As this system developed it was decided to market both systems; the original system as LINKman
CLASSIC with the new system as LINKman GRAPHIC. The new system would use a suite of
programmes known as “G2” as its base platform which would require a considerable increase in
computing power and as such would be more expensive.

The initial experimental GRAPHIC systems were installed at two Holderbank sites in Europe in
1992, with the third system at the BCI site of Harleyville in the USA. BCI now has graphic
systems installed at Harleyville, Roberta and Tulsa - all in the USA.

In late 1991, Holderbank decided in favour of LINKman as their preferred computer based
control system. In mid-1992 the LINKman organisation was taken over from SIRA by ABB and
since that time the LINKman organisation has expanded significantly; up to the end of 1990 a
total of 24 systems had been installed (three of these being non-cement industry) whilst in 1993
alone over 20 systems were installed. Table 4 summarises the known installations up to the end
of 1993.

TABLE 4

LINKMAN INSTALLATIONS UP TO END 1993

48 SYSTEMS OPERATING 59 KILNS


                              Kilns
Process                  Total        BCI

Wet                         12          7
Filter Cake                  2          2
Lepol                        5          1
Long Dry                     7          4
Suspension Preheater        21          5
Precalciner                 12          1
Total:                      59         20
5. SYSTEM COMPONENTS

Any computer control system needs to be able to read process signals, process these
mathematically, decide what action to take, inform the plant supervisors what it is
doing, allow the supervisor to make an input when necessary and, preferably, but not
essentially, supply relevant hard copy reports. The LINKman system is made up of the
following five different hardware items as shown in Figure 5:-

a) A plant interface.
b) A computer.
c) Keyboards.
d) Monitor screens.
e) Printers.

The plant interface receives signals from the plant process control equipment and
converts them into a form that the computer can receive and identify. Depending on
the type of equipment available, the interface can be either a protocol to talk directly
to the controllers and the signal monitors/network, or an interface board with its own
protocol. The former is supplier and often type specific, whilst the latter allows a
wider range of equipment to be processed, but is likely to have access to a smaller
number of signals.

The computer receives selected data, processes it to assess the internal condition of the
kiln (or any other item of process equipment), uses the software to decide on the set
point changes that are necessary to improve kiln conditions and, as required, sends these
changes back to the process controllers. This will be discussed in greater depth in
Section 8.

The monitor screens are used by the system to communicate with the operators. A
number of displays will be prepared during the commissioning of the system to provide
all the necessary information to allow the operator to see what the computer considers
to be the internal condition of the kiln and what it intends to do to improve the kiln or
to optimise production and why it has made this decision. These displays will normally
be updated every sixty seconds, but other time scales can be selected if necessary. As
a number of displays exist, the operator has to decide which he wishes to see and this
is done from a series of menus. These displays are available as data tabulations or as
graphs, with typical examples being shown in Figures 6,7 and 8. When LINKman is not
controlling the kiln, the programmes will still operate and the displays will indicate
what the control action would do, so allowing initial review before coming on-line. A
standard system would normally consist of two monitors for a one kiln unit, or three
monitors for a two kiln unit although some consideration needs to be given to what
other unit operations are to be controlled.

The keyboards are used by the operator to speak to the computer. In this way he can
select which displays he wishes to see or adjust set points or feed rates. CLASSIC uses
standard alpha/numeric keyboards, whilst GRAPHIC utilises a mouse and numeric pad
h
tt
0 a
1
1
0
m
l.=.-.--..-...:
a)
.-c
u)
1 <1
$
z
i
16
Fiqure 6 SIMPLE OPERATORS DISPLAY

K2(ON) KILN2 1 SSEP-1990 08-35-31

TOPFEED m a x nox brkset

. . . . . . . . . . . . . . . . ..-...............-....-......-....-.........-..........--............-.-........-..-....-.............-...............-..........
SUMMARY PAGE LINKMAN CHANGE
. . . . . . ..~......-.~....~............................~......~.~~...........~.~....~.~~.~. Pv grad
FRESHFEEDSP = 90.0 tpll 0.00 nox = 1941 ppm -0.34
COAL SP = 6.02 tph -0.0 OXY = 2.01 % 0.13
DAMPER SP = 34.1 % -9.2 wt = 1555degf 0.19
KILN SPEED SP = 90.00 rph 0.00 a m p s = 25 amps 0.25

nox sp = 1200 ppm


CLINKER = 41.40 tph 0.00 oxy sp= 1.5 %
cgt sp= 1551 degf

. .._........-..-.---....-...----......-............-...-...-..-...-.-...-.-..-...-.-..-

estimtd clink = 992 tpd


average feed = 89.1 tph
control on 26.0 hrs
.
TOTAL FUEL USED = 3.454 mmBTU/ton 3.607 ACTION IN 2 MIN

Last update: 19SEP-1990 08:35:25


Figure 7 DETAILED ENGINEER DISPLAY

K2 (ON) DETAIL 26SEP-1990 15:50:59

EXTREME nox soak- brkset


adapt on
btuttph = 3.99 flimesp = 0.400 topspeed = 92.00 topfeed = 90.0 topfan = 42
tempfdtp = 90.0 flime = 0.7 spdtarg = 88.18 cgtav = 1572 noxstart = 1320
cntco = 1399 noxav = 1327 speed = 86.30 feedav = 90.0 noxtarg = 1250
cntbrk = 69 Isf = 0.0 hotnum = 0 fanav = 37.7 nxminmax = - 1.50
refcgt =: 1562 waterav = 16.67 dspeed = 0.00 coalav = 6.52 cltrunc = 2.22
feedlrph = 1.000 fanpast = -1.8 pastactn = -4.43 feedpst = - 2.21 coalpst = 2.22

VAR: VAL NORM GRAD VLD LO SP HI VHI WHI


NOX: 968 -1.50 +0.05 ; / 875 1062 1250 1625 1875 2312
02: 1.21 -0.58 ~-0.20 0.50 1.00 1.50 2.00 2.50 3.00
co: 103.3 0.00 0.00 0.0 0.0 102.9 500.0 1000 1500
CGT 1572 0.78 0.21 1537 1550 1562 1575 1587 1600
AMPS: 23.7 -0.33 0.18 19.6 22.1 24.5 27.0 29.4 31.9
WTR: 14.8 -0.37 -0.43 6.7 11.7 16.7 21.7 26.7 31.7
ACTION IN 5 MINUTES prog 18.4
CONTROL BZT = - 0.67 CONTROLOX = -0.50 CONTROL CGT = 0.13 (norm) hron 23.5
feedsp (tph) = 86.3 coalsp (tph) = 6.68 dampsp (%) = 36.4 (ens) avprg 18.2
FEED CHANGE = 0.00 COALCHANGE = - 0.00 DAMP CHANGE = 0.17 (ens) 15:50:18

Last update: 26SEP-1990 15:50:58


FIGURE 8 EXAMPLE OF GRAPHICAL DISPLAY FROM LINKMAN CLASSIC

[Multi-trace trend display for Kiln 2 showing, against time, traces including raw NOx,
kiln amps, raw back-end temperature, raw O2, coal set point and damper set point.]

The printers are used to produce hard copy versions of various process logs and graphs.
Normally, these can either be produced on demand or produced automatically at a
selected time on a daily basis.

6. LESSONS LEARNT DURING THE EXPERT SYSTEM DEVELOPMENT

6.1 Correct Motivation of the Workforce

The attitudes of the workforce at any given site, from site manager down to kiln
operators, will ‘make or break’ any control system. Initially, reactions to the proposal
to install computer control tend to vary, but if not handled correctly managers will use
it as a scapegoat for any problem on the site, operators will consider it to be a threat
to job security and maintenance staff can feel that an unfair burden is being placed on
them to ensure improved signal reliability.

The solution to this within Blue Circle has been to hold full presentations and open
discussions with Works staff in advance of a proposed installation, to ensure that each
site has a local champion for the project and to maintain technical support from the
Corporate technical centre for a significant period after installation. The abandonment
of any one of these three has a significant deleterious effect on overall system
performance, owing to a gradual loss of confidence by the operators. On-going
development of the operator's understanding of the control principles is essential, as is
a response to any concerns that develop. Blue Circle has always stressed the autopilot
nature of the expert system and never let it be forgotten that the human operator must
be prepared to judge the unit's performance and overrule it in extreme circumstances.
The best results are obtained from man and machine acting in harmony, although manual
intervention should be occasional rather than regular. Where manual intervention is
occurring frequently, either the control programme needs to be improved or the operator
does not understand how the programme operates and so makes actions needlessly. The
latter situation is likely to end up 'confusing' the computer and may cause the kiln to
cycle.

The underlying justification must always be that the Works on which the unit is being
installed are convinced that it can help and so are fully committed to ensuring that it
does help to improve output and quality.

6.2 Drawing Out Local ‘Secrets’

It is commonly accepted within the cement industry that no two kilns behave in an
identical way. Blue Circle have found it vital to involve the operators at an early stage
to draw out from them the particular variances in the behaviour of their kilns. This has
been done using pre-prepared forms, but is best completed in informal discussions prior
to and during the commissioning phases, allowing the project engineer to include
relevant strategies in the control programmes at an early stage.

It is worth noting that in the early development stages of LINKman, the Blue Circle
engineers prepared different control programmes for each type of process and on
occasions significantly tailored these programmes individually for similar kiln systems.
As more experience has been gained on all types of process, the control programmes
have become more and more similar. Today, on most sites, almost 90% of the control
programme is common to any process, with many of the differences arising from the
observed performance of the available instrumentation. This means that it is now easy
to turn on the computer control strategies confidently for the first time and achieve an
acceptable degree of control, although a significant degree of tuning may still be
required at some sites. This helps to develop quickly a reasonable level of operator
confidence in the new system.

It is normally found that prior to installation, all operators have major doubts about the
system. The 'best' and 'worst' operators quickly become very supportive, the former
because they quickly see the strengths of the system, the latter because they soon
realise that it reacts better than they do. The major problem is often found with the
'average' operator, who has difficulty in developing the necessary degree of confidence
in the system and so often 'helps out'. It is normally these operators who require the
greatest degree of training in order to get the most out of the system. Where possible
the site's most experienced operator should become a part of the optimisation team.

6.3 Initial Installation

The immediate response of the control system when it is first installed normally appears
promising, but commonly after a period of operation (anywhere from two to ten days)
some problems in control appear to arise. These are normally site specific and relate
to the performance of the existing process equipment. PID control loops associated
with ancillary equipment may need to be retuned or regularly occurring factors may
become apparent, sometimes necessitating mechanical repair or changes to operating
procedures.

One further vital difference between human and expert system operation is the size of
the control increment that is applied. Whilst the operator will generally wait until
sufficient deviation has occurred to justify a substantial move on the relevant control
element, the success of the expert system is based on its sensing the need for change
at an earlier time and consequently making a smaller adjustment to the controls. The
significance of this is that an amount of hysteresis in the control chain that may be
acceptable to the human operator (although he might have preferred an improvement)
becomes totally unacceptable to the expert system, since it may need to make several
corrective increments before overcoming the hysteresis backlash and actually achieving
an adjustment. This has led to BCI almost universally adopting the provision of a
dedicated feedback loop, where one did not previously exist, for any control parameter
which we wish to adjust. A particularly good example of this is kiln speed control,
where a normal pony motor driven speed adjustment may have a typical backlash of 2%
of speed, whilst a typical increment of speed applied by the expert system can be as low
as 0.1% of speed.
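
The point can be illustrated with a small simulation sketch (Python, illustrative only;
the 2% backlash and 0.1% increment are the figures quoted above, while the control gain
and the actuator model are assumptions):

    # Illustrative sketch: why small expert-system increments need a dedicated
    # feedback loop when the actuator has backlash (model and gain are assumed).

    def actuator_with_backlash(demand, position, band=2.0):
        """Return the new actuator position: it only moves once the demand has
        drifted outside the backlash band around the current position."""
        if demand > position + band:
            return demand - band
        if demand < position - band:
            return demand + band
        return position  # the move is lost inside the backlash

    def open_loop(increment=0.1, steps=10):
        demand, position = 90.0, 90.0
        for _ in range(steps):
            demand += increment            # expert system nudges the set point
            position = actuator_with_backlash(demand, position)
        return position                    # still 90.0: ten nudges achieved nothing

    def closed_loop(target=91.0, gain=0.5, band=2.0):
        demand, position = 90.0, 90.0
        for _ in range(200):               # dedicated loop keeps pushing the demand
            error = target - position
            demand += gain * error
            position = actuator_with_backlash(demand, position, band)
        return position                    # settles at the target despite backlash

    if __name__ == "__main__":
        print("open loop after 10 x 0.1% steps:", open_loop())
        print("closed loop on measured speed:  ", round(closed_loop(), 2))

With the open loop the whole 1% of demanded change is swallowed by the 2% backlash,
whereas the dedicated loop, acting on the measured value, keeps stepping the demand until
the actuator actually responds.
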

7. POTENTIAL BENEFITS

The purpose of a high level control system is to provide operating cost savings. In
general, within BCI the benefits have been as follows:-

- Fuel consumption reduced by between 0 and 5%

- Clinker production increased by between 0 and 5%

- Refractory life improved by between 5 and 25%

- Kiln exit NOx level reduced by between 10 and 60%

There is also, in some cases, evidence of improvements in the consistency of
clinker/cement quality and of improved clinker grindability.

The typical results from the Blue Circle installations up to the end of 1991 are
summarised in Table 5 and those from fifteen Holderbank installations, reported at the
1994 IEEE Conference, in Table 6.
TABLE 5

SUMMARY OF MAJOR BENEFITS OF CEMENT BASED HIGH LEVEL CONTROL

                                                    TYPICAL RANGE      BEST ACHIEVED

. Standard fuel consumption is
  substantially reduced                             -2.5% to -5%       -10%

. Clinker outputs can be increased over and
  above the equivalent to the reduced
  standard fuel consumption                         +2.5% to +5%       +10%

. Product quality is significantly improved
  and clinker grindability reduced                  +2.5% to +5%       +10%

. Milling costs are reduced in line with
  the improved product quality and reduced
  grindability                                      -7.5% to -15%      -30%

. Peak and average refractory temperatures,
  and associated cyclic thermal stresses,
  are reduced                                       -50°C to -100°C    -200°C

. Refractory life is increased                      "Best" 30% plus

. Kiln exit NOx levels with respect to both
  pre-LINKman and pre-NOx monitoring
  periods are reduced                               -25%               -50%

. Running times are improved                        80%                90%

IN ADDITION

. Kiln specific knowledge concerning both the process and process dynamics is greatly enhanced.

. Improved working practices can be developed.

. High level control superimposes a consistent approach to control and eliminates the normal shift
  variations.

. The system offers a powerful management data collection and logging facility.

. High level control opens up an opportunity for management to better manage the process and its
  operation.

TABLE 6

OVERALL RESULTS OF APPLICATION OF


HLC IN HOLDERBANK GROUP

Improvement in Clinker Uniformity 0 to 30%


Savings in Energy Consumption 0 to 3%
Increase in Production 0 to 5%
Savings in Refractory Consumption 0 to 30%
Reduction in NOx Emissions 0 to 20%
Average Savings in Energy:
Modern Plant 1.5%
Old Plant 3.9%
Average Increases in Production:
Modern Plant 1.0%
Old Plant 3.1%
It must be emphasised that the BCI data refers to periods of about a year after initial
installation. After this time, in a number of cases, performance has tended to fall off
over a number of months. Normally where this has occurred, performance has been
improved again relatively quickly following short periods of programme re-tuning,
operator re-training or plant tuning. As an example of this BCI data on system usage
in recent years is presented in Table 7.

Further examples of typical system operation are presented in Tables 8 and 9 and Figure 9.
Table 8 sets out information concerning the control parameters on the three sites,
whilst Table 9 gives an indication of the degree of control achieved on the NOx and
CGT at Atlanta and Ravena. These two sites represent very successful installations
with high system usage and steadily improving kiln operation. This does not mean that
the systems can be ignored. In early 1991, system utilisation at Atlanta dropped to 68
and 55% for kilns 1 and 2 respectively. Following a two week period of review and
tuning, the system performance improved to such a degree that over the following six
month period the high level control system was in operation for 94 and 87% of kiln run
time for the two kilns, with operator overrides to the meal feed, coal feed or fan
averaging approximately one every day and a half.

TABLE 7

BCI UTILISATION OF LINKMAN IN 1992/3

LINKman Utilisation as Percent of Kiln Run Time

Site        1992             1993 to May

A           61 to 66         45 to 48
B           65 to 70         50
C           52 to 63         74 to 78
D           89.4             89.4
E           64               85 to 90
F           73 to 79         51.7
G           79.5             No Data
H           76 to 78         No Data
I           85 to 90         88 to 95
J           No Data          No Data
K           Not Installed    90 to 95
TABLE 8

EXAMPLES OF HLC - 1

                          Atlanta              Ravena 1             F1

Installed                 1990                 1990                 1989

Control Parameters        NOx, O2, CGT,        NOx, O2, Amps,       NOx, O2, Amps,
                          CO, Amps             BET, CGT, CO         ST3T, CO

Control Items             Meal, Coal, Fan,     Coal, Fan, Meal,     Meal, Coal
                          K Speed              K Speed

LINKman use               85 to 95%            85 to 90%            50 to 60%

Operator Intervention:
  Overrides               1 to 2/day           ~1 per shift         10 to 15/day
  Set Points              S/Shift              Occasional           Frequent

Site F is an installation where more problems have been encountered and it can be used
as an example of loss of operator confidence. In the first two months of 1990, the
system was in use for just under 60% of kiln run time. A short training/tuning period
followed and usage rose to 80%, with an average of 81% being maintained over the
following ten months. In August 1991, there was a period of four days with 100%
operation without any operator intervention and a period of ten days with over 90%
utilisation with an average of one override per shift. This ended when, in one shift, the
operator kept the system on-line throughout but made 58 overrides. The operator who
followed on from this reported that he had to take LINKman off-line for the first two
hours of his shift, as the kiln was badly unstable. After two hours the system was put
back on line and no operator overrides took place for the remainder of that shift or the
following shift.

Figure 9 presents data from a site where the operators complained that LINKman was
not operating successfully. The system was switched off and the kiln run manually for
about three weeks. As can be seen from Figure 9, the wider spread of NOx and BET
values over the manual operation period suggests poorer control at this time.
TABLE 9

DEGREE OF CONTROL ACHIEVED AT ATLANTA AND RAVENA

[Histograms of the distance of the NOx signal (ppm) and of the chain gas temperature
(deg F) from their set points, for Atlanta kiln 1 and Ravena kilns 1 and 2.]

Figure 9

MASONS 1993 - NOx AND BET: COMPARISON OF CONTROL-ON AND CONTROL-OFF PERIODS

[Histograms of NOx and BET values by mid-point range for the control-off periods (2185)
and the control-on periods (1792); the control-off distributions show the wider spread
referred to above.]
8. KILN CONTROL

8.1 Overview

The only requirement would seem to be to keep a constant feed rate of raw meal of
constant composition into the top of the preheater, to burn a coal of constant
composition at a constant rate with sufficient air for proper combustion, and to keep
the kiln at a fixed speed, so that it produces a good quality clinker at a constant rate.
This statement is true; the sole problem is the second word, 'only', and Table 10 shows
some of the more common reasons for kiln instabilities. In practice, the kiln operator
has to control a kiln which is in an almost continual state of instability by altering the
inputs available to him:-

a) Meal feed rate.
b) Fuel feed rate.
c) ID fan setting.
d) Kiln speed.

The effects of changes in these inputs are summarised in Table 11. Further alterations
will also be required to associated ancillary equipment, such as the raw mill, kiln dust
return and clinker cooler, which will also affect the operating conditions within the kiln.
TABLE 10

SOME REASONS WHY KILN DOES NOT REMAIN IN OPTIMUM


BURNING CONDITION

1. Slurry chemical composition changes

2. Slurry physical composition (residue) changes

3. Slurry moisture content changes

4. Slurry flowrate to kiln changes

5. Coal chemical composition changes

6. Coal ash content changes

7. Coal moisture content changes

8. Coal physical composition changes (residue)

9. Coal flow rate varies

10. Heat loss from kiln changes (e.g. rain on shell)

11. Amount of in-leaking air changes (e.g. inlet seal gap
changes, outlet seal gap changes, clinker ring builds,
mill ring builds)

12. Kiln speed changes

13. Coating falls away from kiln lining

14. Bricks spall or wear

15. Production of dust in kiln changes

16. Flow of air through kiln changes, e.g. fan blades coat
with dust

17. Temperature of secondary air changes, e.g. clinker size


change - waste cooler gas flow rate changes, cooler
chamber fan air changes, bed depth in cooler changes -
amount of air leaking from cooler chamber changes
TABLE 11

HOW ARE BZT, O2 AND BET CONTROLLED?

THERE ARE ONLY 4 INDEPENDENT CONTROL PARAMETERS,
i.e. COAL, FEED, DAMPER & SPEED

WHAT EFFECT DO THESE HAVE ON THE PROCESS?

1. +ve COAL change gives     -ve O2  (combustion)
                             +ve BZT (later, due to thermal inertia)
                             +ve BET (more heat in kiln)

2. +ve FEED change gives     -ve O2  (decarbonation)
                             -ve BZT (heat absorbed by meal)
                             -ve BET (heat absorbed by meal)

3. +ve DAMPER change gives   +ve O2  (more air)
                             -ve BZT (lower flame temperature)
                             +ve BET (poorer heat transfer to feed; heat shifts from BZ to BE)

4. KILN SPEED governs feed residence time:
   Decrease speed for a low BZT
   Raise speed when feeding kiln (constant degree of fill)
   Generally speed proportional to feed

The operator has to look at the information available to him, decide what is happening within the
kiln and, if necessary, alter one or more inputs in order to optimise the performance of the kiln.
The process signals which may be available to him are summarised in Table 12 and from these he
is likely to apply a rule of thumb, such as:-

a) The kiln temperatures appear too high at the front of the kiln and the oxygen looks to be
OK.

b) Then I will increase the feed a bit and maybe decrease the fuel slightly.

This statement defines a temperature parameter and a gas parameter, but the estimation of
'high' and 'OK' and the size of the changes will be operator specific or even dependent on
the mood of the operator on a particular day. Nevertheless, taking these two parameters, it
is possible to observe an operator in action and define a set of rules which indicate his
average action for a combination of conditions (for instance HIGH, OK, LOW) for each
parameter, and an example of this is set out in Table 13.

TABLE 12

AVAILABLE PROCESS SIGNALS

a) KILN SYSTEM

Kiln gas analysis: 02, NOx, CO, SO2


Burning zone temperature Pyrometer
Kiln drive power
Mid kiln temperature
Kiln back end temperature
Preheater gas analysis: 02, NOx, CO
Preheater temperatures
Preheater suctions
Ex. precip. gas analysis: 02, NOx, CO, SO2
b) COOLER

Under-grate pressures
Grate speed
Secondary air temperature
Clinker exit temperature
Waste gas temperature
Plate temperatures
Cooler fans % of full capacity

This particular rule block has two inputs (kiln temperature and oxygen level) and three
conditions for each, and so the total number of actions which have to be defined, as
shown in Table 13, is:-

3 x 3 = 9

TABLE 13

EXAMPLE OF TWO INPUT/TWO OUTPUT RULEBLOCK

If kiln temp high and O2 high  Then +2 tonnes feed and 0 fuel
If kiln temp high and O2 OK    Then +1 tonne feed and -0.25 tonnes fuel
If kiln temp high and O2 low   Then 0 feed and -0.5 tonnes fuel
If kiln temp OK and O2 high    Then +0.5 tonnes feed and +0.2 tonnes fuel
If kiln temp OK and O2 OK      Then 0 feed and 0 fuel
If kiln temp OK and O2 low     Then -0.5 tonnes feed and 0 fuel
If kiln temp low and O2 high   Then 0 feed and +0.5 tonnes fuel
If kiln temp low and O2 OK     Then -1 tonne feed and +0.25 tonnes fuel
If kiln temp low and O2 low    Then -2 tonnes feed and 0 fuel
This could be expanded to consider VERY HIGH and VERY LOW levels as well, in which
case a 5 by 5 rule block would need 25 actions to be defined, or a 7 by 7 rule block
would need 49 actions defining.

Although the experienced operator may not have a rule block firmly fixed in his mind
this is the process he goes through and this must be mimicked in the EXPERT system.

The original system at Hope Works estimated the burning zone temperature and the
oxygen level (two parameters) and used a 7 x 7 rule block to decide on the action size.
Very quickly, however, it was decided that for optimum control three parameters had
to be considered in assessing kiln condition (front-end temperature, back-end
temperature and oxygen level). To use three parameters with a 7 condition rule block
would mean that 343 actions would have to be defined. As this is impractical, BCI (and
LINKman) have standardised on a three input, three condition (HIGH, OK, LOW) rule
block which requires 27 actions to be defined.
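
As an illustration, the rule block of Table 13 can be written as a simple lookup keyed on
the two assessed conditions; the sketch below (Python, illustrative only) uses the feed and
fuel changes of Table 13 and shows the rule count arithmetic:

    # Sketch of the two input rule block of Table 13 as a lookup table.
    # Conditions are HIGH / OK / LOW for kiln temperature and oxygen;
    # actions are (feed change, fuel change) in tonnes per hour.
    RULEBLOCK = {
        ("HIGH", "HIGH"): (+2.0,  0.00),
        ("HIGH", "OK"):   (+1.0, -0.25),
        ("HIGH", "LOW"):  ( 0.0, -0.50),
        ("OK",   "HIGH"): (+0.5, +0.20),
        ("OK",   "OK"):   ( 0.0,  0.00),
        ("OK",   "LOW"):  (-0.5,  0.00),
        ("LOW",  "HIGH"): ( 0.0, +0.50),
        ("LOW",  "OK"):   (-1.0, +0.25),
        ("LOW",  "LOW"):  (-2.0,  0.00),
    }

    def rule_action(kiln_temp, oxygen):
        """Return the (feed, fuel) change for a pair of assessed conditions."""
        return RULEBLOCK[(kiln_temp, oxygen)]

    # A 7-condition block over three inputs would need 7**3 = 343 entries,
    # which is why BCI standardised on 3 conditions (3**3 = 27 rules).
    print(rule_action("HIGH", "OK"))   # (1.0, -0.25)
    print(7 ** 3, 3 ** 3)              # 343 27
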

Having established the thought process of the kiln operator, it is now necessary to
transform this into a process that can be reliably followed and improved upon by a
computer. At this point, it should be emphasised that the LINKman system is a
computer based process control unit which is intended to improve kiln control by
adjusting the process input set points (meal and fuel feed rates, ID fan and kiln speed)
over a relatively short timescale, under the supervision of the operator. As such, it can
be seen to be an autopilot performing the majority of the routine kiln optimisation tasks
within the limitations imposed by the available instrumentation. The kiln and computer
remain under the supervision of the operator, who must maintain an up to date
awareness of the process conditions. The operator is able to adjust set points, make
additional changes to the process inputs or take LINKman off-line if he considers such
action necessary, but in order to react correctly he needs to understand the principles
by which LINKman will react. When LINKman and the operator react as a team the
results have always been good and, consequently, training of, and feedback from, the
operators is an essential part of installing a LINKman system.

8.2 Requirements of a Control Strategy

Any control strategy must be able to achieve a number of separate objectives, namely:-

i) Read plant signals and convert them into stable signals.

ii) Convert the stable signal into a form to which the computer can put
meaning.

iii) Use these to estimate the conditions within the process.

iv) Decide on the extent of the process changes necessary to optimise plant
performance.
v) As necessary, modify the process inputs in order to drive the process
towards the required state.

Item iv) is largely derived from the rule block approach, but the other objectives are
equally important in the development of a reliable working strategy and each will be
discussed in the following sections.

8.2.1 Plant Signals

The plant signals that are used by the computer are the same signals that are on display
to the operator in the conventional process control equipment. The most important
signals for the LINKman system are those used to assess kiln conditions, as previously
indicated in Table 12. The first requirement is to decide whether a signal can be
trusted. This can be done in two ways:-

a) Where two measurements are taken of a similar signal, for instance kiln
exit and preheater exit NOx signal, the two signals can be continually
compared. When there is a significant deviation in the relationship
between the two signals, then either the computer can sound an alarm and
require the operator to tell it which signal to use, or the computer can
automatically select whichever signal it believes to be correct. Both
approaches are used on various BCI sites.

b) Where only one measurement exists, define value boundaries outside of


which it is thought probable that the signal is false. If these values are
exceeded, then LINKman will either move to an alternative signal or
control approach, or if no alternative exists, sound an alarm and turn
itself off-line as it does not have sufficient reliable information to make
a sensible decision on what action to take.
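
A minimal sketch of these two plausibility checks is given below (Python, illustrative
only; the deviation limit, expected ratio and boundary values are assumptions that would
be site specific):

    # Sketch of the two signal-validation approaches described above.

    def check_redundant(kiln_exit_nox, preheater_exit_nox,
                        expected_ratio=1.0, max_ratio_drift=0.3):
        """Compare two measurements of a similar quantity; flag a fault when the
        relationship between them drifts too far from its expected value."""
        if kiln_exit_nox <= 0:
            return False
        ratio = preheater_exit_nox / kiln_exit_nox
        return abs(ratio - expected_ratio) <= max_ratio_drift

    def check_bounds(value, low, high):
        """Single-measurement check: outside these boundaries the signal is
        assumed false and an alternative signal or an alarm must be used."""
        return low <= value <= high

    if __name__ == "__main__":
        print(check_redundant(900.0, 870.0))         # True  - signals agree
        print(check_redundant(900.0, 300.0))         # False - relationship broken
        print(check_bounds(1.2, low=0.1, high=6.0))  # True  - O2 value plausible
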

Having established that the signals are reliable, the instantaneous signal will normally
be too variable for use in deciding control actions, as the ‘noise’ and short term signal
spikes would cause the control strategy to overreact. Consequently, the raw signal is
averaged in order to remove the short term signal noise. The time period over which
each individual signal is smoothed can be selected or adjusted through the programming
level by the supervising engineer at any time and in general should be kept as short as
practical, but must be sufficient to eliminate all the short term process signal variation.
This means that the signal used by the computer at any moment will be different from
the instantaneous live value and so the signal trend used by the computer will be slightly
behind that of the live signal. Normally, this will not be significant, but on occasions
the smoothing constant will have to be adjusted to counter a change in the efficiency
of part of the plant (for instance a problem within the clinker cooler). Very
occasionally, where a severe kiln cycle has developed, the effectiveness of the kiln
control strategy can be reduced and it then becomes necessary to take the system off-
line. A programme modification will then normally prevent this recurring.
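
One common way of performing this averaging is an exponential filter, sketched below
(Python, illustrative only; the smoothing constant alpha stands in for the adjustable time
period described above):

    # Sketch of smoothing a noisy raw signal before it is used for control.
    # Smaller alpha = longer effective averaging time, but a larger lag
    # behind the live signal.

    def smooth(raw_values, alpha=0.2):
        average = raw_values[0]
        history = []
        for raw in raw_values:
            average += alpha * (raw - average)   # exponentially weighted average
            history.append(average)
        return history

    if __name__ == "__main__":
        noisy_nox = [1000, 1250, 980, 1020, 1600, 990, 1010, 1005]  # ppm, with spikes
        print([round(v) for v in smooth(noisy_nox)])
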
8.2.2 Signal Processing

For use within the control strategy further processing of the plant signals is necessary
so that the computer can understand how far away an averaged value is from the target
value and how serious this may be. This processing is called NORMALISATION and
involves defining a number of key points for the plant signal that is to be normalised.

The first key point is the target level, or set point. When the signal is at this level it
is given an equivalent value of zero. Next, levels below the set point at which the signal
is considered to be low (equivalent to a normalised value of -1), very low (-2) and very,
very low (-3) are selected, together with values above the set point at which the signal
is considered to be high (+1), very high (+2) and very, very high (+3). Using this method
a relationship is set up by which any value of a process signal can be described by a
number between -3 and +3 where:-

0 (zero) indicates that the process signal is at the set point.


A negative number indicates that it is below the set point.
A positive number indicates that it is above the set point.
The larger the number, the further it is away from the set point.

It is not possible for a normalised number to exceed + or -3.

An example of NOx normalisation is shown in Figure 10.

Rather than define these key points as absolute numbers, they are expressed either as
percentages of the set point, or as the set point plus or minus selected amounts, as shown
in Tables 14(a) and 14(b). This means that when a set point is changed the normalisation
key points will also change automatically, maintaining the set relationship (Figure 11).
The normalisation relationships can be set to produce the required definition of kiln
conditions as shown in Figure 12. The normalisation relationship in use for a particular
signal can be redefined instantly, but such action should be taken with care as it would
modify the entire control strategy. An example of changing the normalisation
parameters is also shown in Figure 12.

The gradient of the process signal is also used within the control strategy and this is
normalised in a similar way to again give a normalised signal whose value would also
vary between -3 and +3. For instance a gradient on the smoothed NOx signal equivalent
to a reduction in NOx level of 5ppm per minute could be defined as a normalised value
of -1, but again the larger the value, the greater the rate of change.
Figure 10 Normalisation, using NOx as an example
[NOx (ppm) plotted against normalised value from -3 to +3.]

Figure 11 Normalisation - 1: Effect of a change to the set point
[Original and modified set point curves plotted against normalised value.]

Figure 12 Normalisation - 2: Effect of a change to the parameters
[Original and new normalisation curves plotted against normalised value from -3 to +3.]
Within each LINKman system, the following signal values are normally displayed for the
operator's information:-

Averaged (smoothed) signal.


Normalised value of the averaged signal.
Normalised value of the gradient.
Set point.
Key points for -2, -1, +1 and +2 values.

TABLE 14 (a)

NOx NORMALISATION

1. Extremely low   Very low   Low   Set Point   High   Very High   Extremely High

2.      -3            -2       -1       0        +1        +2           +3

3.                                    1000

4.      70            80       90      100       115       130          145

5.     700           800      900     1000      1150      1300         1450

Notes: 1. Define key points
       2. Normalised values
       3. Choose set point (ppm NOx)
       4. Decide values of key points as % of set point
       5. NOx values of key points = (% x SP) / 100

TABLE 14 (b)

NOx NORMALISATION

1. Extremely low   Very low   Low   Set Point   High   Very High   Extremely High

2.      -3            -2       -1       0        +1        +2           +3

3.                                     2.5

4.     -1.5          -1.0     -0.5      0        +0.8      +1.6         +2.4

5.      1.0           1.5      2.0     2.5       3.3       4.1          4.9

Notes: 1-3 as Table 14 (a)
       4. Decide values of key points (set point +/- number)
       5. Value of oxygen (%) at key points
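
A minimal sketch of the normalisation itself is given below, using the NOx key points of
Table 14 (a); linear interpolation between key points and clamping at +/-3 are assumptions
made for illustration:

    # Sketch of normalisation: map a raw value onto the -3..+3 scale defined by
    # the key points of Table 14 (a).  Linear interpolation between key points
    # is an assumption; the exact interpolation used by LINKman is not stated.

    NOX_KEY_POINTS = {-3: 700, -2: 800, -1: 900, 0: 1000, 1: 1150, 2: 1300, 3: 1450}

    def normalise(value, key_points):
        levels = sorted(key_points)                        # -3 .. +3
        if value <= key_points[levels[0]]:
            return float(levels[0])                        # clamp at very, very low
        if value >= key_points[levels[-1]]:
            return float(levels[-1])                       # clamp at very, very high
        for lo, hi in zip(levels, levels[1:]):
            v_lo, v_hi = key_points[lo], key_points[hi]
            if v_lo <= value <= v_hi:
                return lo + (value - v_lo) / (v_hi - v_lo)

    if __name__ == "__main__":
        print(normalise(1000, NOX_KEY_POINTS))   # 0.0 - at the set point
        print(normalise(1075, NOX_KEY_POINTS))   # 0.5 - half way towards 'high'
        print(normalise(1941, NOX_KEY_POINTS))   # 3.0 - clamped at very, very high
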
8.2.3 Assessment of Kiln Conditions

Once the process signals have been converted into a form that the computer can
understand (NORMALISED) it is necessary to develop a method by which the computer
can assess the internal condition of the kiln. A computer ‘thinks’ in terms of numbers
and so three equations are normally developed which indicate kiln stability. The first
assesses the temperature conditions in the burning zone, whilst the second considers the
available oxygen and the third equation estimates the thermal condition at or near the
back-end of the kiln. These equations are referred to as the control functions - burning
zone function, oxygen function and back-end function - and are used as the input to the
rule blocks. These functions can be defined in such a way that the values of each will
either fall between -1 and +1 or between -3 and +3, although the standard format is the
former, with:-

-1 being equivalent to the LOW condition.
0 being equivalent to the OK condition.
+1 being equivalent to the HIGH condition.

So, as in the normalisation procedures, a negative value indicates a low or cold condition,
whilst a positive value indicates a high or hot condition.

The equation that is used to calculate the basic function can utilise any normalised
signal that is considered to give useful information concerning the condition of the area
under consideration. As an example of this the burning zone function on all BCI works
uses the NOx signal as a major component, but on some sites the kiln drive amps and/or
burning zone pyrometer signal are also used as inputs. It is also possible to have more
than one definition of the function, each of which can be used under defined conditions.
The correct function can be automatically selected by the control programme, or
selected by the supervisor.

A typical burning zone control function is as follows:-

Function (BZT) = k1 * normalised NOx
               + k2 * normalised NOx gradient
               + k3 * normalised Amps
               + k4 * normalised Amps gradient

where k1, k2, k3 and k4 are all different constants that can be adjusted in the
programming mode in order to adjust the significance of each part of the
calculation. It is normal for k1 + k2 = 1.0 and k3 + k4 = 1.0.

Considered individually, the significance of the various parts of this function are as
follows:-

a) NORMALISED NOx: This is an indication of how far away the computer


averaged NOx signal is from the set point and whether the front end of
the kiln is hot or cold. As explained earlier, this will be slightly behind
the live plant signal.

b) NORMALISED NOx GRADIENT: This is an indication of the direction in
which the NOx signal is trending and hence of whether the kiln is
warming or cooling. A cold kiln that is warming is obviously a less serious
situation than a cold kiln that is still cooling.

c) NORMALISED AMPS: Amps give an indication of the stickiness of the
material in the kiln. In general, the stickier the material, the hotter the
kiln, but this signal covers the condition of a larger portion of the kiln.
It is, however, affected by changes in kiln speed, breakaway of coating and
a number of mechanical constraints and so must be treated with care.
Where this signal is used the set point will normally be derived from a
long term average of the raw signal.

d) NORMALISED AMPS GRADIENT: As with the NOx gradient, this gives an
indication of whether the kiln is warming or cooling. Where the change
has started from the rear of the kiln this signal will normally give an
earlier indication than the NOx gradient.
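
A sketch of such a burning zone function is shown below (Python, illustrative only; the
values chosen for k1 to k4 are assumptions, subject to k1 + k2 = 1.0 and k3 + k4 = 1.0
as stated above):

    # Sketch of a burning zone control function built from normalised signals.
    # The weightings k1..k4 are illustrative values only.

    def bzt_function(norm_nox, norm_nox_grad, norm_amps, norm_amps_grad,
                     k1=0.7, k2=0.3, k3=0.8, k4=0.2):
        """Negative result = burning zone cold, positive = burning zone hot.
        k1 + k2 = 1.0 and k3 + k4 = 1.0, as in the text above."""
        return (k1 * norm_nox + k2 * norm_nox_grad
                + k3 * norm_amps + k4 * norm_amps_grad)

    if __name__ == "__main__":
        # NOx well below set point but rising slightly, amps a little low:
        print(round(bzt_function(-1.5, 0.05, -0.33, 0.18), 2))   # about -1.26, i.e. cold
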

8.2.4 Use of Rule Blocks

Most of the actions that the computer will make are based on a series of simple rules
of thumb and hence these rules make up the heart of the LINKman system. These can
be thought of as basically being the operator’s rule of thumb transformed into a form
that the computer can understand. Where the operator may think:-

“the kiln is very hot, but the oxygen is a little low, let’s put some feed on and
take some coal off”

the computer will see:

BZT function high, OX function a little low; ACTION: Feed on, coal off

In this way a number of rules can be built up which describe the recommended actions
at a number of defined conditions. The size of these actions was initially decided
through consultation with staff and operators at individual sites, and a typical example
for a two input (BZT and OX), two output (feed and fuel) ruleblock was shown in
Table 13. Further experience at a number of sites showed that the rule block could be
simplified to give outputs as a percentage of a maximum value. This meant that a
ruleblock could be used at any site, with the size of the general actions being determined
by an easily adjustable scaling factor in the course of the commissioning period. This
revised ruleblock is shown in Table 15.
TABLE 15

EXAMPLE OF TWO INPUT/TWO OUTPUT RULEBLOCK

If kiln temp high and O2 high  Then +100% feed and 0% fuel
If kiln temp high and O2 OK    Then +50% feed and -40% fuel
If kiln temp high and O2 low   Then 0% feed and -80% fuel
If kiln temp OK and O2 high    Then +40% feed and +30% fuel
If kiln temp OK and O2 OK      Then 0% feed and 0% fuel
If kiln temp OK and O2 low     Then -10% feed and -10% fuel
If kiln temp low and O2 high   Then 0% feed and +100% fuel
If kiln temp low and O2 OK     Then -100% feed and +25% fuel
If kiln temp low and O2 low    Then -75% feed and 0% fuel

As can be seen, the computer has now been told what to do at a number of specified
points, such as BZT = +1, OX = 0, or BZT = +1, OX = +1. In almost all cases the actual
values of BZT, OX and BET will not fall exactly on the integer values, but will fall
between the specified points. For example the actual values might be:-

0.67 for BZT function


0.4 for OX function
-0.1 for BET function

In this case the computer system would consider the eight most relevant rules. In the
above case these would be:-

BZT = +1   OX = +1   BET = 0
BZT = +1   OX = +1   BET = -1
BZT = +1   OX = 0    BET = 0
BZT = +1   OX = 0    BET = -1
BZT = 0    OX = +1   BET = 0
BZT = 0    OX = +1   BET = -1
BZT = 0    OX = 0    BET = 0
BZT = 0    OX = 0    BET = -1

The actual values can be seen to fall within a three dimensional box bounded by the BZT
values of 0 and +1 on one axis, the OX values of 0 and +1 on the second axis and
the BET values of 0 and -1 on the third axis. The programmes then calculate how
close the actual values are to each of the defined corner points set out above and
apply a portion of the action defined for each of these rules/corner points. The closer
the values of BZT, OX and BET are to a particular point, the greater is the proportion
of this particular recommended action that will apply. For ease, an example of this for
a two input, two output ruleblock is shown in Figure 13, whilst Figures 14 and 15 show
how these can be built into control surfaces for each output.
Fig 13 Changes from rule block - simple 2 in/2 out ruleblock
[Interpolated feed and coal changes plotted against the BZT condition.]

Figure 14 Control Surfaces - 1
[The rules can be represented two dimensionally to form a contour map; the rules specify
9 points on the contour map and LINKman calculates the feed and coal changes between
rules. FEED surface, BZT and O2 axes.]

Figure 15 Control Surfaces - 2
[As Figure 14, but showing the COAL surface.]
For a three input system the same principles apply, but the control surface becomes
three dimensional, with the BET function becoming the z axis.
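
The interpolation between rule points can be sketched as follows for the two input case,
using the percentage ruleblock of Table 15 (Python, illustrative only; bilinear
interpolation is assumed here as a standard scheme, and the three input case adds the
BET axis in the same way):

    # Sketch of interpolating ruleblock actions between the defined rule points,
    # using the two input / two output percentage ruleblock of Table 15.
    # Conditions: -1 = LOW, 0 = OK, +1 = HIGH.  Outputs: (% feed, % fuel).

    import math

    RULES = {
        (+1, +1): (+100,    0), (+1, 0): ( +50, -40), (+1, -1): (  0, -80),
        ( 0, +1): ( +40,  +30), ( 0, 0): (   0,   0), ( 0, -1): (-10, -10),
        (-1, +1): (   0, +100), (-1, 0): (-100, +25), (-1, -1): (-75,   0),
    }

    def interpolate(bzt, ox):
        """Bilinear interpolation between the four rule points that surround the
        actual (bzt, ox) values."""
        b0, o0 = math.floor(bzt), math.floor(ox)            # lower corners of the box
        b0, o0 = max(-1, min(0, b0)), max(-1, min(0, o0))   # keep the box inside the grid
        fb, fo = bzt - b0, ox - o0                          # fractional positions
        feed = fuel = 0.0
        for db, wb in ((0, 1 - fb), (1, fb)):
            for do, wo in ((0, 1 - fo), (1, fo)):
                f, c = RULES[(b0 + db, o0 + do)]
                feed += wb * wo * f
                fuel += wb * wo * c
        return feed, fuel

    if __name__ == "__main__":
        feed_pct, fuel_pct = interpolate(0.67, 0.4)     # part way between OK and HIGH
        print(round(feed_pct, 1), round(fuel_pct, 1))   # roughly +52.2% feed, -12.1% fuel
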

The output from the ruleblock exists as a percentage figure for each output to the
plant. This then has to be changed into an actual new set point. As indicated earlier,
a scaling factor exists for each component of the ruleblock output to modify the output
into a realistic value. It has become the general practice to relate this value to the
average value of the output, hence the new set point of, for instance, the fuel feed rate
will be:-

New fuel rate = current fuel rate + (ruleblock coal output x fuel scaling factor x coalav/100)

As a further example of this, if the coal set point is 38% and the average coal set point
value was 40% and the coal scaling factor was 0.04, then for a ruleblock fuel change
output of 33%, the new set point that would be sent out to plant would be:-

38 + (33 x 0.04 x 40/100) = 38.53
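
The same calculation can be written as a small helper (a sketch only; the figures are
those of the worked example above):

    # Sketch of converting a ruleblock percentage output into a new set point.

    def new_set_point(current_sp, ruleblock_output_pct, scaling_factor, average_sp):
        """New SP = current SP + (ruleblock % output x scaling factor x average / 100)."""
        return current_sp + ruleblock_output_pct * scaling_factor * average_sp / 100.0

    # Worked example from the text: 38 + (33 x 0.04 x 40/100)
    print(round(new_set_point(38.0, 33.0, 0.04, 40.0), 2))   # 38.53
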

The appropriate raw meal and damper changes are calculated in a similar fashion, but
using different scaling factors.

Ideally, changes in kiln speed will be ratioed to changes in the meal feed rate.

It has been found that by relating changes to average input rates in this way, the
general control concepts transfer from one kiln to another with greater ease.

8.3 Control Strategy

Each of the requirements set out at the start of Section 8.2 have now been reviewed and
by combining these routines a programme is produced which will:-

Read the process signals.
Check that the process signals are usable.
Average the signals to eliminate spikes.
Normalise the signals into a form the programme understands.
Calculate the control functions.
Feed the control functions into the ruleblock.
Extract the recommended changes from the ruleblock.
Convert these into recommended changes to the feeder and fan settings.
Send the new settings out to the plant.
These actions form the basis of the control package and an example of the control logic
is set out in Table 16, but they will not, by themselves, give effective control. The
effective control programme must also take account of the following factors:-

i) On what timescale is an action required. The control programme


normally runs once a minute, but if an action was made on every run
there would not have been time for the previous action to have an effect.
Therefore, an action frequency needs to be decided. In general, the
greater the time lag of the process, the lower the action frequency should
be. In consequence the action frequency of a precalciner kiln may be
every three minutes, whilst that of a preheater kiln may be every five
minutes. A long dry kiln may perform an action every eight minutes,
whilst a long wet may be anywhere between ten and fifteen minutes.

TABLE 16

SIMPLE CONTROL PROGRAMME

1. Start programme

2. Check instruments

3. Process raw signals

4. Calculate normalised values

5. Calculate control functions

6. From ruleblock, calculate % changes

7. Output to plant

8. Output to displays
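
The action frequency consideration in item i) above can be sketched as follows
(Python, illustrative only; the intervals are those quoted in item i), with twelve minutes
taken as a representative value within the ten to fifteen minute range for a long wet kiln):

    # Sketch of the action-frequency idea: the programme runs every minute, but
    # set point changes are only sent at an interval matched to the process lag.

    ACTION_INTERVAL_MIN = {"precalciner": 3, "preheater": 5, "long dry": 8, "long wet": 12}

    def should_act(minutes_since_last_action, kiln_type):
        """True when enough time has passed for the previous action to take effect."""
        return minutes_since_last_action >= ACTION_INTERVAL_MIN[kiln_type]

    for minute in range(1, 7):
        print(minute, should_act(minute, "precalciner"), should_act(minute, "preheater"))
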

ii) Do 'special' conditions exist which require an immediate non-ruleblock
action? A number of general situations have been identified which apply
to most sites and on occasions a site specific situation can be identified.
These actions are totally independent of the rule blocks and are triggered
as soon as predefined conditions are met; when a special action is
triggered the rule blocks are by-passed.

iii) Should the ruleblock recommended action be modified in the light of other
recent actions?

iv) If the actual value of a signal is outside of the range covered by the rule
block, can the proposed action be modified to take account of this?

v) Is the recommended new set point reasonable? If a recommended set
point is beyond the acceptable minimum or maximum values, then the
recommended action has to be changed to take account of this. This is
mainly accounted for by having more than one ruleblock. The ruleblocks
typically used are as follows:-

General: modifies feed, fuel, fan and kiln speed.

Topfeed: used when at the maximum required feed rate. Modifies


fuel and fan only.

Topfan: Used when at the maximum available fan level. Modifies
fuel, feed and kiln speed.

Stable: When at Topfeed and all the control functions indicate that
the kiln is close to its ideal state, modifies fuel only.

At some sites, further rule blocks exist to cover the situation where the
firing system can be max’d out.

vi) Are the selected target values for NOx, oxygen and back-end temperature
realistic for the optimum operation of the kiln. Are measurements
available which will allow these targets to be automatically modified to
give improved targets. The answer will normally be ‘yes’ although the

measurements may be site specific. Examples are:-

clinker free lime


clinker litre weight
crystal size (ONO method)
clinker SO3 level
clinker alkali level
kiln or system gas exit SO3 level
kiln or system gas exit CO level
raw meal chemistry (LSF or SR)

When all these extra considerations have been taken into account, the working
strategy becomes significantly more complex, as set out in Table 17, but at last we have
a potentially effective strategy. All that is now required is for it to be tuned and, as
previously indicated, to be believed in by all the relevant works personnel.
TABLE 17

FULL CONTROL PROGRAMME

1. Start programme

2. Check if operator override or set point


change has been requested

3. Check instruments

4. If instruments u/s turn off line or alarm

5. Check if set point needs to be changed

6. Process raw signals

7. Calculate normalised values

8. Is a special action required now? If so, calculate action and skip to 12

9. Calculate control functions

10. Select most appropriate ruleblock

11. From ruleblock, calculate % changes

12. Is recommended output sensible? If not


modify appropriately

13. Scale output for individual process

14. Is action required now

15. Output to displays

16. Store and calculate data needed for next


programme run
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

Module 6

Section 11

Principles of Process Control


PRINCIPLES OF PROCESS CONTROL
An introduction to Process Control Engineering in the cement industry

CONTENTS

1 Introduction

2 Reading Process Control Flowsheets

3 Physical Implementation

4 Automatic Control

5 Actuators & Drives

6 Sensors

7 Analysers

8 Future Trends in Process Control

8.1 Ensuring Compliance with Safety, Health & Environmental Standards


8.2 Stabilising Plant Operation
8.3 Making the Operator's Job Easier
8.4 Assisting with Process Optimisation
8.5 Making Quality Control Easier
8.6 Making the Manager's Job Easier
8.7 Making Plant Maintenance Easier

1 Introduction

This paper gives a brief introduction to process control engineering in the cement
industry, aimed at technical personnel who are not specialists in the electrical or
process control engineering fields.

Automation and instrumentation are fast developing areas that are critical for the
competitiveness of a cement plant. However, there are certain basic principles that
remain unchanged with time, such as safety, reliability, accuracy and maintainability.

The following subjects are discussed:

- What do the symbols on a process control flowsheet mean?

- How is the design on the flowsheet physically implemented?

- What are the benefits of automation?

- What is PID control?

- How does the output from a control loop regulate the process?

- What is the current practice with sensors and analysers?

- Where is process control going?

2 Reading Process Control Flowsheets

The process control flowsheet is the starting point for understanding the monitoring
and control of the plant. It is important to understand the terminology, some of which
is commonly confused.

Quite often reference will be made to the Process Flowsheet. Whilst this is similar to
the Process Control Flowsheet, it only gives a schematic view of the process, together
with essential process data such as flowrates and machinery ratings, etc. The Process
Control Flowsheet gives a similar schematic view of the plant but with symbols
showing the different types of measurement and control employed. It is also often
referred to as the P+I (Process and Instrumentation) Diagram. Commonly two
symbolic standards are used on the flowsheets. These are:

• ANSI/ISA-S5.1–1994

• BS 1646 : Part 1 : 1979 / ISO 3511/1 : 1977


TYPICAL DCS PROCESS AND INSTRUMENTATION TAG NUMBERS FOR FLOW SHEETS

Typical DCS tag number: FQR 1025
• First letter (measured variable): F = Flow
• Second letter (modifier): Q = Totalise
• Third letter (output function): R = Recorder

Other typical DCS tag numbers:
• TRAH 2145 – Temperature Recorder, Alarm High
• SIAL 256 – Speed Indicator, Alarm Low
• TIC 2110 – Temperature Indicator Controller
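As a simple illustration of how these tag letters decode, the sketch below maps the first and succeeding letters of a tag to their meanings. The abbreviated letter dictionaries cover only the examples above (see the full ISO and ISA tables that follow); the function and variable names are my own, not part of any standard or library.

```python
# Minimal sketch: decoding ISA-style instrument tags.
# The dictionaries below cover only the letters used in the examples above.

MEASURED = {"F": "Flow", "T": "Temperature", "S": "Speed", "P": "Pressure", "L": "Level"}
SUCCEEDING = {
    "Q": "Totalise",
    "R": "Recorder",
    "I": "Indicator",
    "C": "Controller",
    "A": "Alarm",
    "H": "High",
    "L": "Low",
}

def decode_tag(tag: str) -> str:
    """Return a readable description of a tag such as 'FQR' or 'TRAH'."""
    first, rest = tag[0], tag[1:]
    parts = [MEASURED.get(first, "?")]
    parts += [SUCCEEDING.get(letter, "?") for letter in rest]
    return " ".join(parts)

for example in ("FQR", "TRAH", "SIAL", "TIC"):
    print(example, "->", decode_tag(example))
# FQR  -> Flow Totalise Recorder
# TRAH -> Temperature Recorder Alarm High
# SIAL -> Speed Indicator Alarm Low
# TIC  -> Temperature Indicator Controller
```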


BS 1646: Part 1: 1979

ISO 3511/1: 1977

ISO Letter codes for identification of instrument functions

Letter | Measured or initiating variable (first letter) | Modifier | Display or output function (succeeding letter)
A | – | – | Alarm
B | – | – | –
C | – | – | Controlling
D | Density | Difference | –
E | All electrical variables 2) | – | –
F | Flow rate | Ratio | –
G | Gauging, position or length | – | –
H | Hand (manually initiated) operated | – | –
I | – | – | Indicating
J | – | Scan | –
K | Time or time programme | – | –
L | Level | – | –
M | Moisture or humidity | – | –
N | User's choice 3) | – | –
O | User's choice 3) | – | –
P | Pressure or vacuum | – | –
Q | Quality 2) (for example analysis, concentration, conductivity) | Integrate or totalise | Integrating or summating
R | Nuclear radiation | – | Recording
S | Speed or frequency | – | Switching
T | Temperature | – | Transmitting
U | Multivariable 4) | – | –
V | Viscosity | – | –
W | Weight or force | – | –
X | Unclassified variables 3) | – | –
Y | User's choice 3) | – | –
Z | – | – | Emergency or safety acting

1) Upper case letters shall be used for the measured or initiating variables and succeeding letters for display or output
function. Upper case letters are preferred for modifiers, but lower case letters may be used if this facilitates understanding.
2) A note shall be added to specify the property measured.
3) Where a user has a requirement for measured or initiating variables to which letters have not been allocated and which are
required for repetitive use on a particular contract, the letters allocated to User’s Choice may be used provided that they are
identified or defined for a particular measured or initiating variable and reserved for that variable. Where a user has a
requirement for a measured or initiating variable that may be used either once or to a limited extent, the letter X may be used
provided that it is suitably identified or defined.
4) The letter U may be used instead of a series of first letters where a multiplicity of inputs representing dissimilar variables
feed into a single unit.


Identification Letters ANSI/ISA-S5.1-1994

Letter | Measured or initiating variable (first letter) | Modifier (first letter) | Readout or passive function | Output function | Modifier (succeeding letters)
A | Analysis | – | Alarm | – | –
B | Burner, Combustion | – | User's Choice | User's Choice | User's Choice
C | User's Choice | – | – | Control | –
D | User's Choice | Differential | – | – | –
E | Voltage | – | Sensor (Primary Element) | – | –
F | Flow Rate | Ratio (Fraction) | – | – | –
G | User's Choice | – | Glass, Viewing Device | – | –
H | Hand | – | – | – | High
I | Current (Electrical) | – | Indicate | – | –
J | Power | Scan | – | – | –
K | Time, Time Schedule | Time Rate of Change | – | Control Station | –
L | Level | – | Light | – | Low
M | User's Choice | Momentary | – | – | Middle, Intermediate
N | User's Choice | – | User's Choice | User's Choice | User's Choice
O | User's Choice | – | Orifice, Restriction | – | –
P | Pressure, Vacuum | – | Point (Test) Connection | – | –
Q | Quantity | Integrate, Totalise | – | – | –
R | Radiation | – | Record | – | –
S | Speed, Frequency | Safety | – | Switch | –
T | Temperature | – | – | Transmit | –
U | Multivariable | – | Multifunction | Multifunction | Multifunction
V | Vibration, Mechanical Analysis | – | – | Valve, Damper, Louver | –
W | Weight, Force | – | Well | – | –
X | Unclassified | X Axis | Unclassified | Unclassified | Unclassified
Y | Event, State or Presence | Y Axis | – | Relay, Compute, Convert | –
Z | Position, Dimension | Z Axis | – | Driver, Actuator, Unclassified Final Control Element | –


3 Physical Implementation

The physical implementation of monitoring and control will be surprisingly similar in terms
of the basic elements even though the control system may range from:

- Discrete monitoring and control with non-centralised control rooms
- Discrete monitoring and control with centralised control rooms
- SCADA (Supervisory Control and Data Acquisition)
- DCS (Distributed Control System)

The main elements will be:

Sensing Element
e.g. thermocouple, flow sensor

Transmitter
This converts the process measurement into a universal signal for transmission – typically 4-
20mA or a serial signal.

Receiver
This converts the signal transmitted from the field into a suitable format for the DCS or
SCADA system; alternatively, in a discrete system, the signal may be input directly to a
controller or monitor.

Control System
This is the heart of the system and the point at which the Man Machine Interface occurs. The
process data is displayed and where a control output is required, this is generated and the
reverse process to that described above initiated.
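As a simple illustration of how the universal signal described above is used, the receiver scales the 4-20 mA transmitter signal back into engineering units. The sketch below is a minimal example; the 0-400 range is an arbitrary assumption, not a value from this paper.

```python
def ma_to_engineering(ma, lo=0.0, hi=400.0):
    """Convert a 4-20 mA transmitter signal to engineering units.

    lo/hi are the calibrated range of the transmitter, e.g. 0-400 deg C."""
    return lo + (hi - lo) * (ma - 4.0) / 16.0

print(ma_to_engineering(12.0))   # mid-range signal -> 200.0
```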

4 Automatic Control

4.1 Benefits of Automation

Some of the benefits are listed below, but equally it should be remembered that automation
usually brings an increase in complexity. The need to keep things simple can be overriding,
particularly in countries where technical support and expertise may be limited.

4.1.2 More Reliable Operation

The installation of modern electronic equipment instead of electromechanical components
generally provides higher reliability of the control system. Equipment downtime can be reduced
due to the availability of detailed process warnings.

4.1.3 Uniform Operation

The operator is released from all routine operations, checking and controlling. He is thus in a
position to fully concentrate on the optimum and efficient operation of the process. In this


objective he is greatly supported by the system which presents all relevant information in a
logical and easily understandable way.

4.1.4 Energy Saving

A modern control system automatically starts and stops motors according to the process
requirements. Inefficient continuous running of motors and high energy losses during
unproductive start-up trials are eliminated. The control system can also include control of the
plant's peak load (energy management). A better stabilised process can have a
very positive influence on thermal as well as on electrical energy consumption.

4.1.5 Manpower Saving

Achievable savings depend on actual labour situations, labour costs, labour policies (unions)
etc.

4.1.6 More Efficient Maintenance

The maintenance on control and instrumentation can be kept to a minimum due to the
installation of electronic equipment. Time-consuming troubleshooting can be reduced since
failures are displayed in clear text. Mechanical maintenance can be optimised and preventive
maintenance can be introduced due to the availability of detailed failure and warning
messages and statistical evaluation of all events.

4.1.7 Better Quality

The market demands narrower tolerances in cement quality; uniform operation and more
precise on-line measurements are required to achieve this.

4.1.8 Protection of Environment

A modern automation system not only controls the process, it is responsible for continuous
environmental protection and monitoring.


4.2 Closed Loop Control

Closed loop control is the process of comparing the Process Variable with the Set Point and
adjusting the output or Manipulated Variable. Thus in a closed loop the controller receives
feedback from the process, whereas in an open loop the output is simply sent to the
appropriate actuator.

[Diagram: open loop – the set point is sent directly to the actuator as the output. Closed loop – the controller compares the set point (SP) with the process measurement (PV) and sends the manipulated variable (MV) to the actuator.]

4.2.1 PID Control

PID control is a combination of three control elements:

P = Proportional
I = Integral
D = Derivative

In general it is most common to use all three elements within a typical control algorithm. The
settings for each of the elements will depend on the process to be controlled (i.e. fast
changing, slow changing, delayed response) and the stability and speed of response required.

It must be remembered however that before a process can be ‘controlled’ by a PID controller
it must be possible to control it ‘by hand’. In other words, if the process is very unstable or if
the process input or signal is very erratic then the task of controlling it with a PID controller
will be correspondingly difficult. The following sections describe the control elements of a
PID controller.

4.2.2 Proportional Control

The proportional mode alone is the simplest of the three. It is characterised by a continuous
linear relationship between the controller input and output. Several synonymous names in
common usage are proportional action, correspondence control, droop control, and
modulating control. The adjustable parameter of the proportional mode, Kc, is called the
proportional gain, or proportional sensitivity. It is frequently expressed in terms of percent
proportional band, PB, which is related to the proportional gain:


Kc = 100/PB 4.2(4)

“Wide bands” (high percentage of PB) correspond to less “sensitive” controller settings, and
“narrow bands” (low percentages) correspond to more “sensitive” controller settings.

As the name “proportional” suggests, the correction generated by the proportional control
mode is proportional to the error. Figure 1 illustrates this by showing an assumed error curve
and the corresponding proportional correction, when the controller gain Kc is set to equal
about 2. Equation 4.2(5) describes the operation of the proportional controller:

m = (Kc)(e) + b = (100/PB)(e) + b 4.2(5)

where

m = the output signal to the manipulated variable (valve)


Kc = the proportional sensitivity or gain of the controller
e = the deviation from setpoint or error
PB = the proportional band (100/Kc)
b = the live zero or bias of the output, which in pneumatic systems is usually 3
PSIG (0.2 bars) and in electronic loops 4 mA

Fig. 1
The correction generated by the proportional controller is error multiplied by the gain of the
controller (eKc).

The proportional controller responds only to the present. It cannot consider the past history of
the error or the possible future consequences of an error trend; it simply responds to the present
value of the error. It responds to all errors in the same manner, in proportion to them. When a
small error results in a large response, the gain (Kc) is said to be large or the proportional
band (PB) is said to be narrow. Inversely, when it takes a large error to cause a small
response, the controller is said to have a small gain or a wide proportional setting. In the
example given in Figure 1, the gain (Kc) is about 2, which corresponds to a proportional band
setting of about 50%. The gain in DCS control packages is usually adjustable from 0 to 8,
while in analog controllers it can usually be adjusted from 0.02 to about 25.


Proportional Offset

The main limitation of plain proportional control is that it cannot keep the controlled variable
on setpoint: in order to respond to a load change, the proportional controller must allow the
controlled variable to move away from its setpoint. The difference between the actual value
and the setpoint is called the offset, because this is the amount by which the process is off setpoint.

It is evident that by increasing the gain, one can reduce the offset.

Unfortunately, most processes become unstable if their controller is provided with such high
gain. The only exceptions are the very slow processes. For this reason the use of plain
proportional control is limited to processes which can tolerate high controller gains (narrow
proportional bands) for example, regulators, float valves, thermostats, and humidostats. In
other processes, the offset inherent in proportional control cannot be tolerated.
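A brief worked example (with assumed numbers, not taken from the text) shows why the offset arises. Suppose a load change requires the controller output to settle 10% of span above its bias b. Since m = Kc·e + b, the steady-state error must be

e = (m − b)/Kc = 10%/Kc

so with Kc = 2 the process settles 5% of span away from setpoint; only an impractically high gain would make the offset negligible, which is why integral action is added.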

4.2.3 Integral Control

This mode is also called reset mode, because after a load change it returns the controlled
variable to setpoint and eliminates the offset which the plain proportional controller would
leave. This mode has also been referred to as floating control, but it is most commonly called
integral (I) control mode. The mathematical expression of the integral-only controller is
m = (1/Ti) ∫ e dt + b        4.2(7)

while the mathematical expression for a proportional-plus-integral controller is

m = Kc [ e + (1/Ti) ∫ e dt ] + b        4.2(8)

The term “Ti” is the integral time setting of the controller. It is also called reset time and is
sometimes designated as R or I instead of the more common Ti.

The integral mode has been introduced in order to eliminate the offset which plain
proportional control cannot remove. The reason proportional control must result in an offset
is because it disregards the past history of error, that is, it disregards the mass or energy that
should have been but was not added to (or removed from) the process, and therefore, by
concerning itself with the present only, it leaves the accumulated effect of past errors
uncorrected. The integral mode, on the other hand, continuously looks at the total past history
of the error by continuously integrating the area under the error curve and eliminates the
offset by forcing the addition (or removal) of mass or energy, which should have been added
(or removed) in the past.


Fig 2
The integral mode contribution to the output signal (m) is a function of the area under the
error curve.

Figure 2 illustrates the correction generated by the integral mode in response to the same
error curve as was used earlier. It also shows the proportional and the combined (PI)
correction. Note that when the error is constant and therefore the proportional correction is
also constant, the integral correction is still rising at a constant rate because the area under the
error curve is still rising. When the error and with it the proportional correction are both
dropping, the integral correction is still rising because the area under the error curve is still
rising. When the error reaches zero, the integral correction is at its maximum. It is this new
signal level going to the control valve which serves to eliminate the offset.

The units of setting the integral time are usually given in “repeats/minute” or in
“minutes/repeat”. The integral setting of control loops implemented in DCS systems can
usually be set from 0 to 300 repeats/minute, or from 0.2 seconds to about 60 minutes or more
in units of minutes/repeat. The meaning of the term “repeats/minute” (or its inverse) can be
understood by referring to Figure 2. Here, in the middle section of the error curve, the error is
constant and therefore the proportional correction is also constant (A). If the length of that
duration is one integral time (Ti), the integral mode is going to repeat the proportional
correction by the end of the first integral time (B = 2A) and will keep repeating (adding “A”
amount of correction) after the passage of each integral time during which the error still
exists. The shorter the integral time, the more often the proportional correction is repeated
(the more repeats/minute), and thus the more effective is the integral contribution.

Pure integral control (floating control) is seldom used except on very noisy measurements as
in some valve position or flow control systems, where the PI loop is usually tuned with low
gain but lots of reset (integral). The proportional mode acts as a noise amplifier, while the
integral mode integrates the area under the noisy error curve and gives a smooth average. PI
control is the most widely used control mode


Fig.3
Response to a disturbance input with proportional, integral, and proportional plus integral
controllers.

configuration and is used in all except the easiest applications, such as thermostats, and the
most difficult applications, such as temperature or composition control in which large
inertias and/or dead times require PID control. Figure 3 illustrates the response of P, I, and PI
controllers to a load change in the process.

4.2.4 Derivative Control Action

The proportional mode considers the present state of the process error, and the integral mode
looks at its past history, while the derivative mode anticipates its future state and acts on that
prediction. This third control mode became necessary as the size of processing equipment
increased and, correspondingly, the mass and the thermal inertia of such equipment. For such
large processes it is not good enough to respond to an error when it has already evolved,
because the flywheel effect (the inertia or momentum) of these large processes makes it very
difficult to stop or reverse a trend once it has evolved. The purpose of the derivative mode is
to predict process errors before they have evolved and take corrective action in advance of
that occurrence.


Fig. 4
The derivative mode's contribution to the total output signal (m) is a function of the rate at
which the error is changing

Figure 4 describes the derivative response to the same error curve that has been used earlier.
In the middle portion of the illustration where the error is constant, the derivative contribution
to the output signal to the valve is also zero. This is because the derivative contribution,
shown below as Equation 4.2(9), is based on the rate at which the error is changing, and in
this region that rate is zero.

m = Kc [ e + Td (de/dt) ] + b        4.2(9)

In the left of Figure 4, where the error is rising, the derivative contribution is positive and
corresponds to the slope of the error curve. The unit of the derivative setting is the derivative
time (Td). This is the length of time by which the D-mode “looks into the future.” In other
words, if the derivative mode is set for a time Td, it will generate a corrective action
immediately when the error starts changing and the size of that correction will equal in size
the correction which the proportional mode would have generated Td time later. The longer
the Td setting, the further into the future the D-mode predicts and the larger is its corrective
contribution. When the slope of the error is positive (measurement is moving up relative to
the setpoint), the derivative contribution will also rise if the controller is direct acting.

On the right side of Figure 4 one can note that while the error is still positive (measurement is
above the setpoint), the derivative contribution is already negative, as it is anticipating the
future occurrence where the loop might overshoot in the negative direction and is correcting
for that. The derivative (or rate) setting is in units of time and usually can be adjusted from a
few seconds up to 10 hours or more. The applications of PD control loops are few. They
sometimes include the slave controller in temperature cascade systems, if the goal is to
increase the sensitivity of the slave loop beyond what the maximum gain can provide.
Another application of PD control is batch neutralisation, where the derivative mode protects
against overshooting the target (pH = 7) while the P-mode reopens the reagent valve for a


droplet at a time as neutrality is approached. PID control is more widely used, and its
applications include most temperature and closed-loop composition control systems.

Limitations of the Derivative Mode

Because the derivative mode acts on the rate at which the error signal changes, it can also
cause unnecessary upsets: it will react to a sudden setpoint change by the operator, it will
amplify noise, and it will cause upsets when the measurement signal changes in steps (as in a
chromatograph, for example). In such situations special precautions are recommended. For
example, in order to make sure that the derivative contribution to the output to the valve will
respond only to the rate at which the measurement changes but will disregard the rate at
which the operator changes the setpoint, the control equation needs to be changed. The
change is aimed at making the derivative act on the measurement (Equation 4.2[10]) and not
on the error (Equation 4.2[11]).

m = Kc [ e + (1/Ti) ∫ e dt − Td (dc/dt) ] + b        4.2(10)

m = Kc [ e + (1/Ti) ∫ e dt + Td (de/dt) ] + b        4.2(11)

where c is the measurement (the process variable).

Some might prefer to eliminate the setpoint effect on the proportional contribution also. In
that case, Equation 4.2(10) would be revised as follows:

m = Kc [ −c + (1/Ti) ∫ e dt − Td (dc/dt) ] + b        4.2(12)

Excessive noise and step changes in the measurement can be corrected by filtering out any
change in the measurement that occurs faster than the maximum speed of response of the
process (see the next section for details). DCS systems, as part of their software library, are
provided with adjustable filters on each process variable. The time constant of these filters is
usually adjustable from 0 to 100 seconds. In analog control systems, inverse derivative
modules are also often used.
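To make the preceding equations concrete, the sketch below shows one common discrete form of a PID controller with the derivative acting on the (optionally filtered) measurement, in the spirit of Equation 4.2(10). It is a minimal illustration only, not a production algorithm; the class and parameter names are my own.

```python
class PID:
    """Minimal discrete PID sketch: P and I on the error, D on the measurement."""

    def __init__(self, kc, ti, td, dt, bias=0.0, filter_tc=0.0):
        self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
        self.bias = bias                  # output with zero error ("live zero")
        self.filter_tc = filter_tc        # measurement filter time constant (0 = off)
        self.integral = 0.0
        self.prev_pv = None

    def update(self, sp, pv):
        # Optional first-order filter on the measurement.
        if self.filter_tc > 0 and self.prev_pv is not None:
            alpha = self.dt / (self.filter_tc + self.dt)
            pv = self.prev_pv + alpha * (pv - self.prev_pv)

        error = sp - pv
        self.integral += error * self.dt / self.ti          # running (1/Ti) * integral of e

        # Derivative on the measurement, which avoids a "kick" on setpoint changes.
        d_pv = 0.0 if self.prev_pv is None else (pv - self.prev_pv) / self.dt
        self.prev_pv = pv

        return self.kc * (error + self.integral - self.td * d_pv) + self.bias
```

In a real DCS block the integral term would also be limited to prevent windup and the output clamped to 0-100%; both are omitted here for brevity.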


4.2.5 Adjustments of the PID Settings

The table below can assist in understanding what happens if one of the PID parameters (Kc,
Ti or Td) is changed.

Adjustment | Reaction
Increasing P action (increasing Kc) | Speeds up control action: smaller amplitude, smaller period of oscillation; decreases offset; increases tendency to oscillate
Decreasing P action (decreasing Kc) | Slows down control action: bigger amplitude, bigger period of oscillation; increases offset; decreases tendency to oscillate
Increasing I action (decreasing Ti) | Bigger amplitude; smaller period of oscillation; faster elimination of the offset; increases tendency to oscillate
Decreasing I action (increasing Ti) | Smaller amplitude; bigger period of oscillation; slower elimination of the offset; decreases tendency to oscillate
Increasing D action (increasing Td) | Smaller amplitude; bigger period; tendency to oscillate first decreases but beyond a certain point increases
Decreasing D action (decreasing Td) | Bigger amplitude; smaller period; tendency to oscillate first increases but beyond a certain point decreases

4.4 Switching to Manual

For a PID controller the effect of switching to manual should be as follows provided that the
configuration is set correctly.

The controller output is disconnected from the internal control driver which would normally
act upon the Setpoint and Process Variable to derive an output signal (Manipulated Variable)
to the plant. The MV is taken directly from the operator input of either raise/lower buttons or
direct numeric entry.

Once in manual control mode the controller is usually configured to Setpoint Tracking mode,
in which the internal Setpoint of the controller continuously tracks the Process Variable. In
this way there will be no process disturbance when the controller is returned to automatic
mode.
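A small extension of the PID sketch given earlier illustrates setpoint tracking and a bumpless return to automatic. The logic below is only indicative of the behaviour described; real DCS function blocks implement this internally, and the function and attribute names are illustrative only.

```python
# Sketch of manual mode with setpoint tracking, reusing the PID sketch given earlier.
# While in manual the operator drives the output directly; the internal setpoint is
# forced to track the PV and the integral term is back-calculated so that the switch
# back to automatic produces no sudden output change.

def controller_step(pid, auto, sp, pv, manual_output):
    if auto:
        return sp, pid.update(sp, pv)
    pid.prev_pv = pv                                     # keep the D-term history current
    pid.integral = (manual_output - pid.bias) / pid.kc   # align the I term with the MV
    return pv, manual_output                             # SP tracks PV; MV from the operator
```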


4.5 Cascade Control

As implied by the name more than one level of control is utilised.

[Diagram: cascade control – Controller 1 (SP1, PV1) produces MV1, which becomes SP2 for Controller 2 (PV2), whose output MV2 goes to the process.]

In the case of cascade control we are attempting to control two interlinked process variables,
PV1 and PV2. A typical example would be boiler output temperature (PV1), which would be
cascaded, i.e. the output (MV1) used to drive the setpoint (SP2) of a flow control valve for
fuel.

The flow controller for fuel would typically be fast acting and would adjust quickly to
variation in fuel pressure etc.

The temperature controller would be slow acting regulating the setpoint for the flow
controller according to the measured output temperature of the boiler (PV1).

In this example, the temperature controller (1) is the Master and the fuel flow controller (2) is
the Slave. The Slave controller acts as the first line of defence against disturbances,
preventing them from upsetting the primary process. For the Slave controller to be effective,
its control loop must be much faster than that of the Master; it then deals with variations
in fuel flow (PV2) before they have any appreciable effect on the boiler temperature.
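As an illustration of the structure only (not of any particular boiler), two instances of the PID sketch above can be cascaded as follows; the tuning numbers are arbitrary.

```python
# Cascade sketch: a slow temperature master sets the setpoint of a fast fuel-flow slave.
master = PID(kc=1.0, ti=120.0, td=0.0, dt=1.0)   # temperature loop (slow)
slave  = PID(kc=0.5, ti=5.0,   td=0.0, dt=1.0)   # fuel flow loop (fast)

def cascade_step(temp_sp, temp_pv, flow_pv):
    flow_sp = master.update(temp_sp, temp_pv)     # MV1 becomes SP2
    valve_mv = slave.update(flow_sp, flow_pv)     # MV2 drives the fuel valve
    return valve_mv
```

In practice the slave loop would execute several times for each master execution, and the master output would be limited to the valid setpoint range of the slave.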

5 Actuators & Drives

5.1 Methods of Regulating the Process

There are two basic methods of regulating the process:

♦ On/Off devices. These devices have only two states, typically on and off or open and
closed. Traditionally, these devices were switched using an electrical switch or relay.
Current practice for many years has been to drive these devices via digital outputs from
the process control system I/O (input/output) modules, typically 110Vac or 24Vdc.

♦ Continuous/modulating devices. These devices can have any value from fully off/closed
to fully on/open, typically scaled from 0 to 100%. Traditionally, pneumatic or electronic
single loop controllers drove modulating devices. Current practice is to drive these
devices via analogue outputs from the process control system, typically 4 to 20mA.


There is emerging technology involving driving both on/off and modulating devices directly
from a network (fieldbus). This has the advantages of saving on cabling costs and improving
diagnostics. This has been successfully implemented with "intelligent" motor starters and
variable speed drives; however, the following two points should be considered:

• There is no industry standard at present (regardless of what the sales representative


claims), so the cost saved initially with the cabling is offset by the increased life
cycle cost of being tied to a proprietary brand.

• The risks associated with using technology that is not well proven must be fully
assessed.

5.2 Valves, Gates & Dampers

There are three technologies used for actuating valves, gates and dampers:

§ Pneumatic cylinders
§ Electric actuators
§ Hydraulics

Due to the cement plant environment, hydraulics tend not to be used for fixed plant. A quick
comparison of advantages between the other two technologies:

Pneumatic cylinders – advantages:
• Fast response
• Required if the fail-safe mode is "fully closed" or "fully open"
• More suitable for continuously changing modulating signals

Electric actuators – advantages:
• Does not require high quality compressed air
• More energy efficient for occasionally changing on/off applications
• Required if the fail-safe mode is "stand still"

5.3 Motors

Over 80% of the I/O on a typical cement plant are dedicated to motor control. See the paper
"Electrical Engineering Aspects" for a discussion on motor starters.

Modern control systems allow the operator full access to diagnostic information regarding
drive status, from the control room.

5.4 Variable Speed Drives

Variable speed drives are also covered in the paper "Electrical Engineering Aspects". When
converting from damper control on a fan to variable speed drive, the following should be
considered:

♦ The speed of response is slower. This is not because the inverter is slow to respond, but
due to the fan inertia.
♦ For the same reason, the speed of response may be faster when speeding up, compared
with slowing down.


♦ Both dampers and variable speed drives are non-linear in respect of air flow. However,
the curves in each case are very different.

6 Sensors

6.1 Traditional, Current Practice and Emerging Technology

In section 5.1, the words "traditional", "current practice" and "emerging technology" were
used when describing methods of regulating the process. It is worth defining these terms in
the context of this paper:

§ Traditional refers to the well-tried and proven way of doing something.

§ Current Practice means that this method / technology would typically be used when
building a new cement plant. If different to traditional, current practice should have
some benefit such as capital cost, reliability, accuracy, efficiency or maintainability. For
existing cement plants, it is not always justifiable to change from traditional to current
practice.

§ Emerging Technology means that the technology is not yet well proven for a specific
application (normally reliability is the last feature to be proven). A company the size of
Blue Circle Industries should be constantly striving for opportunities to use new
technology, which, if "rolled out", gives us a competitive advantage. However,
"technology for the sake of technology" should be avoided, and a risk assessment should
be done before introducing new technology.

If you disagree with the classifications listed for specific applications, please contact the
authors with your constructive comments.

Similarly, if you have had a successful experience with new technology in the Process
Control field, please inform Blue Circle Technical Centre.

6.2 Temperature

Contact Temperature < 500 deg C – Current practice is to use Pt 100 RTDs.

Contact Temperature > 500 deg C – Current practice is to use thermocouples, with the types
rationalised across the cement plant.

Kiln Flame Temperature – Traditional is to use optical pyrometers plus water-cooled CCTV
cameras. Current practice is to use water-cooled CCTV cameras with digital processing to
give numeric values. Note that these temperature readings are used for indication only, not
for control.

Kiln Shell Monitoring – Current practice is to use scanning optical pyrometers with thermal
imaging such that the entire kiln shell surface temperature is monitored within one revolution
of the kiln.


Secondary Air Measurement – Traditionally a difficult variable to measure; emerging
technology is to use sonic thermometry (speed of sound) across the kiln hood, as developed
at Cauldon.

6.3 Weighing Systems

A common application of weighing technology in a cement works is the weighfeeder. A
weighfeeder is used to meter the flow of bulk solids such as the raw meal feed to the kiln.
This is achieved by discharging material from a hopper or silo onto an endless belt conveyor
that is driven by a variable speed drive and is fitted with a load sensing device (loadcell). The
weight of the material on the belt is transferred to the loadcell by mounting the loadcell
beneath idler rolls or by suspending all or part of the weighfeeder frame on the loadcell. The
weight of material on the belt at a particular moment in time is known, so the belt speed can
be varied in order to achieve a desired feedrate.
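The control principle can be summarised numerically: the instantaneous feedrate is the belt loading multiplied by the belt speed, so the speed demand is the feedrate setpoint divided by the measured loading. A minimal sketch (the variable names and limits are illustrative only):

```python
def weighfeeder_speed_demand(feedrate_sp_tph, belt_load_kg_per_m,
                             max_speed_m_s=2.0, min_load_kg_per_m=0.5):
    """Belt speed (m/s) needed to deliver the feedrate setpoint (t/h).

    feedrate [kg/s] = belt load [kg/m] * belt speed [m/s]
    """
    load = max(belt_load_kg_per_m, min_load_kg_per_m)   # avoid divide-by-zero
    speed = (feedrate_sp_tph * 1000.0 / 3600.0) / load  # t/h -> kg/s
    return min(speed, max_speed_m_s)

# e.g. 120 t/h at 25 kg/m of belt loading -> about 1.33 m/s
print(weighfeeder_speed_demand(120.0, 25.0))
```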

The traditional loadcell technology is the strain-gauge type that works by measuring the
change in resistance of a wire or foil filament as it is stretched under load. This has been a
reliable method of load measurement in use since the 1950s and is still in use today.

A more recently developed loadcell technology uses a wire vibrating at its resonant
frequency suspended between two points, one of which is deflected by the applied load. As
the load increases the tension on the wire increases and hence the frequency of oscillation
increases. This frequency is measured and converted into a weight signal that is accurate and
repeatable. This is one advantage that it has over the strain-gauge technology which tends to
suffer from drift and hence the need for more regular calibration.

All modern weigh systems can be supplied with an intelligent microprocessor–based control
system for the control of batching or blending.

A technology recently introduced into the cement industry for the measurement of bulk solids
mass flow is the coriolis flowmeter. In this device the material encounters a rotating
measurement wheel and is accelerated towards the outside of the meter along guide vanes.
On the measurement wheel the Coriolis force acts on the material as a result of its
acceleration in the circumferential direction. This force is measured and is directly
proportional to the material feed rate. This method is suitable for fine materials and is
increasingly used for the feed of pulverised coal to the kiln.

6.4 Level

Silo level – Traditionally, a silo pilot or dip-on-demand system has been used. Current
practice is to use ultrasonic level transmitters. However, ultrasonic transducers have problems
measuring the level in silos containing hot clinker/cement/raw meal. For these applications,
modern microwave level transmitters have been proven to work the best.

Silo / Hopper Weighing – For smaller silos and hoppers, the most reliable method of
measuring the level of material is to support or suspend the container on load cells.

Preheater Cyclone Blockage Detectors – Current practice is to use nuclear level transducers.


Tanks containing fluids – The traditional (and current practice) is to use differential pressure.

Cooler bed depth – There is emerging technology using microwave transducers.

6.5 Electrical Values

Primary Measuring Elements – Traditional and current practice is to use CTs (Current
Transformers) and PTs (Potential Transformers). For critical metering points, the metering
CTs should be separate from the protection CTs (higher accuracy but lower saturation level).

Metering Equipment – Current practice is to use digital (programmable) meters with pulsed
outputs for critical metering points. Emerging technology is to use network communications.

7 Analysers

Analysers measure concentrations, implying that the result must be a ratio between the
determinant and the diluent, e.g. mg/m3, or % by volume. This paper only covers on-line
analysers that measure variables in the process. Laboratory analysers are not discussed.

7.1 Qualitative vs Quantitative

Before specifying an analyser, the expectations of the analyser's users must be clearly
understood. Does the analyser need to be "rough and ready", giving a signal approximately
proportional to the concentration, for trending or closed loop control? Or does the analyser
need to have a high degree of absolute accuracy, e.g. for emission reporting or heat-rate
balance? The answer to these questions can have a large impact on the technology selected,
system costs, calibration and maintenance, etc.

7.2 Analyser Reliability

A modern, sophisticated analyser typically has an availability/uptime of 95% (measured
across various industries). Whereas this figure may be adequate for cross checking, indication
and trending, it is too low for effective closed loop control. For comparison, a typical contact
temperature transducer (e.g. a thermocouple) in a gas stream should have an
availability/uptime of >99.9%.

To address this poor reliability, the following issues need to be considered:

§ Sampling System Design – If the analyser is "extractive" (solids or gases), the sampling
system will cause most of the system failures. Where feasible, "in-situ" analysers are
recommended. Unfortunately, there are no in-situ analysers that can handle the
environment at the back end of a kiln – yet!

§ Analyser Environment – The second highest cause of system failures are linked to the
environment where the analyser is located. Most analysers work well in a laboratory or
control room environment, but cannot survive the "elements" found in a typical cement
plant. The analyser should be protected against dust, temperature extremes, corrosive
gases and vibration. The analyser room should also have adequate space and light for


maintenance and calibration. The analyser is obviously also dependent on the electrical
supply and (usually) a good quality compressed air supply.

§ Specifications – It is worthwhile to check the range of concentrations expected before
ordering an analyser, either through manual sampling and analysis or by using a
temporary on-line analyser. Depending on the technology, readings <5% of the analyser's
span could be meaningless. It is also important to tell the equipment supplier what other
constituents are found in the sample stream – inaccuracies are often caused by
cross-interference.

§ Ownership – Process Control, Process, Quality Control, Mechanical, Environmental and
Management all have an "involvement" in analysers – it is sometimes difficult to
establish who is actually responsible for the purchase, maintenance and calibration of an
analyser. Responsibilities should be clearly defined, and training given where necessary.

8 Future Trends in Process Control

To predict what the future holds, we need to learn from the past. Sixty years ago, there was
very little automatic control in cement plants – Process Control Engineering is a relatively
new discipline. If cement plants could run in the past without sophisticated instrumentation
and programmable controllers, why bother at all?

Process control engineering has seven critical functions in a modern cement plant:

• Ensuring compliance with safety, health and environmental standards,


• Stabilising plant operation,
• Making the operator's job easier,
• Assisting with process optimisation
• Making quality control easier
• Making the manager's job easier
• Making the plant maintenance easier

8.1 Ensuring Compliance with Safety, Health & Environmental Standards

By:
• Interlocks
• Flammable gas detectors
• CEM's (Continuous Emission Monitors)

The way forward:


• Equip plants with reliable and maintainable safety devices
• Pro-active compliance with legal requirements


8.2 Stabilising Plant Operation

By:
• Automatically maintaining the required process variables in a steady state
• Automatically responding to process disturbances

The way forward:


• Better understanding of the process models
• Increased use of more sophisticated control algorithms

8.3 Making the Operator's Job Easier

By:
• Centralised control
• Reliable closed loop control and higher level controllers
• Start up and shut down sequence control

The way forward:


• User-friendly operator interfaces
• Alarm prioritisation

8.4 Assisting with Process Optimisation

Modern process control technology can automatically protect equipment and, once properly
tuned, keep the plant running efficiently with the minimum of human intervention. However,
a process control system cannot generate new ideas to de-bottleneck a plant – only people can
do this.

Where modern process control systems assist with process changes:


• Flexible process control systems which are easy to modify
• Additional sensors

The way forward:


• Measuring the "unmeasurable" – reliably
• In-plant expertise to add control loops

8.5 Making Quality Control Easier

By:
• Auto-samplers and on-line analysers
• Real-time process data

The way forward:


• Making on-line solids analysers reliable and low maintenance
• Transparency between the control room and the laboratory


8.6 Making the Manager's Job Easier

By:
• Presenting believable information
• Automatic data transfer

The way forward:


• Any information system connecting to an ODBC database, safely and securely
• Flexibility to add and remove variables in the database

8.7 Making Plant Maintenance Easier

By:
• The selection of reliable and maintainable sensors and process control systems
• Diagnostics tools to permit predictive maintenance

The way forward:

• "Design-out-maintenance"
• Connect maintenance to ODBC process database, allowing planned shutdown work

Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

HBM PROCESS ENGINEERS


CONFERENCE

• Neural Net Control Systems



Day 3-4

HBM PROCESS CONFERENCE

NEURAL NET CONTROL SYSTEMS

Introduction

The concept of Neural Networks has been around for some considerable time although it has
only been developed more recently with the advent of ever more powerful computers.

This paper arises from a planned project at Bowmanville, Canada to use neural nets to
optimise the output of the Atox raw mill. It is intended to use some software called Process
InsightsTM developed by Pavilion Technologies, Inc. (based in Austin Texas) and to integrate
this with the Foxboro Distributed Control System (DCS) at Bowmanville. Foxboro have
already worked with Pavilion Technologies on similar projects elsewhere.

The project is at a very early stage so this paper will concentrate on a general overview of
neural networks and how they can be used. Having said that, some of the comments will
refer specifically to the Process InsightsTM software.

Uses for Neural Nets

Neural nets are not new technology. The original research was carried out by McCulloch &
Pitts in 1942, followed by Hebb in the early 1950s and Rosenblatt & Widrow in 1954. More
recently, the field was revolutionised in the 1980s by Rumelhart, who introduced the concept of
back-propagation. The availability of computers by this time played a significant part in the
development of neural nets.

Neural nets are used in a range of different applications. Some of the more common uses
include :
- Analysing business trends
- Modelling and forecasting, particularly in the financial sector
- Detecting fraud, e.g. credit card fraud
- Valuing properties
- Process optimisation, traditionally in the oil and petrochemicals industries
- Process control

It is a technology still in the early stages of development and this applies to the process
control aspect in particular.

What Are Neural Nets

There is no single definition of a neural net but one idea from the literature is that it could be
. . . a form of artificial intelligence used to solve problems that are too complex,
laborious or fuzzy to solve by conventional methods.

The term "artificial intelligence" is frequently used when referring to neural nets. However,
it should be remembered that computer systems, whether they be neural nets, expert systems
or desktop PCs, simply carry out a sequence of programmed commands. These commands
may be embedded in the operating system, application software or user program. Computers
do not have the ability to think for themselves and will be consistent in the output they give
for a given set of inputs.
More specifically a neural network can be said to be
. . . a group of interconnecting computer nodes modelled on the structure of the
brain.
It is also true to say that a neural network is
. . . an empirical model, based on mathematical functions, which approximates to
a known process.

The structure of a neural network can take a number of different forms but the most common
is the Multi-layer Network - this is the classic neural net.

Neural Network Structure

[Diagram: a three-layer network – input layer, hidden layer and output layer; the synapses connecting the nodes have weight values and each neuron has a bias value]

The network nodes are arranged in groups called layers. There are three layers: an input
layer, a hidden layer and an output layer. The hidden layer is comprised of artificial neurons
and these, together with the output layer nodes, are the computationally active parts of the
network. The nodes on the input layer are simply points through which the input data is
distributed to the nodes in the hidden layer.

Each node in the input layer is connected to every neuron in the hidden layer by a synapse
connection - these synapses have weight values. Each neuron has a bias value and, in turn
is connected to every node in the output layer.

The detail of what happens at a node in the hidden layer (the artificial neuron) is summarised
in the diagram below.
For a given node, the input values (i1, i2, …, in) are multiplied by the weights on their respective
connections (w1, w2, …, wn) and all of these products (e.g. i1 × w1) are summed together at the
input to the neuron. The bias value for the neuron is then added in. The result of this
calculation, u, is then passed through a transfer function, and the result, y, forms the output
from the neuron.
What Happens in an Artificial Neuron

u = i1*w1 + i2*w2 + ... + in*wn + b
y = f(u)

A typical transfer function is shown below. Initially, while the input to a hidden node is
small, the output hardly responds as the input increases. As the value of the input rises further
the output responds sharply, but it eventually saturates as the input continues to rise.

Response of a Transfer Function

[Figure: the S-shaped response of a typical transfer function, plotted against u from −20 to +20]

This calculation process happens at every hidden layer node and output layer node as the data
is passed through the network.
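A minimal numerical sketch of the forward pass described above is given below, using a sigmoid as the transfer function. The layer sizes and weights are arbitrary illustrative values, not taken from any plant model.

```python
import numpy as np

def sigmoid(u):
    """A typical S-shaped transfer function."""
    return 1.0 / (1.0 + np.exp(-u))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass: u = sum(i*w) + b at each neuron, y = f(u)."""
    hidden = sigmoid(x @ w_hidden + b_hidden)      # hidden layer outputs
    return sigmoid(hidden @ w_out + b_out)         # output layer

# Illustrative 3-input, 4-hidden-neuron, 1-output network with random weights.
rng = np.random.default_rng(0)
w_h, b_h = rng.normal(size=(3, 4)), rng.normal(size=4)
w_o, b_o = rng.normal(size=(4, 1)), rng.normal(size=1)
print(forward(np.array([0.2, -1.0, 0.5]), w_h, b_h, w_o, b_o))
```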
Advantages of Using Neural Nets
The main advantage of using neural nets is that they have the ability to be trained rather than
following a set of pre-defined rules supplied by the user. In order to build a model with pre-
defined rules it is necessary to know the relationships between input and output variables in
advance. Many attempts to build mathematical models of a process have failed because the
process is not understood in sufficient detail. Neural networks get around this problem by
generating the relationships themselves in an empirical manner. Having said this, it should be
remembered that once a neural network has been trained it is also, effectively, a set of
pre-defined rules.

Neural networks can handle complex multi-variable relationships, non-linear problems and
noisy data. It is also relatively easy to re-train them if conditions or operating regimes
change.

Data Requirements

The first thing to consider before training a neural net is the quantity and quality of the data.

It is necessary to have a reasonably large amount of data in order to train a neural net
properly. It is difficult to give any recommendation for this without knowing the specific
process in detail. However, common sense would suggest that, for a kiln or raw mill, two
days' worth of data would be insufficient whereas two months would probably be adequate.
This also depends on the frequency of data sampling which should ideally be at least two
times the highest frequency present in the data. However, practical considerations may
ultimately determine the frequency of sampling.
Neural nets will only work if there is sufficient relevant data so it is better at first to
incorporate everything available, including environmental variables such as ambient
temperature, humidity etc. Variables can always be removed at a later time if necessary.

It should also be noted that all relevant data must be included. A model will not give good
predictions if a variable that is strongly correlated is omitted from the data set.
Data should cover the entire range of plant operations that is to be modelled and have
adequate variability or movement in each input variable. All concepts affecting a process
should be captured within the data set. (A concept is a fundamental principle of a process
and may be a physical property or law, for example.)

There are essentially two sources of data - experimental and historical. Experimental data is
obtained by running one or more controlled experiments that move from one operating region
to another. The advantages of this type of data are:
- typically the quality of data collected is high
- it ensures that all relevant operating modes are covered

It has the disadvantage of being expensive and disruptive to obtain. It also requires
pre-planning, for which some expertise in statistical methods is required.
Historical data, obtained from a DCS or Historian, is usually readily available in large
quantities and does not cost anything to produce as there is no disruption to the process. The
disadvantages of this type of data are:
- the quality of data may not be so high; many systems will compress the data over a period of time and definition will be lost
- it may not cover the entire operating region
- there may be insufficient variability in some of the variables
- there may be missing concepts

Data Pre-processing

The basic data set must be processed before being used to train the model. The use of data
visualisation tools, such as time series plots, correlation plots, probability or data distribution
plots etc., can help to identify variables which should be included or excluded. They can also
help to identify outliers and clusters. These can be caused by calibration spikes, process
downtime or equipment failure. Outliers and clusters should normally be removed from
the data set.
Pre-processing can also be used to build process time delays into the data set. Multiple
passes may be necessary before the data set is finalised.

Neural Network Training Process

A data set may include several thousand sets of data that may have been recorded at 1 minute,
5 minute or 1 hour intervals for example. Prior to starting training the data must be split up
into sections - one of these sections will be the training set and one will be a test set. It is
also usual to specify a validation set, which is an independent set of data not associated with
training or testing, and this is normally the last 5% of the data.

The training set is used by the training algorithm to calculate the weights for the neural
network. The test set is then used to test the performance of the new weights. The new
weights must reduce the error in both the training set and test set to become the best weights.

The training process is shown schematically below.

Each set of data is fed into the neural network in turn and a set of weights calculated. The
output from the network is the predicted value and this is compared with the true data value
for the output variable. The error between the two is fed back into the neural network and is
used to modify the weights. (This is where the term back-propagation comes from). The next
set of data is taken and the process repeated.
Neural Network Training Process

[Diagram: data is fed through the neural network; the predicted output is compared with the known output and the error is used to adjust the weights]

When all sets of data have been processed it will return to the first set of data again and
continue to try to optimise the set of weights. This will continue in an iterative manner until
the end point is reached. The end point may be defined in a number of ways but essentially
will be when the errors for both the training and testing sets approach their asymptotic values.
This can be determined by visual inspection or by a mathematical goal such as the minimum
value of the error function :
N

C(
I;, - Yi)2
E = i=l

2xN
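The iterative weight adjustment can be illustrated with a small gradient-descent training loop for a one-hidden-layer network, minimising the error function above. This is a bare-bones sketch of back-propagation only, not the Process Insights algorithm; the data are synthetic and a linear output neuron is used for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: 200 samples, 3 inputs, 1 output (an arbitrary smooth function).
X = rng.uniform(-1, 1, size=(200, 3))
Y = np.tanh(X[:, :1] + 0.5 * X[:, 1:2] * X[:, 2:3])

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Small network: 3 inputs -> 4 hidden neurons -> 1 (linear) output.
w_h, b_h = rng.normal(scale=0.5, size=(3, 4)), np.zeros(4)
w_o, b_o = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)
lr, N = 0.5, len(X)

for epoch in range(2000):
    # Forward pass
    hidden = sigmoid(X @ w_h + b_h)
    y_pred = hidden @ w_o + b_o

    # Error function E = sum((y_pred - Y)^2) / (2N)
    err = y_pred - Y
    E = np.sum(err ** 2) / (2 * N)

    # Back-propagation: gradients of E with respect to each weight and bias
    grad_out = err / N                                   # dE/dy_pred
    grad_w_o = hidden.T @ grad_out
    grad_b_o = grad_out.sum(axis=0)
    grad_hidden = (grad_out @ w_o.T) * hidden * (1 - hidden)
    grad_w_h = X.T @ grad_hidden
    grad_b_h = grad_hidden.sum(axis=0)

    # Weight update (gradient descent)
    w_o -= lr * grad_w_o
    b_o -= lr * grad_b_o
    w_h -= lr * grad_w_h
    b_h -= lr * grad_b_h

print("final training error:", E)
```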

Model Evaluation

Once the model has finished training, the important questions are:

- Does the model produce good predictions?
- Did the model learn the correct relationships between input and output variables?
- Will the model perform well if placed on line?

There are various ways of evaluating models, such as standard deviation and R2 (degree of
fit), but one of the simplest is to plot predicted value against actual value. This gives a visual
indication not only of the degree of fit but of any outliers remaining in the data set.

The outliers should be investigated and removed if they are ‘bad’ data points. If they are not
bad data points then investigate the following:
- Model is not fully trained
- Data distribution is skewed
- Model is missing crucial input information
- Incorrect time delays

In any case it will be necessary to return to the pre-processing stage and then re-train the
model.

On-line Models

It must be emphasised that the process described above will produce a Steady State Model.
This can be used off-line for identification of bottlenecks, process setpoint optimisation,
‘What-if’ simulations, etc.

It is a fairly simple procedure to create an on-line model from the off-line version. However
it must be remembered that it is still a steady state model and can only be used to optimise
steady state operation on-line. This can be particularly useful if there are regular changes of
target for the output variable e.g. change of product quality.

There are some other advantages of using the model on-line and these include sensor
validation and alarm generation (although this would normally be done by the DCS).

Sensor validation can not only detect when a sensor has failed but it can reconstruct the value
that the sensor should have, thus preventing immediate failure of the model. The best way to
achieve this is to have a certain amount of sensor redundancy i.e. more than one sensor
measuring a given parameter.

Summary

Neural networks can provide a means to model complex relationships and thus enable better
understanding and optimisation of the process. They model steady state processes and do not
handle process dynamics very well.
NEURAL NETWORK CONTROL SYSTEMS

Neural Network Control Systems

• Uses for Neural Networks
• What are Neural Networks
• Why use them
• How to use them
• Advantages / Disadvantages
Uses for Neural Networks

• Analysing business trends
• Modelling and forecasting
• Detecting fraud (patterns)
• Process optimisation
• Process control

What are Neural Networks?

• A form of Artificial Intelligence used to solve problems that are too complex, laborious or fuzzy for conventional methods
• A group of interconnecting computing nodes, modelled on the structure of the brain
• An empirical model, based on mathematical functions, which approximates to a known process

Neural Network Structure

[Slide diagram: input layer, hidden layer and output layer; synapses have weight values and each neuron has a bias value]

What Happens in an Artificial Neuron

[Slide diagram: inputs i1…in are multiplied by weights w1…wn, summed with a bias b, and passed through a transfer function to give the output y]

u = i1*w1 + i2*w2 + i3*w3 + ... + in*wn + b
y = f(u)

Response of a Transfer Function

[Slide diagram: the S-shaped response of a typical transfer function, plotted against u from −20 to +20]

Why Use Neural Networks?

• Ability to be trained rather than following pre-defined rules
• Do not need in-depth process knowledge
• Can handle
  - Complex multi-variable relationships
  - Non-linear problems
  - Noisy data
• Relatively easy to re-train if conditions change
Data Requirements

• Need a reasonably large amount of data
• Will only work if there is sufficient relevant data
• All relevant data must be included

Experimental Data

• Advantages
  - Typically high quality data
  - Ensures all operating modes are covered
• Disadvantages
  - Expensive and disruptive to obtain
  - Requires pre-planning
  - Some expertise in statistical methods required

Historical Data

• Advantages
  - Usually readily available
  - Cheap
  - Lots of it
• Disadvantages
  - Quality of data (compression)
  - May not cover entire operating region
  - Insufficient variability
  - May be missing concepts

Data Pre-processing

• Data Visualisation
  - time series plots
  - correlation plots
  - probability / normal distribution plots
• Identify Outliers and Clusters
  - calibration spikes
  - process downtime
  - equipment failure / erroneous values

Neural Network Training Process

[Slide diagram: data is fed through the neural network and the predicted output is compared with the known output]

Building a Better Model


Applications

• Offline
  - Identification of bottlenecks
  - Process setpoint optimisation
  - 'What if' simulation
• Online
  - Steady-state optimisation
  - Sensor validation
  - Open or closed loop supervisory control

Disadvantages of Neural Networks

• Often not valid outside the range of the original data
• Need to retrain for new operating conditions
• Data must cover a wide range of operating conditions
• Do not handle long process dynamics well
The Way Ahead

• Purchase software
• Build offline model
• Evaluate model
• Build on-line model
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

PRESENTATIONS

PROCESS CONTROL –
JOE STRATTON

KILN CONTROL SYSTEMS


Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

PRESENTATION

Process Control
- Joe Stratton
Process Control
Presentation

Process Engineering Training


Blue Circle Cement
Why Process Control?
• Increase Throughput
• Maintain Quality and Consistency
• Reduce Costs
• Protection and Safety
How Do We Control The
Process?
• Accurate and Representative Field Signals
• Process this Information so it can be used
• Adjust Field Elements to get the process
inline

• Log this Information


Pressure and Temperature
Measurement
• Primary Elements
• Wells
• Location
• Transmitters
Primary Elements and Wells
Well used Thermocouples
Pressure Port
Measurement Location
Pressure and Temperature
Transmitters
Weighing of Materials
• Mechanical Weigh Feeders
• Solid-State Weigh Feeders
• Impact Flow Meters
• Pfister Feeders
• Weigh Bins
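
For the belt-type feeders above (mechanical or solid-state), the feed rate
is generally derived as belt loading times belt speed; a minimal sketch with
illustrative numbers:

    def belt_feed_rate(load_kg_per_m, speed_m_per_s):
        # Mass flow = belt loading x belt speed, converted from kg/s to t/h
        return load_kg_per_m * speed_m_per_s * 3.6

    print(belt_feed_rate(load_kg_per_m=45.0, speed_m_per_s=0.25))   # 40.5 t/h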
Mechanical Weigh Feeders
Solid-state Weigh Feeder
Pfister Feeder
Pfister Feeder Operation
Impact Flow Meter
Weigh Bins
Flow Measurement
Other Process Measurements
• Density
• Sound
• X-ray Analysis
• Infrared
• Current/Load
• Vibration
• Level
• Gas Analysis
• Opacity
• Lab Data
• Friction
• Conductivity
Don't always believe the salesman
• Conductivity meter
• Ranged 0-50 mS
• Alarm point 7.5 mS
• Factory calibrated and checked using the procedure in the manual
• Checked against a solution made up in the lab; a large discrepancy
(1-4%) was found between the simulated calibration and the absolute
calibration
• They're still looking into it!
Transmitting and Processing the
Signal
• Analog Signals: 4-20 mA, 10-50 mA, 0-10 Vdc, -10 to +10 Vdc; travel on
paired wires (scaled to engineering units as sketched below)
• Digital Signals: travel on twisted-pair wires, fiber optics, coaxial cable
• Received by: Individual controllers, Distributed Control Systems (DCS),
Programmable Logic Controllers (PLC), Panel Meters, Trend Recorders
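
A minimal sketch of the linear scaling applied to a 4-20 mA signal once it
is received; the 0-1200 degC transmitter range is an example, not taken
from the slides:

    def scale_4_20_ma(current_ma, range_lo, range_hi):
        # Linear scaling: 4 mA -> range_lo, 20 mA -> range_hi
        return range_lo + (current_ma - 4.0) / 16.0 * (range_hi - range_lo)

    # e.g. a temperature transmitter ranged 0-1200 degC reading 12 mA
    print(scale_4_20_ma(12.0, 0.0, 1200.0))   # 600.0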
Processing the Signal and
Controlling
How Things Have Changed in 30 Years
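
Whatever the hardware generation, most of these controllers still execute a
PID-type algorithm; a minimal sketch of one scan of a discrete PI loop with
simple anti-windup (gains, scan time and limits are illustrative only):

    def pi_scan(setpoint, measurement, integral, kp=2.0, ki=0.1, dt=1.0,
                out_lo=0.0, out_hi=100.0):
        # One scan of a discrete PI controller with output limiting
        error = setpoint - measurement
        new_integral = integral + ki * error * dt
        output = kp * error + new_integral
        if output > out_hi or output < out_lo:
            output = min(max(output, out_lo), out_hi)
            new_integral = integral            # hold the integral when saturated
        return output, new_integral

    integral = 0.0
    output, integral = pi_scan(setpoint=850.0, measurement=830.0, integral=integral)
    print(output)   # controller output in %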
Changing The Process
• Moving a damper
• Moving a valve
• Changing the speed of a motor
• Open/close
• On/off
• Chemistry
Relays to PLCs
Motor Control
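
The classic relay circuit carried over into PLC programs for motor control
is the start/stop seal-in; a minimal sketch of that logic in Python (in the
PLC it would normally be ladder logic or a function block):

    def motor_seal_in(start_pb, stop_pb, overload_ok, running):
        # The motor latches on via its own running contact and drops out
        # when stop is pressed or the overload trips
        return (start_pb or running) and not stop_pb and overload_ok

    running = False
    running = motor_seal_in(start_pb=True, stop_pb=False, overload_ok=True, running=running)
    running = motor_seal_in(start_pb=False, stop_pb=False, overload_ok=True, running=running)
    print(running)   # True - still latched after the start button is released
    running = motor_seal_in(start_pb=False, stop_pb=True, overload_ok=True, running=running)
    print(running)   # False - stopped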
Blue Circle Cement

PROCESS ENGINEERING TRAINING


PROGRAM

PRESENTATION

Kiln Control Systems


Golden Bay

Kiln Control Systems

Golden Bay CTC March 1998

1
Traditional Kiln Control Strategy Golden Bay

● Control of primary parameters and assessment
of the operating condition is the responsibility
of the kiln operator
● A few loops for secondary parameters are often
implemented
– e.g. cooler: volume control, fan speed control
● Historical reasons for this are:
– Non-linear relations
– Multivariable input
– Poor quality of process signals
2
Benefits of Kiln Control System Golden Bay

● Kiln control system = auto-pilot


– routine tasks can be handled by a computer
– operators can concentrate on “upset” condition
– process analysis and new control strategies can
be implemented
● Tool for process analysis
● Enables the application of visualisation
packages incl. equipment monitoring
● Information management

3
Benefits of Kiln Control System Golden Bay

● Control and Optimise operation


– Improve kiln output
– Reduce energy consumption
– Increase refractory life
– Improve quality
● clinker properties e.g. impact on grindability and
strength development
● lower standard deviation on variables

– Comply with environmental regulations, e.g.


SOx/NOx or CO emissions
4
Requirements for Control Systems Golden Bay

● Adequate instrumentation
● Reliable and stable “entry” parameters e.g. raw
mix chemistry, optimised combustion
● Continuous maintenance on instrumentation
and control system
● Trained personnel to do the fine tuning and
necessary modifications if system parameters
change

5
Choosing a Control System Golden Bay

● Distributed Control System (DCS)


– Supplied by a major company
– Usually well supported by company personnel
– Costly, but one is buying the specialised
knowledge of automation system manufacturer
● Programmable Logic Controller (PLC) or
Supervisory Control and Data Acquisition
(SCADA)
– Supplier mainly focuses on hardware control
– System configuration developed through a
software house - higher risk
6
Choosing a control system Golden Bay

7
Development of
Golden Bay
High Level Control

● Traditional approach to control strategies:


– modelling based on mathematical / empirical
relations
– statistical evaluation of process parameters
● Kiln operation was simulated successfully, but
practical implementation was limited due to
– too many assumptions necessary
– the theories developed became too complex /
side-tracked

8
Development of
Golden Bay
High Level Control

● Review of operator behaviour / human


decision making process
● Summarise the individual decisions into rule
blocks
– initiate multivariable control action
– combine information from variables
– identify kiln conditions
– generate suitable corrective actions
● Providing SUPERVISORY & OPTIMISING
Control
9
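
A deliberately simplified sketch of what one such rule block could look
like; the variables (burning zone temperature, back-end O2), thresholds and
trims are invented for illustration and are not taken from the presentation:

    def kiln_rule_block(bzt, o2, fuel_rate, feed_rate):
        # Combine two variables into an identified kiln condition and a corrective action
        if bzt < 1400 and o2 > 2.5:
            return "cooling, excess air", fuel_rate + 0.2, feed_rate
        if bzt > 1480 and o2 < 1.0:
            return "overheating, short of air", fuel_rate - 0.2, feed_rate + 2.0
        return "stable", fuel_rate, feed_rate

    print(kiln_rule_block(bzt=1390, o2=3.0, fuel_rate=10.0, feed_rate=120.0))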
Interface to Process Control System Golden Bay

● Input: Read data from the process


– e.g. temperature, pressure, gas composition,
amps
● Output: Write data to the process
– e.g. feed rate, kiln speed, coal rate, draft
● Auto pilot: Operator sets the overall
strategy
– e.g. setpoints (feed/fuel rate), targets
(emissions), laboratory data (clinker factor,
LSF)
10
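
A sketch of the read/write interface idea, using a plain dictionary as a
hypothetical stand-in for the plant control system; real installations would
go through a DCS/PLC driver or an OPC link, and the tag names and limits
here are invented:

    # Hypothetical stand-in for the plant control system
    plant = {"backend_o2": 2.3, "kiln_amps": 410.0, "feed_rate_sp": 120.0}

    def read_tag(tag):
        # Input: read a process value from the control system
        return plant[tag]

    def write_tag(tag, value):
        # Output: write a setpoint back to the control system
        plant[tag] = value

    max_feed = 125.0   # overall strategy (operator-set limit)
    if read_tag("backend_o2") > 2.0 and read_tag("feed_rate_sp") < max_feed:
        write_tag("feed_rate_sp", read_tag("feed_rate_sp") + 1.0)
    print(plant["feed_rate_sp"])   # 121.0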
Centralised Control Room Golden Bay

11
Centralised Control Golden Bay

12
Golden Bay

[Slides 13-17: images only - no text content]
Changes for the process operator Golden Bay

● Currently all information is on hand at a
glance
● Signals are displayed in charts or bar chart
type displays
● Visualisation could be improved with better
graphic software
● Specific information can be “pulled out” via
software
● New data presentation NEEDS to be discussed
with all parties involved

18
