
Process Control and Common Terms for Practitioners
Common terms in process control terminology
Analog Signal
Analog signals are continuous signals such as voltage or electric current, representing temperature,
pressure, level etc. The electrical current signal is usually 4-20 mA, where 4 mA is the minimum
point of the span and 20 mA is the maximum point of the span.
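As a quick illustration, converting between a 4-20 mA signal and engineering units is a simple linear scaling over the span. Below is a minimal Python sketch; the function names and the 0-150 degC range are illustrative assumptions, not part of the definition above.

```python
def ma_to_engineering(ma, lo=0.0, hi=150.0):
    """Convert a 4-20 mA signal to engineering units (assumed 0-150 degC span)."""
    return lo + (ma - 4.0) / 16.0 * (hi - lo)

def engineering_to_ma(value, lo=0.0, hi=150.0):
    """Convert an engineering value back to a 4-20 mA signal."""
    return 4.0 + (value - lo) / (hi - lo) * 16.0

print(ma_to_engineering(12.0))   # 12 mA is mid-span -> 75.0
```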

Analog to Digital Converting, A-D Converting


Electronic hardware that converts an analog signal such as voltage, electric current, temperature, or
pressure into digital data a computer can process and interpret.

Auto Mode
In auto mode the output is calculated by the controller using the error signal - the
difference between set point and the process variable.

Closed Loop
Controller in automatic mode.

Cascade
Two or more controllers working together. The output of the master controller is the
set point for the "slave" controller.

Controller Output - CO
Output signal from the controller.

DDE Windows Dynamic Data Exchange


A standard Microsoft operating system method for communicating between applications.
Replaced by OLE for process control - OPC.

Dead Band
The range through which an input can be varied without initiating a response.

Dead Time
Dead time is the amount of time it takes for the process variable to start changing after a change
in the output to a final control element such as a control valve or variable frequency drive.

Derivative - D
The derivative - D - part of a PID controller. With derivative action the controller output is
proportional to the rate of change of the process variable or process error.

Delay
A term commonly used instead of dead time.

Deviation
Any departure from a desired or expected process value.

Digital Signal
A discrete value at which an action is performed. A digital signal is a binary signal with
two distinct states - 1 or 0, often used as an on - off indication.

Digital Control System - DCS


Digital Control System - DCS - is a general term for larger digital control systems. See also Distributed Control System - DCS.

Discrete Logic
Refers to digital "on - off" logic.

Discrete I/O
On or off signals sent to or received from the field.

Distributed Control System - DCS


A control system where the controller elements are not central in location but
distributed throughout the system with each component sub-system controlled by
one or more controllers.

Dominant Lag Process


Most processes consist of both dead time and lag. If the lag time is larger than the
dead time, the process is a dominant lag process. Most process plant loops are
dominant lag types. This includes most temperature, level, flow and pressure loops.

Error
In the control loop the error = set point - process value.

Gain
Gain = 100 / Proportional Band. More gain in the controller gives a faster loop response
and a more oscillatory (less stable) process.
Gain in the process is defined as the change in the process variable divided by the change in
controller output. A process with high gain will react more to a change in controller output.

Gain Margin
The difference in the logarithms of the amplitude ratios at the frequency where the
combined phase angle is 180 degrees lag is the gain margin.
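As an illustration of this definition, the sketch below (Python, numpy assumed) finds the frequency where the combined phase lag of a first-order-plus-dead-time process under proportional-only control reaches 180 degrees and reports the gain margin as the usual ratio 1/AR at that frequency; the function name and the model parameters are illustrative assumptions, not values from this glossary.

```python
import numpy as np

def gain_margin(kc, gp, tau, td):
    """Gain margin of the open loop formed by a P-only controller (gain kc)
    and a first-order-plus-dead-time process (gain gp, lag tau, dead time td)."""
    w = np.logspace(-3, 3, 20000)                      # frequency grid, rad/time
    phase = -np.degrees(w * td) - np.degrees(np.arctan(w * tau))
    i = np.argmax(phase <= -180.0)                     # phase-crossover frequency
    ar = kc * gp / np.sqrt((w[i] * tau) ** 2 + 1.0)    # amplitude ratio there
    return 1.0 / ar                                    # ratio form; in dB: -20*log10(ar)

# Illustrative process: gain 1, lag 10, dead time 2, controller gain 1
print(round(gain_margin(kc=1.0, gp=1.0, tau=10.0, td=2.0), 1))   # about 8.5
```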

Hysteresis
The amount the signal must change before the output unit (valve or similar) moves.

Input/Output - I/O
Electronic hardware where the field devices are wired.

Integral Action - I

The integral part of the PID controller. With integral action, the controller output is
proportional to the amount and duration of the error signal. If there is more integral
action, the controller output will change more when error is present.

Load Upset
An upset to the process not from changing the set-point (process disturbance).

Lag Time
Lag time is the amount of time, after the dead time, that the process variable takes
to move 63.2% of its final change after a step change in valve position.

Measurement
Measurement is the same as the process value.

Manual Mode
In manual mode the output is set manually.

Mode
The controller can be set in auto, manual, or remote mode.

Man Machine Interface - MMI


Refers to the software through which the process operator operates the process.

Output
Output of the controller.

Overshoot
The amount the process variable exceeds the set point during a change in the system load or
a change in the set point.

PID Controller
Controller including Proportional, Integral and Derivative control functions. Cf.
ANSI/IEEE Standard 100-1977.

Process Value - PV
The actual value in the control loop: temperature, pressure, flow, composition, pH, etc.

Programmable Logic Controller - PLC


Controllers that replace relay logic, usually with built-in PID control functions.

Process Variable - PV
The actual value in the control loop, temperature, pressure, flow, composition, pH, etc.
See Process Value.

Proportional Band - P
With proportional control the controller output is proportional to the error or to a
change in the process variable. Proportional Band = 100 / Gain.

Rate
Same as the derivative or "D" part of PID controllers.

Register
A data storage location in a PLC.

Regulator
A controller that changes its output to move the process variable back to the set
point.

Repeatability
The variation in output for repeated identical changes of the input.

Reset
Same as the integral or "I" part of PID controllers.

Reset Windup
Integral action continuing to change the controller output value after the actual
output reaches a physical limit.

Response Time
The rate of interrogating a transmitter.

Sample Interval
The rate at which a controller samples the process variable and calculates a new
output.

Set Point
The set point is the desired value of the process variable.

Time Constant
Same as lag time.

Transmitter
A transmitter senses the actual value of a process variable and converts it to
a standardized signal - 4-20 mA is common for analog signals - as input to the control
system.

******************************************************
Glossary of Process Control Terms
By John Gerry, P.E., and George Buckbee, P.E., ExperTune Inc.

"A to D" or A/D Converter: A to D means Analog to Digital. This
electronic hardware converts an analog signal like voltage, electric current, temperature, or pressure into a digital number that a computer can process and interpret.
Active Model Capture: This technology involves the automated capture
of process models from naturally-occurring process data. For example,
when the operator makes a setpoint change, a process model can be
developed.
Auto Mode: In auto mode the controller calculates its output using the error signal
(the difference between setpoint and PV). See Mode.
Anti-Reset Windup: See reset windup.
Bump Test: To determine a process model, there needs to be some
"excitation" of the process. This is typically accomplished through bump
testing. Bump tests can be performed in many ways. Some ways to do this
include:

Make a setpoint change

With the loop in MANUAL, change the controller output

Perform a Fast Plant Test

Closed Loop: Controller in automatic mode. See Mode.


Cascade: Two or more controllers working together. The output of the "Master" controller
is the setpoint for the "Slave" controller. A classic example is the control of
a reactor (a large vessel with a steel jacket around it). The product
temperature (master) controller's output is the setpoint of the jacket
temperature (slave) controller.
Composition: A process variable. Represents the amount of one material
in a solution, or gas.
CO or Controller Output: Same as output.
Corner Frequency: For first order time constants, the "corner frequency"
is the frequency where the amplitude ratio starts to turn and the phase lag
equals 45 degrees. Also:
corner frequency = 1/(time constant) radians/time
DDE Windows Dynamic Data Exchange. A standard software method for
communicating between applications under Microsoft Windows. Created
by Microsoft starting with Windows 3.1. DDE is being replaced by OLE for
process control, OPC.
Dead Time: Dead time is the amount of time that it takes for your process variable to start
changing after your valve changes. If you were taking a shower, the dead time is the amount of
time it would take for you (the controller) to feel a change in temperature after you have adjusted
the hot or cold water.
Pure dead time processes are usually found in plug flow or solids
transportation loops. Examples are paper machine and conveyor belt
loops. Dead time is also called delay. A controller cannot make the process
variable respond before the process dead time.
To a controller, a process may appear to have more dead time than what it
actually has. That is, the controller cannot be tuned tight enough (without
going unstable) to make the process variable respond appreciably before
an equivalent dead time. More accurately, the characteristic time of the
loop is determined by equivalent dead time. Equivalent dead time consists
of pure dead time plus process components contributing more than 180
degrees of phase lag.
The phase of dead time increases proportionally with frequency. Any
process having more than 180 degrees phase lag has equivalent dead
time.
Derivative: The "D" part of PID controllers. With derivative action, the
controller output is proportional to the rate of change of the process
variable or error. Some manufacturers use the term rate or pre-act instead
of derivative. Derivative, rate, and pre-act are the same thing. Derivative
action can compensate for a changing process variable. Derivative is the
"icing on the cake" in PID control, and most people don't use it. It can
make the controller output jittery on a noisy loop and most people don't
use derivative on noisy loops for this reason. See presentation
on Derivative Action, the Good, the Bad, and the Ugly.
Delay: This term is often used in place of dead time. See dead time.
DCS: Digital Control System. DCS refers to larger digital control systems
like Fisher, Foxboro, Honeywell, and Bailey systems. DCSs were
traditionally used for PID control in the process industries, whereas PLCs
were used for discrete or logic processing. However, PLCs are gaining
capability and acceptance in doing PID control. Most utilities, refineries
and larger chemical plants use DCSs. These systems cost from twenty
thousand to millions of dollars.
Discrete Logic: Refers to digital or "on or off" logic. For example, if the
car door is open and the key is in the ignition, then the bell rings.
Discrete I/O: Senses or sends either "on or off" signals to the field. For
example a discrete input would sense the position of a switch. A discrete
output would turn on a pump or light.
Dominant Dead Time Process: If the dead time is larger than the lag
time the process is a dominant dead time process.
Dominant Lag Process: Most processes consist of both dead time and
lag. If the lag time is larger than the dead time, the process is a dominant
lag process. Most process plant loops are dominant lag types. This
includes most temperature, level, flow and pressure loops.
Error: Error = setpoint - PV. In auto mode, the controller uses the error in
its calculation to find the output that will get you to the setpoint.
Equivalent Dead Time: To a controller, a process may appear to have
more dead time than what it actually has. That is, the controller cannot be
tuned tight enough (without going unstable) to make the process variable
respond appreciably before an equivalent dead time. More accurately,
the characteristic time of the loop is determined by equivalent dead time
consisting of pure dead time plus process components contributing more
than 180 degrees of phase lag.
The phase of dead time increases proportionally with frequency. Any
process having more than 180 degrees phase lag has equivalent dead
time.
Fast Plant Test: A process test designed to quickly gather process model
information from slow processes. This method works well with slow loops
such as temperatures, compositions, and some tank levels.
Gain (of the controller): This is another way of expressing the "P" part
of the PID controller. GAIN = 100/(Proportional Band). The more gain a
controller has, the faster the loop response and the more oscillatory the
process.
Gain (of the process): Gain is defined as the change in the process variable divided by
the change in controller output. A process with high gain will react more to the
controller output changing. For example, picture yourself taking a shower.
You are the controller. If you turned the hot water valve up by half a turn
and the temperature changed by 10 degrees this would be a higher gain
process than if the temperature changed only 3 degrees.
Gain Margin: The difference in the logarithms of the amplitude ratios at
the frequency where the combined phase angle is 180 degrees lag is the
GAIN MARGIN.
Hysteresis: In a valve with loose linkages, the air signal to the valve will
have to change by an amount equal to the hysteresis before the valve
stem will move. Once the valve has begun to move in one direction it will
continue to move if the air signal keeps moving in the same direction.
When the air signal reverses direction, the valve will not move until the air
signal has changed in the new direction by an amount equal to the
hysteresis.
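One crude way to picture this behavior is a backlash model, in which the stem follows the air signal only after the signal has travelled through the hysteresis band in the current direction. The Python sketch below is an illustrative model only; the class name and numbers are assumptions.

```python
class ValveWithHysteresis:
    """Crude backlash model of a valve with loose linkages."""

    def __init__(self, hysteresis, position=50.0):
        self.h = hysteresis          # hysteresis band, % of signal
        self.position = position     # stem position, %

    def apply(self, signal):
        if signal > self.position + self.h / 2:      # signal leads the stem upward
            self.position = signal - self.h / 2
        elif signal < self.position - self.h / 2:    # signal leads the stem downward
            self.position = signal + self.h / 2
        return self.position

valve = ValveWithHysteresis(hysteresis=2.0)          # 2% hysteresis
print([valve.apply(s) for s in (50, 52, 54, 53, 52, 51)])
# -> [50.0, 51.0, 53.0, 53.0, 53.0, 52.0]; after the reversal at 54 the stem
# does not move again until the signal has dropped by a full 2%.
```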
I/O: Input/Output. Refers to the electronic hardware where the field
devices are wired. Discrete I/O would have switches for inputs and relay
outputs to fire solenoid valves or pump motors. Analog I/O would have
process variable inputs, and variable controller outputs.
Integrating Process: With these loops, making a small change in the
controller output will cause the process variable to ramp until it hits a
limit. The larger the change, the faster the ramp. Also, the smaller the
integral time, the faster it will move. It is a common misconception that
integral time in the controller is not required to hold setpoint with an
integrating process. Most control loops are self-regulating. Self-regulating
means that with a change in the controller output, the process variable will
move and then settle. Integrating loops are also described as non-self-regulating.
The most common example of an integrating process is tank level.
Integral Action: The "I" part of the PID controller. With integral action,
the controller output is proportional to the amount and duration of the
error signal. If there is more integral action, the controller output will
change more when error is present. If your units on integral are
"time/rep" or "time", then decreasing your integral setting will increase
integral action. If your units on integral are "rep/time" or "1/time", then
increasing your integral setting increases integral action.
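The unit conversion implied here is just a reciprocal; a tiny Python sketch (the function name and values are illustrative):

```python
def to_repeats_per_minute(ti_minutes_per_repeat):
    """Convert an integral time (minutes/repeat) to an integral gain (repeats/minute)."""
    return 1.0 / ti_minutes_per_repeat

print(to_repeats_per_minute(2.0))   # Ti = 2 min/rep  ->  0.5 rep/min
# Halving Ti to 1 min/rep (more integral action) doubles the setting to 1.0 rep/min.
```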
Load Upset: An upset to the process (that is not from changing the setpoint). A simple example: you are taking a shower and someone flushes
the toilet. The temperature suddenly changes on you, the controller.
Another example: you are injecting steam into flowing cold water to get
lukewarm water, and the inlet cold water changes temperature.
Lag Time: Lag time is the amount of time, after the dead time, that the
process variable takes to move 63.2% of its final change after a step change
in valve position. Lag time is also called a capacity element or a first order
process. Very few real processes are pure lag. Almost all real processes
contain some dead time.
Measurement: Same as "process variable."
Manual Mode: In manual mode, the user sets the output. See Mode.
Mode: Auto, manual, or remote. In auto mode the controller calculates its
output using the error signal (difference between setpoint and PV). In manual
mode, the user sets the output. In remote, the controller is actually in auto
but gets its setpoint from another controller.
MMI: Man Machine Interface. Also known as "HMI" or Human-Machine
Interface. Refers to the software and hardware that the process operator
"sees" the process with. An example MMI screen may show you a tank
with levels and temperatures displayed with bar graphs and values. Valves
and pumps are often shown and the operator can "click" on a device to
turn it on, off or make a setpoint change. Examples are Intellution's FIX
DMACS, Wonderware's Intouch, Genesis's ICONICS, TA Engineering's
AIMACS, and Intec's Paragon.
Open Loop: Controller in manual mode. See Mode.
OPC or OLE for Process Control is a standard set by the OPC
Foundation for fast and easy connections to controllers. ExperTune Inc. is
an OPC Foundation member.

Output: Output of the PID controller. In auto mode the controller
calculates its output using the error signal (difference between setpoint
and PV). In manual mode, the user sets the output.
Phase Margin: The difference in phase at the frequency where the
combined process and controller amplitude ratio is 0 is the PHASE
MARGIN.
PID Controller: Controllers are designed to eliminate the need for
continuous operator attention. Cruise control in a car and a house
thermostat are common examples of how controllers are used to
automatically adjust some variable to hold the measurement (or
process variable) at the set-point. The set-point is where you would like
the process variable to be. Error is defined as the difference between set-point and process variable:
(error) = (set-point) - (process variable)
The output of PID controllers will change in response to a change in
process variable or set-point.
pH: A measure of how acidic or basic a solution is. pH is often a process
variable to control.
PLC: Programmable Logic Controller. These computers replace relay logic
and usually have PID controllers built into them. PLCs are very fast at
processing discrete signals (like a switch condition). The most popular PLC
manufacturers are Allen Bradley, Modicon, GE, and Siemens (or TI).
PV or Process Variable: What you are trying to control: temperature,
pressure, flow, composition, pH, etc. Also called the measurement.
Proportional Band: The "P" of PID controllers. With proportional band,
the controller output is proportional to the error or a change in process
variable. Proportional Band = 100/Gain.
Proportional Gain: This is the "P" part of the PID controller. See Gain (of
the controller). (Proportional gain) = 100/(Proportional Band).
PV Tracking: An option on many controllers. When a control loop is in
MANUAL, with PV Tracking turned on, the controller setpoint will follow the
PV. When the loop is returned to AUTO, there is no sudden movement of
the process, because the PV is already at setpoint. If PV Tracking is turned
off, returning to AUTO will drive the loop to its previous setpoint.
Rate: Same as the derivative or "D" part of PID controllers.
Register: A storage location in a PLC. The ExperTune PID Tuner needs to
know certain register addresses to tune loops in PLCs.
Regulator: When a controller changes its output to move the
process variable back to the setpoint, it is called a regulator.

Reset: Same as the integral or "I" part of PID controllers.


Reset Windup: With a simple PID controller, integral action will continue
to change the controller output value (in voltage, air signal or digital
computer value) after the actual output reaches a physical limit. This is
called reset (integral) windup. For example, if the controller is connected
to a valve which is 100% open, the valve cannot open farther. However,
the controller's calculation of its output can go past 100%, asking for more
and more output even though the hardware cannot go past 100%. Most
controllers use an "anti-reset windup" feature that disables integral action
using one of a variety of methods when the controller hits a limit.
Robust: A loop that is robust is relatively insensitive to process changes.
A less robust loop is more sensitive to process changes. See a
presentation on Loop Stability, The Other Half of the PID Tuning Story
Sample Interval: The rate at which a controller samples the process
variable, and calculates a new output. Ideally, the sample interval should
be set between 4 and 10 times faster than the process dead time. See a
presentation on What Sample Interval Should I Use?
Set-Point: The set-point is where you would like the process variable to
be. For example, the room you are in now has a setpoint of about 70
degrees. The desired temperature you set on the thermostat is the
setpoint.
Servo: When a controller changes its output to move the process
variable in response to a setpoint change, it is called a servo.
Time Constant: Same as lag time.

Dead Time versus Time Constant


The dynamic response of self-regulating processes can be described reasonably
accurately with a simple model consisting of process gain, dead time and lag (time
constant). The process gain describes how much the process will respond to a
change in controller output, while the dead time and time constant describe how
quickly the process will respond.
Although the dead time and time constant both seem to describe the same thing,
there are several fundamental differences between how dead time and time constant
affects a control loop. The first difference is that dead time describes how long it
takes before a process begins to respond to a change in controller output, and the
time constant describes how fast the process responds once it has begun moving.
Measuring the Dead Time and Time Constant of a Process
Let's begin with the measurement of dead time and time constant of a self-regulating
process. Typically, one will place the controller in manual control mode, wait for the
process variable to settle down, and then make a step change of a few percent in the
controller output. At first the process variable does nothing (dead time), then it
begins changing (time constant), until finally it settles out at a new level.

Measuring Dead Time and Time Constant


To measure the dead time and time constant, draw a horizontal line at the same level
as the original process variable. We'll call this the baseline. Then find the maximum
vertical slope of the process variable response curve. Draw a line tangential to the
maximum slope all the way to cross the baseline. We'll call this crossing
the intersection.
- The process dead time is measured along the time axis as the time spanned
between the step change in controller output and the intersection.
Next, measure the total change in process variable. Then find the point on the
process response curve where the process variable has changed by 0.63 of the total
change in process variable. We'll call this point P63.
- The process time constant is measured along the time axis as the time spanned
between the intersection (described previously) and P63.
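The graphical procedure above can also be carried out numerically. The Python sketch below (numpy assumed) follows the same tangent-and-63% idea; it assumes a single step change in controller output and a fairly noise-free, self-regulating response, and the function name and interface are illustrative.

```python
import numpy as np

def estimate_fopdt(t, co, pv):
    """Estimate dead time, time constant and gain from step-test data,
    mimicking the graphical tangent method described above."""
    t, co, pv = (np.asarray(x, dtype=float) for x in (t, co, pv))

    # Time of the (single) step change in controller output
    t_step = t[np.argmax(np.abs(np.diff(co)) > 0) + 1]

    baseline = pv[t < t_step].mean()                  # original PV level
    final = pv[-max(len(pv) // 10, 1):].mean()        # settled PV level
    d_pv, d_co = final - baseline, co[-1] - co[0]

    # Tangent at the point of maximum slope, extended back to the baseline
    slope = np.gradient(pv, t)
    i_max = np.argmax(np.abs(slope))
    t_intersect = t[i_max] - (pv[i_max] - baseline) / slope[i_max]

    # P63: the point where the PV has completed 63% of its total change
    i_p63 = np.argmax(np.abs(pv - baseline) >= 0.63 * abs(d_pv))

    dead_time = t_intersect - t_step
    time_constant = t[i_p63] - t_intersect
    gain = d_pv / d_co
    return dead_time, time_constant, gain
```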
Dead Time versus Time Constant
We can draw a chart with a continuum from dead time to time constant (see figure
below). Processes with dynamics consisting of pure dead time will be on the left and
pure lag (time constant) on the right. In the middle, the process dead time will
equal its time constant.
We'll find that flow loops and liquid pressure loops fall just about in the middle of the
continuum, because their dead time and time constant are almost equal. Gas
pressure and temperature loops will be located more toward the right: they are lag
(time constant) dominant. Serpentine channels in water treatment plants and
conveyors with downstream mass meters will appear on the left side: they are
dead-time dominant.
Level loops should actually be treated differently, but can be approximated on the
continuum by replacing the time constant with their residence time (the time they
would take to fill or empty out at full flow rate). Most level loops will be located far to
the right, having relatively short dead times.
The ratio of dead time to time constant affects the controller modes and tuning rules
we use, the controllability of the process, and the minimum possible loop settling
time.

A continuum from pure Dead Time to pure Lag


Controller Modes
The derivative control mode works well where process variables continue to move in
the same direction for some time, i.e. on lag-dominant processes. Derivative control
does not work well on processes where the process variable changes sporadically,
typically processes with relatively short time constants, located in the middle and to
the left on the continuum.
Applicability of Tuning Rules
Most tuning rules will work on lag-dominant processes. However, the Ziegler-Nichols
rules have only a narrow range of applicability. Lambda / IMC tuning rules
apply to a broader spectrum of processes, while Cohen-Coon has the widest
coverage. The Dead-Time tuning rule applies to processes on the left, as its name
implies.
Controllability
Lag-dominant loops are easier to control than dead-time-dominant loops. Operators
find that lag-dominant processes respond much more intuitively than dead-time-dominant
processes and are easier to control in manual mode.
Loop Settling Time
When tuning a loop for the shortest possible settling time, one finds that there is a
minimum limit on settling time. If you tune the controller any tighter, the loop will
begin oscillating. The minimum settling time depends mostly on the amount of dead
time in the control loop, and will be between two and four times the length of the dead
time. The ratio of time constant to dead time determines where the minimum settling
time falls between two and four times the process dead time.

Tuning Rule for Dead-Time Dominant Processes


December 15, 2010
Processes with lags or time constants (tau) longer than their dead times (td) are
reasonably easy to tune. Most tuning rules work well for processes where tau > 2 td
(lag dominant). The opposite is not true. Many tuning rules work very poorly when td
> 2 tau (dead-time dominant).
Lag Dominant
When a process has a time constant that is much longer than the dead time,
problems like overshoot and having to use high controller gains begin to appear.
However, loops with long time constants still act in an intuitive way: if we add more
control action we can make the process respond faster, just as stepping down harder
on the accelerator will get our car to the desired speed quicker.
Dead-Time Dominant
On the other side of the spectrum, when a process dead time is significantly longer
than its time constant, it behaves much less intuitively: adding more control action
does not make the process respond faster. For example, if your shower water is a
little cold, opening the hot water tap a lot more is not going to get you to the right
temperature any quicker, and it is going to have some serious side-effects.
I once saw several operators struggle to manually control the outlet temperature of a
three-pass kiln. The kiln was a dead-time dominant process and its dead time
was about 10 minutes long. The operators would notice that the temperature was below set
point and increase the firing rate. Seeing no effect, they would increase the firing
rate more. And then some more, and more. Finally, when the changes had made their
way through the dead time, the temperature would overshoot its set point by a large
margin. Then the operators would take the same actions and make the same mistakes in
the opposite direction.
Needless to say, controller tuning also becomes difficult on dead-time dominant
processes.
Tuning

Step response of a dead-time dominant process.


You will find that the Ziegler-Nichols tuning rules don't work well at all on a dead-time
dominant process. For example, the following process characteristics were
measured from the step response of the dead-time dominant process in the previous
plot:
td = 0.276 minutes
tau = 0.013 minutes
gp = 0.89
Applying the Ziegler-Nichols tuning rules to this process gives the following controller
settings: Kc = 0.05; Ti = 0.92 minutes. The result is an extremely sluggish control
loop (see below).

Dead-time dominant loop tuned with the Ziegler-Nichols tuning rules.


The Lambda tuning rules were designed for lag-dominant processes and do not work
all that well on dead-time dominant processes either. The Cohen-Coon tuning
rules work much better than the Ziegler-Nichols rules, but they too aren't the best
tuning rule when the dead time is five or ten times as long as the time constant.

So what type of tuning rule will work well for controlling dead-time dominant
processes? First, we need a lag-dominant controller, to make up for the absence of
lag in the process. But if we just crank up the integral term, the loop will become
unstable. So, second, we have to compensate by decreasing the controller gain.
The Cohen-Coon PI tuning rules will work reasonably well up to td = 2 tau, but they
become sluggish after that. When td > 2 tau, it is better to use the dead-time tuning
rule. It is as follows:
Kc = 0.36 / (gp * SM)
Ti = td / 3
No derivative.
SM is the stability margin and can be set to a value between 1 and 4. A value of 1 is
equivalent to the 1/4-amplitude damping response. It is considered unsafe because the loop
is very sensitive to changes in process conditions. A value of 2 or higher is
recommended. It will reduce the overshoot, eliminate unnecessary cycling, and make
the loop far more robust to changes in process conditions.
Hint: measure dead time in the same units of time as your controller's integral
setting. E.g. if your controller's Ti setting is in minutes, measure td in minutes.
Notes:
- The tuning rules above are designed to work on controllers with interactive or non-interactive algorithms, but not controllers with parallel algorithms.
- Furthermore, they will work only on controllers with a controller gain setting and not
a proportional band (found on Foxboro I/A controllers, for example).
- The rules assume the controller's integral setting is in units of time (minutes or
seconds), and not integral gain or rate (repeats per minute or repeats per second).
If your controller is different, parameter conversions will allow you to use these rules.
Applying the dead-time tuning rules to the process described above gives the
following controller settings: Kc = 0.2; Ti = 0.092 minutes. The result is significantly
better than what can be obtained with other tuning rules.
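To make the arithmetic explicit, here is a minimal Python sketch of the dead-time rule (the function name is illustrative). With the measured values gp = 0.89, td = 0.276 minutes and a stability margin of 2, it reproduces the settings quoted above.

```python
def dead_time_pi_tuning(gp, td, sm=2.0):
    """Dead-time tuning rule for a PI controller (no derivative).
    gp: process gain, td: dead time, sm: stability margin (1 to 4)."""
    kc = 0.36 / (gp * sm)    # controller gain
    ti = td / 3.0            # integral time, same units as td
    return kc, ti

kc, ti = dead_time_pi_tuning(gp=0.89, td=0.276)
print(round(kc, 2), round(ti, 3))   # -> 0.2 0.092 (minutes)
```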

Dead-time dominant loop tuned with the Dead-Time tuning rules.

Better loop response can be obtained with a Smith Predictor, but that is more
complex to implement, more tedious to tune, sensitive to changes in process
characteristics, and perhaps the topic of a future blog.

Process Controllers
Basic process controllers with proportional, integrating and
derivative functions

Basic Controller
The basic controller for an application can be visualized as shown in the figure below.

The controller consists of

a measuring unit with an appropriate instrument to measure the state of the process -
a temperature transmitter, pressure transmitter or similar.

an input set point device to set the desired value.

a comparator for comparing the measured value with the set point, calculating the
difference or error between the two.

a control unit to calculate the output magnitude and direction to compensate for the
deviation from the desired value.

an output unit converting the output from the controller to physical action - a control
valve, a motor or similar.

Controller Principles

The control units are in general built on the following control principles:

proportional controller

integral controller

derivative controller

Proportional Controller (P-Controller)


One of the most used controllers is the Proportional Controller (P-Controller), which
produces an output action that is proportional to the deviation between the set point and
the measured process value.
OP = -kP Er    (1)

where
OP = output of the proportional controller
kP = proportional gain or action factor of the controller
Er = error or deviation between the set point value and the measured value
The gain or action factor - kP -

influences the output with a magnitude of kP

determines how fast the system responds. If the value is too large the system will
be in danger of oscillating and/or becoming unstable. If the value is too small the
system error or deviation from set point will be very large.

can be regarded as linear only for very small variations.

The gain kP can be expressed as


kP = 100 / P    (1b)

where
P = proportional band
The proportional band P expresses the error value necessary for 100% controller output. If P =
0, the gain or action factor kP would be infinite - the control action would be ON/OFF.
Note! A proportional controller will have the effect of reducing the rise time and will
reduce, but never eliminate, the steady-state error.

Integral Controller (I-Controller)


With integral action, the controller output is proportional to the amount of time the error is
present. Integral action eliminates offset.
OI = - kI ∫ Er dt    (2)

where
OI = output of the integrating controller
kI = integrating gain or action factor of the controller
dt = time sample
The integral controller produces an output proportional to the summed (integrated) deviation
between the set point and the measured value, scaled by the integrating gain or action factor.
Integral controllers tend to respond slowly at first, but over a long period of time they tend
to eliminate errors.
The integral controller eliminates the steady-state error, but may make the transient
response worse. The controller may be unstable.
The integral regulator may also cause problems during shutdowns and start-ups as a
result of integral saturation, or the wind-up effect. An integrating regulator with a sustained
deviation over time (typical during plant shut-downs) will wind the output up to +/- 100%.
During start-up the output is then at 100%, which may be catastrophic.

Derivative Controller (D-Controller)


With derivative action, the controller output is proportional to the rate of change of the
measurement or error. The controller output is calculated from the rate of change of the
deviation or error with time.

OD = - kD dEr / dt    (3)

where
OD = output of the derivative controller
kD = derivative gain or action factor of the controller
dEr = change in deviation over the time sample dt
dt = time sample
The derivative or differential controller is never used alone. With sudden changes in the
system the derivative controller will compensate the output quickly, but over the long term
a derivative-only controller allows huge steady-state errors.
A derivative controller will in general have the effect of increasing the stability of the
system, reducing the overshoot, and improving the transient response.

Proportional, Integral, Derivative Controller (PID-Controller)


The functions of the individual proportional, integral and derivative controllers
complement each other. If they are combined it is possible to make a system that
responds quickly to changes (derivative), tracks required positions (proportional), and
reduces steady state errors (integral).
Note that these correlations may not be exactly accurate, because P, I and D are
dependent on each other. Changing one of these variables can change the effect of the
other two.
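To make the combination concrete, below is a minimal discrete PID sketch in Python. Note that it uses the more common positive-gain convention with error = set point - measured value (equations (1) to (3) above carry a leading negative sign), and it adds a simple conditional-integration clamp as crude protection against integral wind-up; the class name, gains and limits are illustrative assumptions.

```python
class PID:
    """Minimal discrete PID controller sketch (error = SP - PV, positive gains).
    Real controllers add derivative filtering, bumpless transfer and more."""

    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        p = self.kp * error                                  # proportional term
        d = self.kd * (error - self.prev_error) / self.dt    # derivative term

        # Only accumulate the integral if the output is not saturated (anti-windup)
        candidate = self.integral + self.ki * error * self.dt
        if self.out_min < p + candidate + d < self.out_max:
            self.integral = candidate

        self.prev_error = error
        out = p + self.integral + d
        return min(max(out, self.out_min), self.out_max)     # clamp to output limits
```

A controller object is created once, e.g. pid = PID(kp=2.0, ki=0.5, kd=0.0, dt=1.0), and pid.update(setpoint, pv) is then called once per sample interval to obtain the controller output in percent.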

Controller Response     Rise Time       Overshoot     Settling Time     Steady State Error
Proportional (kP)       Decrease        Increase      Small Change      Decrease
Integral (kI)           Decrease        Increase      Increase          Eliminate
Derivative (kD)         Small Change    Decrease      Decrease          Small Change

http://blog.opticontrols.com/site-map

Control Notes
Reflections of a Process Control Practitioner

Below are the contents of the Control Notes website, not the contents of the book.

1. General

Introduction to Control Systems and Optimization

Quarter Amplitude Damping

Settings in the Controller were Closer than they Appeared

Tools of the Tuner

2. Process Characteristics

Causes of Dead Time in a Control Loop

Dead Time versus Time Constant

Inverse Response

Level Control Loops

3. PID Controllers

Bumpless Transfer and Bumpless Tuning

Derivative Control

Gap Control

PID Controller Algorithms

PID Controllers Explained

Settings in the Controller were Closer than they Appeared

Unraveling Controller Algorithms

4. Controller Tuning

Cohen-Coon Tuning Rules

Comments on the Ziegler-Nichols tuning method

Detuning Control Loops

Is Lambda a Bad Tuning Rule?

Lambda Tuning Rules

Level Controller Tuning

Minimum IAE Tuning Rules

Quarter Amplitude Damping

Surge Tank Level Control

Tank Level Tuning Complications

Tuning Rule for Dead-Time Dominant Processes

Typical Controller Settings

When to Use which Tuning Rule

Ziegler-Nichols Closed-Loop Tuning Method

Ziegler-Nichols Open-Loop Tuning Rules

5. Control Valves

Butterfly Valves and Control Performance

Control Valve Linearization

Control Valve Problems

Equal Percentage Control Valves and Applications

Valve Diagnostics on a Level Loop

6. Loop Performance, Problems, and Diagnostics



An Oscillating Level Control Loop

Butterfly Valves and Control Performance

Caster Level Control Improvement

Control Loop Performance Monitoring

Control Valve Problems

Diagnosing and Solving Control Problems

Q&A on Loop Performance

Valve Diagnostics on a Level Loop

7. Control Strategies

A Tutorial on Cascade Control

A Tutorial on Feedforward Control

Butterfly Valves and Control Performance

Caster Level Control Improvement

Control Valve Linearization

Drum Level Control

Improving pH Control

Ratio Control

Steam Temperature Control

8. Case Studies

A pH Control Success Story

An Oscillating Level Control Loop

Butterfly Valves and Control Performance

Caster Level Control Improvement

Flow Control Conundrum

How to Fill a Container

Inverse Response

Level Versus Flow Control

Pressure and Flow Control Loop Interaction

Process Oscillations from Afar

Ratio Control

Tank Level Tuning Complications

9. Tips and Work-Process



Best Practices for Control Loop Optimization

Diagnosing and Solving Control Problems

Process Control for Practitioners

Testing Control Loop Performance

Tools of the Tuner

Tuning Tips - How to Improve Your Results

When to Use which Tuning Rule

Why Tuning Rules Don't Always Work

1. General
Introduction to Control Systems and Optimization
Quarter Amplitude Damping
Settings in the Controller were Closer than they Appeared
Tools of the Tuner

Introduction to Control Systems and Optimization


January 2, 2010

Since my company, OptiControls Inc, specializes in the optimization of process control systems, I
thought it would be appropriate to begin my blog with a non-technical description of the problem it
solves for customers, and how it is done.
What is automatic control?

Cruise Control
One of the best-known domestic examples of automatic control is the cruise control of a motor
car. The cruise control keeps the car's speed constant, despite road gradient and wind direction. When
the road runs uphill or downhill, the cruise control automatically changes the accelerator position to keep
the car's speed constant.
Similarly, industrial processes have automatic control systems for keeping them under control and
maintaining all process conditions close to their specified operating levels. For example, at a power
station, the water level in the boiler, steam temperature, and steam pressure (as well as many other
items) are kept in check by the automatic control system. Complex process plants can have hundreds or
even thousands of individual temperatures, flows, levels, pressures and other conditions that are
controlled simultaneously.

Simple Flow Control Loop


At the core of an automatic control system are individual controllers each controlling one aspect of the
process. Each controller monitors a specific process condition via feedback from a sensor and
compares it to the desired value (set point). The controller tries to correct any difference between the
measurement and set point by changing its output to the process, which changes the position of a final
control element (like a valve) and drives the process back towards the set point. This loop consisting of
the measurement, controller, final control element, and process, is called the control loop.
So what's the challenge with this?

Three-Loop Controller
Industrial controls need to be properly tuned to do a good job of regulating all the process conditions.
Improperly tuned controls can cause unsafe process conditions, poor product quality, unnecessary plant
shut-downs, longer start-up times and higher operating and maintenance costs.
An example of where humans act like controllers is when we regulate the water temperature while
taking a shower. If the water is too cold, we open the hot water tap a bit and when the water is too hot
we close it a bit. And we all know from experience how important it is to turn the tap the right amount
and at the right speed. If we turn it too much or too fast we will get burnt or chilled, if we turn it too little
or too slow we will be uncomfortable for a longer period.
A controller has adjustable settings that govern the magnitude and rate of the changes it will make to the
process. The magnitude and rate of the controller's output changes should be optimized for the
dynamics of the process it is controlling. If the controller reacts too fast the process will overshoot its set
point. If the controller reacts too slowly it will take too long to get to set point. Getting the right tuning
settings for some complex processes can be quite challenging.
How well are industrial controls performing?

Unfortunately, poorly functioning controls are very common in industry. Various studies have shown that
up to 30% of controllers do not function in automatic control mode at all, while another 30% of control
loops function quite poorly in automatic control mode.
In many cases the problems exist because the personnel who originally installed the control system
were not well skilled at optimizing the controls. The controllers were tuned very roughly and only well
enough to get the process up and running, frequently leaving much room for improvement. In addition to
this bad start, process dynamics often change during operation, and the integrity of control equipment
deteriorates over time, which further reduces the effectiveness of the controls.
How should automatic controls be optimized?
You should always do control loop optimization in a systematic way, working closely with the process
operator and process engineer. Before a controller is tuned, the purpose of the control loop and the
control objective are established. Then the design of the control loop is reviewed and diagnostic tests
are run to ensure proper performance of the measuring device and final control element. Assuming no
problems are found, the controller is tuned to work in harmony with the dynamics of the process it is
controlling, and to meet the overall control objective of the loop.
The dynamic behavior of the process is determined by analyzing data from a simple process response
test. Appropriate controller settings are calculated using tuning formulas or a computer program. Finally,
the new settings are entered into the controller and one or more response tests are done to ensure the
process is being controlled properly and that the control objective is met. Ideally, the controller's
performance will be monitored periodically for a few days after tuning and under different process
conditions to verify improved operation.

Quarter Amplitude Damping


October 4, 2013

Quarter-amplitude damping is likely the best-known tuning objective, but it's a poor choice for process
stability. Also called quarter amplitude decay or QAD, many tuning rules, including the famous
Ziegler-Nichols and Cohen-Coon tuning rules, were designed for this objective. The idea behind
quarter-amplitude damping is to eliminate any error between the setpoint and process variable very fast. In
fact, the controller responds so fast that the process variable actually overshoots its setpoint and
oscillates a few times before it finally comes to rest (Figure 1). The deviation from setpoint gets smaller
with each successive cycle at a ratio of 4:1. In Figure 1, the ratio of B/A = 1/4.

Figure 1. A quarter-amplitude-damping response after a process disturbance.
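A quick way to check whether a recorded response is close to quarter-amplitude damping is to compare successive peak deviations from setpoint. The Python sketch below (numpy assumed) does this for a reasonably clean trace with peaks above the setpoint; the function name and the synthetic data are illustrative.

```python
import numpy as np

def decay_ratio(pv, setpoint):
    """Ratio of the second peak deviation to the first (B/A in Figure 1).
    A value near 0.25 indicates quarter-amplitude damping."""
    dev = np.asarray(pv, dtype=float) - setpoint
    peaks = [i for i in range(1, len(dev) - 1)               # local maxima above SP
             if dev[i] > dev[i - 1] and dev[i] >= dev[i + 1] and dev[i] > 0]
    return dev[peaks[1]] / dev[peaks[0]] if len(peaks) >= 2 else None

# Synthetic QAD-like response around a setpoint of 100 (illustration only)
t = np.linspace(0, 50, 501)
pv = 100 + 10 * np.exp(-0.277 * t) * np.cos(2 * np.pi * t / 5)
print(round(decay_ratio(pv, setpoint=100), 2))   # -> 0.25
```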

When developing their tuning rules, Ziegler and Nichols chose quarter-amplitude damping to be
optimum control loop response. Although QAD performance lies in the middle between a completely
dead controller and an unstable control loop, you should realize that quarter-amplitude damping, by
design, causes the process to overshoot its set point and to oscillate around it a few times before
eventually settling down. Practitioners with solid experience in controller tuning will all tell you that

quarter-amplitude-damping is a very poor choice for tuning industrial control loops.
Problems with Quarter-Amplitude Damping
Although the quarter-amplitude damping tuning objective provides very fast rejection of disturbances, it
creates three problems:
1. It makes the loop very oscillatory, often causing interactions with similarly tuned loops. If control
loops in a highly interactive process, such as a paper machine, power plant boiler, or
hydrodealkylation process, are tuned for quarter-amplitude damping, oscillations affecting the
entire process often occur.

2. It causes a loop to overshoot its setpoint when recovering from a process disturbance and after
a setpoint change. Many processes cannot tolerate overshoot.

3. QAD-tuned loops are not very stable and have low robustness. They can very easily become
completely unstable if the process characteristics change. For example, such a loop will become
unstable if its process gain doubles, which can happen very easily in industrial processes.

Solution
An easy way to minimize all three problems is to reduce the controller gain (detune the controller). The
minimum reduction I recommend is to use the calculated Kc divided by two (or more if necessary). For
example, if a quarter-amplitude-damping tuning rule suggests using a controller gain of 0.9, then use
0.45 instead. This will greatly reduce oscillations and overshoot in the control loop, and it will increase
the loop's robustness by a factor of two. (Please note that if your controller uses a parallel algorithm, you
have to reduce Kp, Ki, and Kd to achieve the equivalent effect.)
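A small sketch of this detuning step, covering both the usual single-gain (ideal/series style) case and the parallel-algorithm case mentioned in the note; the function name is illustrative.

```python
def detune(settings, factor=2.0, parallel=False):
    """Reduce the aggressiveness of a set of PID tuning settings.
    Single-gain forms: divide only the controller gain Kc.
    Parallel forms: divide Kp, Ki and Kd for the equivalent effect."""
    s = dict(settings)
    keys = ("Kp", "Ki", "Kd") if parallel else ("Kc",)
    for k in keys:
        s[k] = s[k] / factor
    return s

print(detune({"Kc": 0.9, "Ti": 2.0}))                              # Kc -> 0.45
print(detune({"Kp": 0.9, "Ki": 0.45, "Kd": 0.1}, parallel=True))   # all halved
```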
Stay tuned!
Jacques Smuts
Principal Consultant at OptiControls, and author of Process Control for Practitioners.


One Response to Quarter Amplitude Damping

Don Parker:
October 8, 2013 at 5:11 pm
Jacques,
I have worked with boiler/turbine controls for many years and could not agree more. So many
of the processes are interactive that they must be tuned without oscillation, generally with
maximum overshoot of about 5%.
Of course there is also the problem of over-active actuators, which can cause premature
aging, wear, linkage hysteresis, etc.
I have found Lambda tuning to be a very successful method for many power plant control
loops.

Settings in the Controller were Closer than they Appeared
May 17, 2012

Before I do step-testing to analyze and tune a control loop, I always take a look at the current tuning
settings in the controller.

The controller's gain setting gives me some indication of the sensitivity of the process, e.g. if
the controller gain is 0.1 the process could be very sensitive to controller output changes.

The controller's integral time gives me an idea of the speed of the process dynamics, i.e. a
short integral time usually means fast process dynamics and vice versa.

The derivative time (if used) can reveal if the last person tuning the loop lacked understanding
of the tuning process, e.g. if the derivative time is set to more than half the integral time, or less
than one-eighth of it (these rough checks are sketched in code after this list).
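These rough first-look checks are simple enough to write down; the sketch below is only an illustration of the rules of thumb in the list (the function name and thresholds are assumptions, not hard rules).

```python
def tuning_first_look(gain, ti, td=None):
    """Rough first-look heuristics: controller gain, integral time Ti, derivative time Td."""
    notes = []
    if gain <= 0.1:
        notes.append("Low controller gain: process may be very sensitive to CO changes.")
    if td and not (ti / 8.0 <= td <= ti / 2.0):
        notes.append("Derivative time outside Ti/8 .. Ti/2: previous tuning looks questionable.")
    return notes

print(tuning_first_look(gain=0.1, ti=2.0, td=1.5))
```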

Earlier this month I optimized control loops on an oil platform. A few of the loops were oscillating. One of
the oscillating loops, a gas pressure control loop on a separator, had a controller gain of 16! I facetiously
told the control engineer: "Well, there's your problem!" A value of 16 did seem like an abnormally large
controller gain, but I know there are many exceptions from normal in process control.
A closer look revealed the reason for the high controller gain. Even though the set point was set to the
normal operating pressure of 200 PSI, the pressure transmitter was calibrated to measure between 0
and 4000 PSI. So the operating pressure was at only 5% of the measurement range! A more
appropriate measurement range would have been 0 to 400 PSI, since the maximum design pressure for
the vessel was 380 PSI. In this case, the calibration range was ten times larger than it should have
been. Considering that the measurement span was ten times over-ranged, the controller gain had to be
ten times larger than normal to compensate. This means the effective gain of the controller was only
1.6, which is a reasonable value, especially for gas pressure control. In other words, the high controller
gain was not responsible for causing the control loop instability. It turned out that the control loop was
oscillating because of control valve stiction.
Based on these findings I recommended a replacement / recalibration of the pressure transmitter and
the subsequent reranging of the signals in the DCS. After doing this, the controller gain must be set to
1.6. I also recommended that the sticky control valve be repaired or replaced to fix the oscillations.
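The arithmetic behind the effective gain is worth making explicit; a short Python sketch (the function name is illustrative, the numbers are from the story above):

```python
def rescaled_controller_gain(kc, old_span, new_span):
    """Controller gain (in %/%) that preserves loop behavior when the
    transmitter span changes: the gain scales in proportion to the span."""
    return kc * new_span / old_span

# A gain of 16 on a 0-4000 PSI range is equivalent to this on a 0-400 PSI range:
print(rescaled_controller_gain(kc=16.0, old_span=4000.0, new_span=400.0))   # -> 1.6
```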
The high controller gain, cancelled out by the large measurement span, reminded me of the warning on a
passenger-side rear-view mirror: "Objects in the mirror are closer than they appear."

Objects in the mirror are closer than they appear.

Learn more about controller settings from the book Process Control for Practitioners.
Try it out for yourself using the OptiControls Loop Simulator.
Stay Tuned!
Jacques Smuts
Founder and Principal Consultant
OptiControls Inc.

Tools of the Tuner


July 8, 2013

A control loop tuner should be proficient in using a variety of tools to be effective in any tuning situation.
Customers often ask me how I tune loops, and my answer is that I use several tools depending on the
situation. Here is an overview of the tools I frequently use when analyzing and optimizing control loops.

Process Historian
Indispensable for much more than tuning, the process historian is one of my most-used tools. I use it to
check valve linearity, analyze process interactions, compare loop performance before and after tuning,
design feedforward controllers and characterizers, and to analyze the step-response of a process for
tuning the controller.
Some plants where I work have no OPC connection for retrieving real-time process data, or don't allow
installing data collection software to collect real-time process data. Then their process historian is my
only way to access plant data. I often analyze the step response for tuning purposes using the
historian's user interface, but if it is easy enough to export data to Excel, I will go that route and analyze
the data using tuning software on my laptop.
For fast-responding loops, I ask the system administrator to speed up the sampling rate, because the
default sampling interval on most historians is 30 to 60 seconds, which is too slow for analyzing fast
loops. A one-second sampling interval is required for flow and liquid pressure loops, five seconds for
most other loops, while 30 to 60 seconds serve only the slowest loops.
Some control loops I work on have processes that take hours to respond. In more than half of these
cases I can go back in history and find sufficiently large operator-induced step changes that I can use
for analysis and tuning. That saves me from having to do step tests and wait hours for the process to
respond. I always try to get at least three of these step changes, but I prefer to have more if the process
models change from one step-test to the next. This saves me a lot of time on slow-responding
processes because the complete response is already in the historian. This also minimizes the need for
disturbing the process with additional step tests.

Process Historian

Excel
When I analyze step-test data directly on the historian, I use a pre-built Excel spreadsheet to simplify
the data analysis and controller tuning calculations. I take down a few readings from the historian and
enter them into the spreadsheet, and it calculates the process characteristics and recommends tuning
settings. It supports self-regulating and integrating process types, and has Cohen-Coon, Ziegler-Nichols,
Lambda/IMC, Dead-Time, Surge-Tank, and Level-Averaging tuning rules. It also allows me
to speed up or slow down the loop response by calculating different tuning settings, based on my tuning
objective. Everything I need for my tuning calculations!

Excel Tuning Calculator

Loop Simulation Software


Loop Explorer is a simulation and tuning software tool that I developed to give me insights into how a
loop would respond to setpoint changes and disturbances. This is essential for obtaining optimal tuning
settings for the loop's control objective. The simulator is especially handy when I use the spreadsheet to
analyze the step response, since the spreadsheet does not have its own simulator. I also use the Loop
Explorer software in my training classes to demonstrate many concepts related to process
characteristics, PID controllers, and controller tuning.

Loop Explorer Software

Tuning Software
Of course I also use commercial tuning software. I recommend that every plant that does tuning
in-house invest in good tuning software and have it accessible in every control room. If I work at a plant
that already has high-end tuning software installed, I use their software. Otherwise I use the tuning
software I have on my laptop. High-end tuning software applications analyze process response and
automatically identify process characteristics. They provide access to different tuning methods, and
render simulations of loop response with the new tuning settings. They also have databases of controller
types, so one doesn't have to deal with manually converting tuning constants to suit a specific controller.
One very important point: Tuning software is just a tool and is no substitute for understanding process
dynamics, PID controllers, and the tuning process. If you can't tune control loops by manually
determining process characteristics from step-response data, and applying an appropriate tuning rule to
calculate tuning constants, you will likely not be successful with software either.

Operator Time Trends


When I do step testing, I mostly sit right next to the operator. Then we use his/her real-time trends for
the control loop to monitor the response. When sitting next to the operator I can point to certain
anomalies, and explain why I do certain tests. It is also a great time to get to know the operator, learn
about the process he controls, and become familiar with the culture of the company.

P&ID and Operator Graphics


Before analyzing and tuning a control loop, I ask the operator to explain the process to me. He/she will
often use their operator graphics to show me the streams into and out of the process, and the location of
valves, pumps, heat exchangers etc. Process engineers will often give me a set of P&IDs that I refer to.
On several occasions I discovered that other interacting or subordinate loops had to be tuned first, or
placed in manual, before I could attend to the loop of concern. I also find flow measurements, not being
used for control, that I can trend for supplemental information on the control valve's performance, or to
see if there might be a need for implementing cascade control.

Operator Graphic

Pen, Paper, and Calculator


And don't forget the traditional pen, paper and calculator. I find it handy and convenient to quickly draw
a diagram on paper, take notes, or to quickly run through calculations. I often transfer my written
notes to electronic format for inclusion in my report after the day's tuning, or while waiting for step-test
results on a slow loop.

Hand Calculations

Process Walk-Down
Whenever possible, I go out to the plant with an experienced operator or engineer to take a look at the
process, equipment, and physical location and condition of the control valves and instrumentation. One
time I was dealing with a vastly oversized nitrogen injection control valve that was used to control
pressure on a distillation column. The loop was completely unstable, regardless of any tuning settings
we tried. We tried making 0.1% steps in controller output with the controller in manual mode. Stepping the controller output upwards from 1.5% to 2.4%, the column pressure showed no response (no physical change in valve opening), but at 2.5% the pressure sharply decreased. When the operator and I went out to the valve and radioed back to the control room to repeat the test, we noticed that the valve position bumped by about 5% in response to the 0.1% change in controller output. We would never have known this if we had not gone to the valve. After the faulty positioner was replaced we could stabilize the loop. (However, control was still poor because the valve was grossly oversized.)

Process Walkdown

Literature

I have several really good books on process control, instrumentation, control valves, processes, PID
controllers, and tuning. Some of them are academically inclined, making them virtually useless for
tuning controllers in real plants. But some others are much more practical in nature. The latter is
obviously more suitable for practitioners. I track the sales of eight of these practical books on
amazon.com and the top seller, Process Control for Practitioners, has sold more copies over the last
two years than the next three books together.

Summary
Even though I am a big proponent of tuning software, it is not the only tool available for analyzing and
tuning control loops. It is important to consider the situation and use the most appropriate tool or technique for analyzing and optimizing control loops, even if it comes down to doing manual calculations on a piece of paper.

2. Process Characteristics
o Causes of Dead Time in a Control Loop
o Dead Time versus Time Constant
o Inverse Response
o Level Control Loops

Causes of Dead Time in a Control Loop


October 18, 2010

I always cover process characteristics as part of the process control training classes I present. It's necessary for understanding process behavior and controller tuning.
The picture below shows the typical response of a self-regulating process after a step change in controller output. The process dead time (td) follows the change in controller output (CO). The process time constant is indicated with the Greek symbol τ (tau).

Dynamic Process Response to a Step Test

During the discussion on process characteristics, I show students how dead time affects the minimum
settling time of a control loop. Even with the best possible tuning, a loop will still need a minimum of four
times the dead time to settle out after a set point change or a disturbance. (Some people say a loop
needs the equivalent of 10 dead times to settle, but appropriate tuning can normally do better than that if
speed is the objective.)

Loop Response after a Disturbance

During a training class on controller tuning that I recently presented, one of the students pondered the
relationship between dead time and minimum settling time for a while and then asked me how one can
decrease the dead time of a process. I answered that the length of dead time is mostly determined by
the process design, but if you consider all the contributors to dead time, there might be some of them
you can reduce or eliminate.
Here is a list of contributors to dead time:

Actual process transportation lag. This is the time it takes your control action to progress through the process equipment and reach the sensor. There is seldom something you can do about transportation lag, but in some cases you may be able to move the sensor closer to the control action to shorten the time delay.

Small lags in the control loop. Although these are technically not true dead time, small lags increase the apparent dead time of a loop and have the same effect on tuning and settling time as true dead time. Small lags creep in all along the control loop, and can be a significant contributor to overall dead time:
o Thermowell thickness. Use the thinnest allowable thermowell for the fastest response.
o Thermocouple or RTD response time. Use fast-responding devices to reduce dead time. For example, grounded thermocouples respond significantly faster than ungrounded ones.
o Tightness of fit of the thermocouple or RTD. A less-than-tight fit of a temperature sensor inside a thermowell can add an enormous lag to the control loop. Consider using heat transfer compound to improve the temperature response if conditions allow.
o Instrument dampening or filtering. Unless you have a good reason for using instrument dampening or filtering, turn this feature off or set it to zero.
o Pneumatic tubing. 500 feet of tubing has a lag of about 4 seconds. This is very long, considering that a 4-20 mA signal will have no delay along the same length. There is very little reason to still have long runs of pneumatic tubing in plants today.
o Old positioners. Positioners used to be so slow that they were not recommended for use on valves in flow control loops. Nowadays they respond very fast. If you have old positioners on a loop you want to tune faster, consider replacing them with new, fast positioners.
o Slew rate of the valve. A control valve can take considerable time to slew to a new position. The larger the position change, the longer it takes to get there. Installing high-volume positioners can dramatically shorten the slew time on slow valves.
o Velocity limiting of controller output. Some controllers are set up to limit the rate at which their output changes. This may be necessary to protect process equipment, but consider setting this limit as fast as the equipment allows.

Controller scan interval. The periodic-execution nature of a digital controller adds an average dead time of one half of the scan interval to the dead time of a loop.

Analyzer sampling time. Similar to the controller scan interval, but normally much longer in duration. If an analyzer samples the process every 5 minutes, the periodic sampling adds an average of 2.5 minutes to the loop dead time.

Some of these contributors to dead time may seem small or even trivial, but if you consider that 5 seconds of additional dead time increases the minimum loop settling time by 20 seconds or more, the value of finding and eliminating the small lags in a loop becomes more obvious.
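
To make the arithmetic concrete, here is a minimal Python sketch that totals a set of hypothetical dead-time contributors (the numbers are illustrative only, not taken from any particular loop) and applies the four-dead-times rule of thumb mentioned above:

# Hypothetical contributors to the apparent dead time of one loop, in seconds
contributors = {
    "transportation lag": 8.0,
    "thermowell and sensor lag": 3.0,
    "transmitter damping/filtering": 2.0,
    "positioner and valve slew": 2.5,
    "controller scan (half of a 1 s interval)": 0.5,
}

apparent_dead_time = sum(contributors.values())

# Rule of thumb from this article: even a well-tuned loop needs roughly
# four dead times to settle after a set point change or disturbance.
minimum_settling_time = 4.0 * apparent_dead_time

print(f"Apparent dead time: {apparent_dead_time:.1f} s")                      # 16.0 s
print(f"Estimated minimum settling time: {minimum_settling_time:.1f} s")      # 64.0 s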
Contact me if you have any questions.

Dead Time versus Time Constant


June 21, 2011

The dynamic response of self-regulating processes can be described reasonably accurately with a simple model consisting of process gain, dead time, and lag (time constant). The process gain describes how much the process will respond to a change in controller output, while the dead time and time constant describe how quickly the process will respond.
Although the dead time and time constant both seem to describe the same thing, there are several fundamental differences between how dead time and time constant affect a control loop. The first difference is that dead time describes how long it takes before a process begins to respond to a change in controller output, while the time constant describes how fast the process responds once it has begun moving.
Measuring the Dead Time and Time Constant of a Process
Let's begin with the measurement of the dead time and time constant of a self-regulating process. Typically, one will place the controller in manual control mode, wait for the process variable to settle down, and then make a step change of a few percent in the controller output. At first the process variable does nothing (dead time) and then it begins changing (time constant) until it finally settles out at a new level.

Measuring Dead Time and Time Constant

To measure the dead time and time constant, draw a horizontal line at the same level as the original process variable. We'll call this the baseline. Then find the maximum vertical slope of the process variable response curve. Draw a line tangential to the maximum slope all the way down to cross the baseline. We'll call this crossing the intersection.
- The process dead time is measured along the time axis as the time spanned between the step change in controller output and the intersection.
Next, measure the total change in process variable. Then find the point on the process response curve where the process variable has changed by 0.63 of the total change in process variable. We'll call this point P63.
- The process time constant is measured along the time axis as the time spanned between the intersection (described previously) and P63.
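
If the step-test data is logged electronically, the tangent-and-63% procedure above can be approximated numerically. The sketch below is a simplified illustration only; it assumes clean, monotonic, noise-free data sampled against a time array, and real plant data usually needs filtering before such an estimate is trustworthy.

import numpy as np

def estimate_dead_time_and_tau(t, pv, t_step, pv_baseline):
    """Tangent / 63% estimate of dead time and time constant from a step test.
    t, pv       : arrays of time and process variable
    t_step      : time at which the controller output was stepped
    pv_baseline : PV value before the step"""
    slope = np.gradient(pv, t)
    i_max = np.argmax(np.abs(slope))                    # point of maximum slope
    # The tangent through the point of maximum slope crosses the baseline at:
    t_intersect = t[i_max] + (pv_baseline - pv[i_max]) / slope[i_max]
    dead_time = t_intersect - t_step
    # Time constant: from the intersection to 63% of the total PV change
    frac = (pv - pv_baseline) / (pv[-1] - pv_baseline)
    t_63 = t[np.argmax(frac >= 0.632)]
    tau = t_63 - t_intersect
    return dead_time, tau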
Dead Time versus Time Constant
We can draw a chart with a continuum from dead time through time constant (see figure below). Processes with dynamics consisting of pure dead time will be on the left and pure lag (time constant) on the right. In the middle, the process dead time will equal its time constant.
We'll find that flow loops and liquid pressure loops fall just about in the middle of the continuum, because their dead time and time constant are almost equal. Gas pressure and temperature loops will be located more toward the right: they are lag (time constant) dominant. Serpentine channels in water treatment plants and conveyors with downstream mass meters will appear on the left side: they are dead-time dominant.
Level loops should actually be treated differently, but they can be approximated on the continuum by replacing the time constant with their residence time (the time they would take to fill or empty at full flow rate). Most level loops will be located far to the right, having relatively short dead times.
The ratio of dead time to time constant affects the controller modes and tuning rules we use, the
controllability of the process, and the minimum possible loop settling time.

A continuum from pure Dead Time to pure Lag

Controller Modes
The derivative control mode works well where the process variable continues to move in the same direction for some time, i.e. on lag-dominant processes. Derivative control does not work well on processes where the process variable changes sporadically, typically processes with relatively short time constants, located in the middle and to the left of the continuum.
Applicability of Tuning Rules
Most tuning rules will work on lag-dominant processes. However, the Ziegler-Nichols rules have only a narrow range of applicability. Lambda / IMC tuning rules apply to a broader spectrum of processes, while Cohen-Coon has the widest coverage. The Dead-Time tuning rule applies to processes on the left, as its name implies.
Controllability
Lag-dominant loops are easier to control than dead-time-dominant loops. Operators find that lag-dominant processes respond much more intuitively than dead-time-dominant processes and are easier to control in manual mode.
Loop Settling Time
When tuning a loop for the shortest possible settling time, one finds that there is a minimum limit on
settling time. If you tune the controller any tighter, the loop will begin oscillating. The minimum settling
time depends mostly on the amount of dead time in a control loop, and will be between two and four
times the length of the dead time. The ratio of time constant to dead time determines where the
minimum settling time falls between two and four times the process dead time.
Fascinating stuff, right? To learn more, consider getting an in-house training workshop for you and your
colleagues.
Let me know if you have questions, and feel free to leave a comment.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 2. Process Characteristics

4 Responses to Dead Time versus Time Constant

Tejaswinee:

October 9, 2012 at 8:01 am


Sir, you explained the method for self-regulating processes. How do we calculate the delay, tau, and Ts for processes which are not self-regulating?

Jacques:
October 9, 2012 at 9:01 am
Tejaswinee, please see this article on level controller tuning for determining dead time on non-self-regulating (integrating) processes. For integrating processes, process time constants contribute to the apparent dead time, so we don't have to consider them independently. And the estimated minimum closed-loop settling time will be four times as long as the apparent dead time.
- Jacques

Nay:
May 21, 2013 at 12:26 am
Hi! Please help me with dead time also. In my case, a pressure control PID (reverse acting): at first the PV is higher than the SP (52), so the CV is 100% open. But eventually the PV goes down and passes the SP (52), for example PV = 51.5 or 51, yet the PID does not start closing and takes a long time (5-15 min) to start acting. Currently the PID parameters are Kp 6.5, Ki 0.3 and Kd 0. Kindly advise me. Thanks in advance

Jacques:
May 21, 2013 at 8:58 am
Nay, you have to do step-tests and use the process's dynamic characteristics to calculate appropriate tuning settings.
See this write-up for more details: Cohen-Coon Tuning Rules.

Inverse Response
August 5, 2013

When you push down on your car's accelerator, you expect the car to speed up, right? What if it slows down? Or even worse: you lift your foot off the accelerator and your car speeds up. And the more you lift your foot, the more the car speeds up. These are almost unthinkable and certainly scary situations, yet they occur every day in thousands of boilers and some other processes around the world. The phenomenon is called inverse response. One of the most common occurrences of inverse response is found in the control of boiler drum level.

Boiler Drum Level Control


In a boiler, water is converted to steam. Steam and water separate in the boiler drum, with the steam then leaving through a pipe at the top of the drum. It is important to keep the level of water in the drum away from this pipe, or water will exit with the steam and damage downstream equipment. Even more important is to always have some water in the drum: when the boiler runs dry there is no water to cool it, and this will result in severe damage to the boiler. So the water level in the drum is normally maintained close to its centerline.

The drum level is controlled by adding water to the boiler, called feedwater. A closed-loop controller looks at the drum level and, if it is lower than the set point, opens the feedwater control valve to increase the feedwater flow rate, and vice versa (Figure 1). This brings us to inverse response.

Figure 1. Boiler drum level control diagram.

Inverse Response
The temperature of the feedwater flowing into the drum is normally below boiling point. When we add
more of this colder water to the boiler, some of the steam bubbles in the boiler condense. This causes
the drum level to decrease and the effect is called inverse response. However, the effect is only
temporary. After a while, the higher rate of feedwater flow overcomes the lost volume and the drum level
rises (Figure 2). The opposite is also true: when we decrease the flow rate of the colder feedwater,
steam production increases, and the additional steam bubbles cause the drum level to rise. But after a
while, the drum level begins to fall, as expected.

Figure 2. With inverse response, the process first responds in the wrong (inverse) direction, and then in
the expected direction.
I have not seen many other processes exhibiting inverse response. Two that come to mind are
distillation column bottom level control (very similar to drum level control) and crystal size control
in certain crystallizers (see case study below).

Tuning Implications
Processes exhibiting inverse response can easily cause control loop stability problems. Using derivative
control is questionable from a stability perspective, and certainly not useful. Using a high controller gain
is not possible since it will chase the inversely responding process and create a snowball (runaway)
effect. But when you use a low controller gain on an integrating process, you also have to use a long

integral time (low integral gain). So you end up with a very slow-responding control loop, and any
attempt to speed it up significantly lowers its stability. This is why three-element control is the strategy of
choice for drum level control.
When you do step-testing on a process with an inverse response and determine the process
characteristics to tune the controller, you should treat the entire duration of the inverse response as
dead time (Figure 3). Then you can apply your usual level controller tuning rules using this pseudo dead
time.

Figure 3. Dead-time (td) measurement on an inversely responding process.

Case Study
Below is an example of dealing with inverse response in a different process.
A few weeks ago I was optimizing control loops on an ammonium sulfate crystallizer. The crystal size
was very important, but it could not be measured directly. Instead the agitator motor amps were used as
an indication of crystal size. Control was done by changing the feed rate of saturated liquid to the
crystallizer, thereby changing the residence time of slurry in the crystallizer, and subsequently changing
the crystal size. However, changes in feed rate also changed the crystallizer level, which turned out to cause a profound inverse response in the agitator amps because of the mechanical design of this particular crystallizer. The plant personnel were unaware of this, but the inverse response was revealed when we did step-testing.
The control was rather poor before we got started. There were large oscillations in motor amps, caused
by large variations in crystal size. After step-testing, we calculated the new controller settings, recorded
a week of data as a baseline (because we increased the process historian's sampling resolution), and
then entered the new settings. A time-trend of the control performance before and after tuning is shown
in Figure 4.

Figure 4. Improvement in control of an inversely-responding process through proper tuning. New tuning
settings were entered during the morning of 7/9. Blue = motor amps; Green = setpoint.

Needless to say, the production manager was very happy with the control improvement and the plant is
now selling the ammonium sulfate crystals at a higher price because of the improvement in quality.

Level Control Loops


April 15, 2010

Level loops are very common in industry. In fact, around 20% of control loops in the process industry
(refining, petrochemical, power, paper & pulp, steel, etc.) are level loops, second in number only to flow
loops. Consequently, I have optimized a large number of level control loops over the last two decades.
Although the processes are different and have their own specific problems, a large percentage of the
level loops I looked at had one or both of two very common problems: the controller had too much
integral action, or the controller gain was too low.

Two Integrators

Integrating Process

A level-controlled process is also called an integrating process. This means that if there is an imbalance
between what goes in and what comes out, the level will continue to rise or fall. PID controllers also
have an integral term, and this is useful for getting a process all the way to its set point, something that
proportional control alone cannot do.
However, the combination of the two integrators (the level and the controller's integral term) in a level control loop often causes problems. Problems range from overshooting the set point, to oscillations, to downright instability. And the integral action of the controller is to blame. (I suppose we could blame the process, but that problem is more difficult to solve than tuning.)
Careful with that Integral!
In addition to having one extra integrator in the loop, level control loops are normally quite slow to respond. And a slow-responding process requires a long integral time (or low integral gain, depending on your controller's integral unit). If you have a long, slow oscillation in a level control loop, compare the integral time to the period of the oscillation. If the integral time is shorter than the period of the oscillation, that may be your problem.

Too Little Controller Gain

Many level-type processes have a very slow rate of response after a change in controller output. You
might change the output by 10% and then you have to wait 15 minutes to see the level move by a few
percent. To compensate for this slow rate of response, a high controller gain should be used.
Sometimes, a seemingly ridiculous controller gain, like 25 or more, is just what the level loop needs (but
do read on about loop response objectives below). Most people would consider that controller gain to be
just too high, and will dial down the gain to a more reasonable value of 3 or maybe 5. Doing this vastly
changes the ratio between proportional and integral in the level control loop, making it less stable. If
you lower the controller gain in a level loop, you must at the same time lengthen the integral time
proportionally to maintain the ratio between integral and proportional, and hence maintain loop stability.
This is not necessary on self-regulating loops like flow and temperature, but it is required on level
control loops.
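
As a quick sketch of that adjustment (the function name and numbers are mine, purely to illustrate the rule of thumb of keeping the product of controller gain and integral time roughly constant):

def detune_level_controller(kc_old, ti_old, kc_new):
    """Lengthen the integral time in proportion when the controller gain on an
    integrating (level) loop is reduced, so that Kc * Ti stays the same."""
    return ti_old * (kc_old / kc_new)

# Example: lowering the gain from 25 to 5 means the integral time must be 5x longer
print(detune_level_controller(kc_old=25.0, ti_old=2.0, kc_new=5.0))   # 10.0 minutes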

Control Objectives
Some people apply the Ziegler-Nichols tuning rules to level control loops. This gives the loop a very fast
response, meaning a quick recovery after any deviation in the level. This may be the desirable
response, but the control objective may very likely be different.
Slow Response / Surge Tanks
In many cases a slow response is needed in a level control loop, like for level control in a surge tank. For these applications you can apply level-averaging control, or calculate the appropriate controller gain by simply using the desired high and low level limits in the tank. For example, if the level should always remain between 20% and 80%, a Proportional Band (PB) of 60% (i.e. 80% - 20%) is needed. So set your controller gain to 100/PB (= 1.67 in this case) and turn off the integral action. Then, with the controller in manual, bring the level and the setpoint to 50%, momentarily change the controller output to 50%, put the controller in auto, and voila: you have a tuned and stable surge tank level loop. The operators should be trained that the level will not always be at setpoint, but it will always remain between the high and low limits.
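
The level-limit calculation above is easily scripted; a minimal sketch (variable names are mine):

def surge_tank_gain(level_low_pct, level_high_pct):
    """P-only controller gain for surge-tank (averaging) level control,
    calculated from the allowed high and low level limits."""
    proportional_band = level_high_pct - level_low_pct   # e.g. 80 - 20 = 60%
    return 100.0 / proportional_band

print(surge_tank_gain(20, 80))   # ~1.67, used with the integral action turned off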
Fast Response
A fast response may be needed, for example when controlling the level in a high-pressure gas separator
on an oil platform. Using the Ziegler-Nichols tuning rules will give a very fast response, but will result in
a loop with very little tolerance for changes in process characteristics, and a low tolerance for any
measurement errors you might have made. In short, the Ziegler-Nichols tuning rules set a loop up with insufficient robustness.
To improve the robustness on a level loop tuned with the Ziegler-Nichols tuning rules, you can use 0.5 of
the calculated controller gain, and 2 times the calculated integral time (or 0.5 of the integral gain
depending on which integral unit the controller uses). Follow this link for more detail on tuning level
loops for a fast response.
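
In code form, that robustness adjustment is a one-liner (a sketch; kc_zn and ti_zn would come from whichever Ziegler-Nichols calculation you used):

def add_robustness_to_zn(kc_zn, ti_zn):
    """Detune Ziegler-Nichols settings for a more robust fast level loop:
    halve the controller gain and double the integral time."""
    return 0.5 * kc_zn, 2.0 * ti_zn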

3. PID Controllers
o Bumpless Transfer and Bumpless Tuning
o Derivative Control
o Gap Control
o PID Controller Algorithms
o PID Controllers Explained
o Settings in the Controller were Closer than they Appeared

Unraveling Controller Algorithms

Bumpless Transfer and Bumpless Tuning


February 1, 2010

Bumpless Transfer
Most control practitioners have heard of bumpless transfer, a feature available in virtually all PID controllers. It prevents a sudden jump (bump) in controller output when the controller's mode is switched from manual to auto.
Bumpless transfer is done by the controller executing its control algorithm to calculate a pseudo controller output, comparing this to the current manual controller output, and subtracting the difference from the integral term so that the calculated output matches the manual output. Bumpless transfer can also be achieved by using a velocity algorithm instead of the more intuitive positional algorithm for calculating the controller output.
Both of these methods are pre-programmed in the controller code, an area normally available only to the controller manufacturer. Luckily, most controllers nowadays come standard with bumpless transfer.
Bumpless Tuning
Similar to bumpless transfer is the concept of bumpless tuning, a term coined by Harold Wade. In this case, without a bumpless tuning feature, the bump occurs due to changing the controller gain or derivative settings during controller tuning.
Ignoring the integral and derivative control actions for now, the controller output (CO) is simply:
CO(old) = Kc(old) * E
CO(new) = Kc(new) * E
If E is not exactly zero when the change in Kc is made, the controller output will jump. Similarly, changing the controller's derivative setting (Td) can create a jump in controller output if the rate of change of the error is not zero at the moment Td is changed.
It is because of the bump in controller output caused by changing tuning settings that it is good practice to place a controller in manual momentarily while making tuning changes. Because most controllers have bumpless transfer, this eliminates the bump when switching the controller back to auto after making the tuning changes in manual mode. However, this becomes a problem if the tuning settings are changed programmatically, as in the case of gain scheduling.
Bumpless tuning can be achieved without the need to place the controller in manual mode by
calculating how much the controller output will jump due to the new proportional and derivative settings,
and subtracting an equal quantity from the integral term, so that the sum of the three terms (the
controller output) remains unchanged. Similar to bumpless transfer, bumpless tuning can also be
achieved by using a velocity algorithm.
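
The idea can be illustrated with a small sketch of a simplified positional algorithm (hypothetical variable names; real controllers implement this internally and also account for the derivative term):

def retune_bumplessly(integral_term, error, kc_old, kc_new):
    """Adjust the stored integral term so the controller output does not jump
    when the controller gain is changed.
    Simplified positional form:  CO = Kc * error + integral_term"""
    output_jump = (kc_new - kc_old) * error   # how much CO would bump
    return integral_term - output_jump        # absorb the bump in the integral term

# Example: error = 2%, gain changed from 1.0 to 3.0, integral term = 10%
# Before: CO = 1.0*2 + 10 = 12%.  After: CO = 3.0*2 + (10 - 4) = 12%  -> no bump.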
The bumpless tuning feature is pre-programmed in the controller device and cannot be added
afterward by the user. Far fewer controllers have this feature. For example, the modern C200 controllers
from Honeywell do not have Bumpless Tuning, while DeltaV controllers from Emerson do.

Derivative Control
May 3, 2010

When doing on-site services or training, I am often asked: "When should one use the derivative control mode of a PID controller?" Although there is no black-and-white division between when to use it or not, I have a few guidelines that should help your decision. But let's take a step back first and review the derivative control mode and its role in a PID controller.

Figure 1. PID Controller

What is Derivative?
You can think of derivative control as predicting the future error, based on the current slope of the error. How far into the future? That's what the derivative time (Td) is for. It is the prediction horizon. (Derivative control actually uses extrapolation, not prediction. But hey, we all understand how prediction works, so we'll just go with that.)
Once the derivative mode has predicted the future error, it adds an additional control action of Controller Gain * Future Error.
For example, if the error changes at a rate of 2% per minute, and the derivative time Td = 3 minutes, the predicted error is 6%. If the controller gain Kc = 0.2, then the derivative control mode will add an additional 0.2 * 6% = 1.2% to the controller output.
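
The same arithmetic, written out as a couple of lines of Python just to mirror the example above:

error_slope = 2.0   # error changing at 2% per minute
td = 3.0            # derivative time, minutes
kc = 0.2            # controller gain

predicted_error = error_slope * td        # 6% predicted error
derivative_action = kc * predicted_error  # 1.2% added to the controller output
print(derivative_action)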
You Don't Absolutely Need Derivative
The first point to consider when thinking about using derivative is that a PID control loop will work just
fine without the derivative control mode. In fact, the overwhelming majority of control loops in industry
use only the proportional and integral control modes. Proportional gives the control loop an immediate
response to an error, and the integral mode eliminates the error in the longer term. Hence no
derivative is needed.
Why Use Derivative
The derivative control mode gives a controller additional control action when the error changes consistently. It also makes the loop more stable (up to a point), which allows using a higher controller gain and a faster integral (shorter integral time or higher integral gain).
These have the effect of reducing the maximum deviation of the process variable from set point if the process receives an external disturbance. For a typical temperature control loop, you can expect a 20% reduction in the maximum deviation. Figure 2 shows how a loop with derivative (PID) control recovers more quickly from a disturbance, with less deviation, than a loop with P or PI control.

Figure 2. P versus PI versus PID control.

Obviously you don't want to use derivative to speed up a loop if the control objective is a slow response, like a surge tank, for example. But for loops where fast response is the objective, derivative could help. Do read on, though, for information on when not to use derivative.
Noisy PV
Using the derivative control mode is a bad idea when the process variable (PV) has a lot of noise on it.
Noise is small, random, rapid changes in the PV, and consequently rapid changes in the error.
Because the derivative mode extrapolates the current slope of the error, it is highly affected by noise
(Figure 3). You could try to filter the PV so you can use derivative, as long as your filter time constant is
shorter than 1/5 of your derivative time.

Figure 3. Effect of Noise on Derivative.

Process Dynamics
On dead-time dominant processes, PID control does not always work better than PI control (it depends
on which tuning method you use). If the time constant (tau) is equal to or longer than the dead time (td),
like in Figure 4, PID control easily outperforms PI control.

Figure 4. Process Dynamics.

Temperature and Level Loops


Temperature control loops normally have smooth measurements and long time constants. The process
variable of a temperature loop tends to move in the same direction for a long time, so its slope can be
used for predicting the future error. So temperature loops are ideal candidates for using derivative control, if needed. Level measurements can be very noisy on boiling liquids or gas separation processes. However, if the level measurement is smooth, level control loops also lend themselves very well to using derivative control (except for surge tanks and averaging level control, where you don't need the speed).
Flow Control Loops
Flow control loops tend to have noisy PVs (depending on the flow measurement technology used). They
also tend to have short time constants. And they normally act quite fast already, so speed is not an
issue. These factors all make flow control loops poor candidates for using derivative control.
Pressure Control Loops
Pressure control loops come in two flavors: liquid and gas. Liquid pressure behaves very much like flow
loops, so derivative should not be used. Gas pressure loops behave more like temperature loops (some
even behave like level loops / integrating processes), making them good candidates for using derivative
control.
Final Words
Derivative control adds another dimension of complexity to control loops. It does have its benefits, but
only in special cases. If a loop does not absolutely need derivative control, dont bother with it. However,
if you have a lag-dominant loop with a smooth process variable that needs every bit of speed it can get,
go for the derivative.

To learn more about controllers and tuning, contact OptiControls for on-site process control training.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 3. PID Controllers

Tags: Controller Tuning, Derivative Control

3 Responses to Derivative Control

Garcier LaCamppiello:
June 14, 2012 at 4:17 pm
I used this as a refresher-review and found it very helpful. You have done a fine job at
describing the derivative control mode and I recommend it to all readers. I am an
Instrumentation Technician formerly with Alyeska Pipeline Service Co., Alaska, retired but still
functional.
Thank you,

abdul wahab:
February 7, 2013 at 1:27 pm
nice, i am very happy to learn this

Sivasankar:
April 17, 2013 at 1:11 pm
Really useful... since I'm an instrumentation engineer with a long-time doubt about the usage of derivative control. I searched a lot and didn't find convincing answers, but here I really got what I wanted. Thanks a lot.

Gap Control
July 18, 2011

Some control loops have two seemingly conflicting objectives: keeping the process variable under control, but also minimizing controller output movement. Although such a loop will have a set point, it is more important to keep its process variable within predefined bounds than to keep it exactly at set point.
A typical application of gap control is averaging level control. (Averaging control is somewhat like surge tank control, but the process variable is controlled around its set point. True surge tank level control rarely controls around its set point.)
Controller manufacturers have designed a modification of the standard PID control algorithm for use on processes with these conflicting objectives. This modification is called gap control, and it works on the principle of two user-definable control regions, one for each of the two control objectives.
The first region is far from set point (outside the gap) and requires strong control action to turn the process around and bring it back to set point. The normal controller settings are used outside the gap.
The second region is close to set point (inside the gap), within which the controller detunes itself based on a configurable gain multiplier M (between 0.0 and 1.0). The detuning helps to minimize controller output movements.

Gap Control

Tuning a Gap Controller


To use gap control effectively, you should set the size of the gap around the set point according to the
typical variation of the process variable so that the process variable does not frequently and
unnecessarily venture outside the gap. Tune the controller for fast response outside the gap to provide
quick recovery from disturbances. Use a gain multiplier (M) of 0.5 or less to minimize the controller
output movement inside the gap.
For integrating processes divide the calculated integral time (Ti) by M (i.e. make the integral time longer
in proportion to decreasing the controller gain). This is a requirement only for integrating processes and
is done to ensure a stable integral term when the controller gain is reduced inside the gap. Do not use a
gain multiplier of zero when controlling integrating processes because this dead band will cause the
process variable to continuously cycle through the gap.
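
The detuning logic can be sketched roughly as follows. This is a simplified illustration of the principle, not any vendor's actual gap-control block, and the variable names are mine:

def gap_controller_settings(pv, sp, gap, kc, ti, m, integrating=False):
    """Return the effective gain and integral time for the current instant.
    gap : half-width of the gap around set point, in PV units
    m   : gain multiplier inside the gap (0 < m <= 1)"""
    if abs(pv - sp) <= gap:                      # inside the gap: detune
        kc_eff = m * kc
        ti_eff = ti / m if integrating else ti   # lengthen Ti on integrating processes
    else:                                        # outside the gap: normal tuning
        kc_eff, ti_eff = kc, ti
    return kc_eff, ti_eff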
Gap control should not be used in control loops whose objective is to keep the process variable as close to set point as possible. Use regular PI or PID control for those loops.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 3. PID Controllers

2 Responses to Gap Control

JC:
June 6, 2014 at 2:39 pm
Jacques, you recommend using a gain multiplier of 0.5 or less. Is there any chance it will destabilize the loop as the PV crosses the gap limit? If so, is there a higher risk of destabilizing the control loop with a small Kgap? I want to use a gap controller on a reactor and the normal OP range is between 20 and 25%. Currently, the feedforward control is doing a decent job handling disturbances, but I want to minimize the CO movement during normal operation. I set the gain multiplier to 0.7 and I do see improvement. I want to further reduce the CO movement but am concerned a sudden change of K will destabilize the loop or cause windup. Thank you in advance for your feedback.

Jacques:
June 8, 2014 at 10:49 am
JC: It depends a lot on the control system's implementation of gap control. See this blog post on bumpless tuning. If the control system causes a bump in controller output when the PV crosses over the edge of the gap, it can cause major stability problems.
Your reference to OP makes me believe you have a Honeywell DCS. Take a look for the setting called Legacy Gap. When turned on, this setting changes the behavior to bumpless.

The setting is not always visible on the controller faceplate (Loop Tune tab) so you may have
to use Control Builder to access it.

PID Controller Algorithms


March 30, 2010

Controller manufacturers arrange the Proportional, Integral and Derivative modes into three different
controller algorithms or controller structures. These are called Interactive, Noninteractive, and Parallel
algorithms. Some controller manufacturers allow you to choose between different controller algorithms
as a configuration option in the controller software.

Interactive Algorithm

Interactive Controller Algorithm

The oldest controller algorithm is called the Series, Classical, Real, or Interactive algorithm. The original pneumatic and electronic controllers had this algorithm, and it is still found in many controllers today. The Ziegler-Nichols PID tuning rules were developed for this controller algorithm.

Noninteractive Algorithm

Noninteractive Controller Algorithm

The Noninteractive algorithm is also called the Ideal, Standard or ISA algorithm. The Cohen-Coon and
Lambda PID tuning rules were designed for this algorithm.
Note: If no derivative is used (i.e. Td = 0), the interactive and noninteractive controller algorithms are
identical.

Parallel Algorithm

Parallel Controller Algorithm

Some academic textbooks discuss the parallel form of the PID controller, but it is also used in some DCSs and PLCs. This algorithm is simple to understand, but not intuitive to tune. The reason is that it has no controller gain (affecting all three control modes); instead it has a proportional gain (affecting only the proportional mode). Adjusting the proportional gain should therefore be supplemented by adjusting the integral and derivative settings at the same time. Try not to use this controller algorithm if possible (in some DCSs it is an option, so select the alternative).

Significance of Different Algorithms


The biggest difference between the controller algorithms is that the Parallel controller has a true
Proportional Gain (Kp), while the other two algorithms have a Controller Gain (Kc). Controller Gain
affects all three modes (Proportional, Integral and Derivative) of the Series and Ideal controllers, while
Proportional Gain affects only the Proportional mode of a Parallel controller.
This difference has a major impact on the tuning of the controllers. All the popular tuning rules (Ziegler-Nichols, Cohen-Coon, Lambda, and others) assume the controller does not have a parallel structure and therefore has a Controller Gain. To tune a Parallel controller using any of these rules, the integral time has to be divided by, and the derivative time multiplied by, the calculated Controller Gain.
The second difference between the controller algorithms is the interaction between the Integral and Derivative modes of the Series (Interactive) controller. This, of course, is only of significance if the Derivative mode is used. In most PID controller applications, the Derivative mode is not used. Formulas have been developed for converting tuning settings between the Ideal and Series controller algorithms.
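
For reference, here is a sketch of the commonly published conversion formulas (series-to-ideal, and ideal settings adapted for a parallel controller that still uses integral and derivative times). Treat these as assumptions to verify against your controller's documentation, since vendors differ in the details:

def series_to_ideal(kc, ti, td):
    """Convert Series (Interactive) tuning settings to the Ideal (Noninteractive) form."""
    f = 1.0 + td / ti
    return kc * f, ti * f, td / f

def ideal_to_parallel(kc, ti, td):
    """Adapt Ideal settings for a Parallel controller: divide the integral time by
    the controller gain and multiply the derivative time by it, as described above."""
    return kc, ti / kc, td * kc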

Units of Measure of Tuning Settings


Another very important difference between controllers lies in the units of measure of the tuning settings.
There are three differences.
1. Most controller types (e.g. Honeywell Experion, Emerson DeltaV, ABB Bailey) use Controller Gain,
while some (e.g. Foxboro I/A, Yokogawa CS3000) use Proportional Band (PB). The conversion between
the two is easy once you know which one is being used: PB = 100% / Kc.
2. Many controllers (e.g. Siemens APACS) use minutes as the unit for Integral and Derivative modes,
but some controllers (e.g. Emerson DeltaV) use seconds.
3. Some controllers (e.g. ABB Mod 300) use Time for their Integral unit, while others (e.g. Allen-Bradley
SLC500) use Repeats/Time. These are reciprocals of each other.
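
These unit differences are easy to get wrong in the field. A few generic helper conversions (a sketch based only on the relationships stated above, not on any vendor's definitions):

def kc_from_pb(pb_percent):
    """Proportional Band (%) to Controller Gain; the inverse uses the same formula."""
    return 100.0 / pb_percent

def minutes_to_seconds(t_minutes):
    """Integral or derivative time in minutes to seconds."""
    return 60.0 * t_minutes

def time_to_repeats(integral_time):
    """Integral time to integral gain in repeats per (the same) time unit."""
    return 1.0 / integral_time          # e.g. Ti = 0.5 min -> 2 repeats/minute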
The first controller I ever tried to tune used Proportional Band, but at the time, I had never heard of this
concept. Needless to say, when I entered my calculated Kc of 1.2 into its PB setting, the loop became
wildly unstable. It did not take me long to realize that I should read up on PID controllers before trying to
tune one again.

Other Differences
Beyond the differences mentioned above, controllers also differ in the way the change in controller output is calculated (positional versus velocity algorithms), in the way the Proportional and Derivative modes act on set point changes, in the way the Derivative mode is limited/filtered, as well as in an interesting array of other minor differences. These differences are normally subtle, and should not affect your tuning.

When tuning controllers, always find out what structure the controller has and what units it is using.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 3. PID Controllers

4 Responses to PID Controller Algorithms

Nitin Mehta:
October 10, 2012 at 12:09 am
Nice article, thank you Jacques!

Tejaswinee:
October 10, 2012 at 4:06 am
Excellent

Marco:
February 12, 2014 at 10:08 am
If I don't know which algorithm is implemented and the manufacturer of the controller won't tell me, is there a simple way to identify it?

Jacques:
February 16, 2014 at 10:46 pm
Marco, start with a PI controller with PV = SP = CO = 0%, mode = manual. Use a PV range of
100%. Use a gain of 1 and an integral time of 1 minute. Change mode to auto and change the
SP to 10% while keeping the PV at 0%. The controller output should jump by 10% and then
ramp by another 10% in the 1 minute you have as integral time. If that worked, go to manual
and reset SP and CO back to 0. Change the controller gain to 2. Change the mode to auto
and the SP to 10%. The controller output will now jump to 20%, but then notice the ensuing
ramp over 1 minute. If the CO ramps by 10% you have a parallel algorithm, and you don't need to read any further. If the CO ramps by 20% you have either an interactive or a
noninteractive algorithm. If you plan to use PI control, this is all you need to do because
interacting and noninteracting algorithms behave the same under PI control.
If you plan on using PID control, to find out if the algorithm is interacting or noninteracting, go
to manual and reset SP and CO to 0. Set the derivative time to 0.25 minutes (gain is still 2
and integral time is still 1 minute). Change the mode to auto and the SP to 10%. Ignore the
CO bump, look at the CO value after 1 minute. If the CO is at 45% you have an interacting
algorithm. If the CO is at 40% you have a noninteracting algorithm.

PID Controllers Explained


March 7, 2011

PID controllers are named after the Proportional, Integral and Derivative control modes they have. They
are used in most automatic process control applications in industry. PID controllers can be used to
regulate flow, temperature, pressure, level, and many other industrial process variables. This blog
reviews the design of PID controllers and explains the P, I and D control modes used in them.

Manual Control
Without automatic controllers, all regulation tasks have to be done manually. For example, to keep the temperature of water discharged from an industrial gas-fired heater constant, an operator has to watch a temperature gauge and adjust a fuel gas valve accordingly (Figure 1). If the water temperature becomes too high for some reason, the operator has to close the gas valve a bit, just enough to bring the temperature back to the desired value. If the water becomes too cold, he has to open the gas valve.

Figure 1. An operator doing manual control.

Feedback Control
The control task done by the operator is called feedback control, because the operator changes the
firing rate based on feedback that he gets from the process via the temperature gauge. Feedback
control can be done manually as described here, but it is commonly done automatically, as will be
explained in the next section.

Control Loop
The operator, valve, process, and temperature gauge form a control loop. Any change the operator makes to the gas valve affects the temperature, which is fed back to the operator, thereby closing the loop.

Automatic Control
To relieve our operator from the tedious task of manual control, we should automate the control loop.
This is done as follows:

Install an electronic temperature measurement device.

Automate the gas valve by adding an actuator (and perhaps a positioner) to it so that it can be
driven electronically.

Install a controller (in this case a PID controller), and connect it to the electronic temperature
measurement and the automated control valve.

A PID controller has a Set Point (SP) that the operator can set to the desired temperature. The Controller's Output (CO) sets the position of the control valve. And the temperature measurement, called the Process Variable (PV), gives the controller its much-needed feedback. The process variable and controller output are commonly transmitted via 4-20 mA signals, or via digital commands on a fieldbus.
When everything is up and running, the PID controller compares the process variable to its set point and calculates the difference between the two signals, also called the Error (E).
Then, based on the Error and the PID controller's tuning constants, the controller calculates an appropriate controller output that opens the control valve to the right position for keeping the temperature at the set point. If the temperature should rise above its set point, the controller will reduce the valve position, and vice versa.

Figure 2. A PID controller doing automatic control.

PID Control
PID controllers have three control modes:

Proportional Control

Integral Control

Derivative Control

Each of the three modes reacts differently to the error. The amount of response produced by each
control mode is adjustable by changing the controller's tuning settings.

Proportional Control Mode


The proportional control mode is in most cases the main driving force in a controller. It changes the
controller output in proportion to the error (Figure 3). If the error gets bigger, the control action gets
bigger. This makes a lot of sense, since more control action is needed to correct large errors.
The adjustable setting for proportional control is called the Controller Gain (Kc). A higher controller gain
will increase the amount of proportional control action for a given error. If the controller gain is set too high, the control loop will begin oscillating and become unstable. If the controller gain is set too low, it will not respond adequately to disturbances or set point changes.

Figure 3. Proportional control action.


Adjusting the controller gain setting actually influences the integral and derivative control modes too.
That is why this parameter is called controller gain and not proportional gain.

Proportional Band
While most controllers use controller gain (Kc) as the proportional setting, some controllers use
Proportional Band (PB), which is expressed in percent. Table 1 shows the relationship between Kc and
PB.

Table 1. Relationship between Controller Gain (Kc) and Proportional Band (PB = 100% / Kc); for example, Kc = 0.1 corresponds to PB = 1000%, Kc = 1 to PB = 100%, and Kc = 10 to PB = 10%.

Proportional-only Controller
Proportional controllers are simple to understand and easy to tune. The controller output is simply the
output of the proportional control mode, plus a bias. The bias is needed so that the controller can
maintain an output (say at 50%) while there is no error (set point = process variable).

Figure 4. A proportional-only controller algorithm.
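
A minimal sketch of the proportional-only calculation (simplified; the error sign convention shown assumes a reverse-acting controller, and the 0-100% output clamp is my addition):

def p_only_controller(sp, pv, kc, bias):
    """Proportional-only controller: CO = Kc * error + bias."""
    error = sp - pv                       # reverse-acting convention
    co = kc * error + bias
    return max(0.0, min(100.0, co))       # clamp the output to 0-100%

# At set point (error = 0) the output simply equals the bias, e.g. 50%.
print(p_only_controller(sp=50.0, pv=50.0, kc=2.0, bias=50.0))   # 50.0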


The use of proportional control alone has a large drawback: offset. Offset is a sustained error that cannot be eliminated by proportional control alone. For example, let's consider controlling the water level in the tank in Figure 5 with a proportional-only controller. As long as the flow out of the tank remains constant, the level will remain at its set point.

Figure 5. Level control, with operator causing a disturbance.


But if the operator should increase the flow out of the tank, the tank level will begin to decrease due to the imbalance between inflow and outflow. While the tank level decreases, the error increases and our proportional controller increases the controller output in proportion to this error. Consequently, the valve controlling the flow into the tank opens wider and more water flows into the tank.

As the level continues to decrease, the valve continues to open until it gets to a point where the inflow again matches the outflow. At this point the tank level (and the error) will remain constant. Because the error remains constant, our P-controller will keep its output constant and the control valve will hold its position. The system now remains in balance, but the tank level remains below its set point. This residual sustained error is called Offset.
Figure 6 shows the effect of a sudden decrease in fuel gas pressure to the process heater described earlier, and the response of a P-only controller. The decrease in fuel-gas pressure reduces the firing rate and the heater outlet temperature decreases. This creates an error to which the controller responds. However, a new balance point between control action and error is found, and the temperature offset is not eliminated by the proportional controller.

Figure 6. A proportional controller's response to a disturbance.


Under proportional-only control, the offset will remain until the operator manually changes the bias on the controller's output to remove the offset. It is said that the operator manually Resets the controller.

Integral Control Mode


The need for manual reset as described above led to the development of automatic reset, or the Integral Control Mode as we know it today. As long as there is an error present (process variable not at set point), the integral control mode will continuously increment or decrement the controller's output to reduce the error. Given enough time, integral action will drive the controller output far enough to reduce the error to zero.
If the error is large, the integral mode will increment/decrement the controller output fast; if the error is small, the changes will be slower. For a given error, the speed of the integral action is set by the controller's integral time setting (TI). A large value of TI (long integral time) results in a slow integral action, and a small value of TI (short integral time) results in a fast integral action (Figure 7). If the integral time is set too long, the controller will be sluggish; if it is set too short, the control loop will oscillate and become unstable. In the figure, TS is the control algorithm's execution interval, sometimes called the sampling time or scan time.

Figure 7. Integral control action and an integral-only controller's equation.


Most controllers use integral time in minutes as the unit of measure for integral control, but some others
use integral time in seconds, integral gain in repeats per minute or repeats per second. Table 2
compares the different integral units of measure.

Integral Time (minutes)            0.05    0.1     0.2     0.5     1       2       5       10      20
Integral Time (seconds)            3       6       12      30      60      120     300     600     1200
Integral Gain (repeats/minute)     20      10      5       2       1       0.5     0.2     0.1     0.05
Integral Gain (repeats/second)     0.33    0.17    0.083   0.033   0.017   0.0083  0.0033  0.0017  0.00083

Table 2. Units of the integral control mode.

Proportional + Integral Controller


Commonly called the PI controller, its controller output is made up of the sum of the proportional and
integral control actions (Figure 8).

Figure 8. The PI controller algorithm.


Figure 9 shows how the integral mode continues to increment the controller's output to bring the heater outlet temperature back to its set point. Compared to Figure 6, it is clear how integral control eliminates offset.

Figure 9. A PI controller's response to a disturbance.

Derivative Control Mode


The third control mode in a PID controller is derivative. Derivative control is rarely used in controlling processes, but it is used often in motion control. For process control it is not absolutely required, it is very sensitive to measurement noise, and it makes trial-and-error tuning more difficult.
See http://blog.opticontrols.com/archives/153 for more detail. Nevertheless, using the derivative control mode of a controller can make a control loop respond a little faster than with PI control alone.
The derivative control mode produces an output based on the rate of change of the error (Figure 10).
Derivative mode is sometimes called Rate. The derivative mode produces more control action if the
error changes at a faster rate. If there is no change in the error, the derivative action is zero. The
derivative mode has an adjustable setting called Derivative Time (TD). The larger the derivative time
setting, the more derivative action is produced. A derivative time setting of zero effectively turns off this
mode. If the derivative time is set too long, oscillations will occur and the control loop will become unstable. Again, TS is the controller's execution interval.

Figure 10. Derivative control action.


Two units of measure are used for the derivative setting of a controller: minutes and seconds.

Proportional + Integral + Derivative Controller


Commonly called the PID controller, its controller output is made up of the sum of the proportional,
integral, and derivative control actions (Figure 11). There are other configurations too.
See http://blog.opticontrols.com/archives/124 for a description.

Figure 11. The Standard (Noninteractive) PID controller algorithm.
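
To tie the three modes together, here is a compact positional implementation of the standard (noninteractive) algorithm. It is a teaching sketch only: it assumes consistent time units for TI, TD, and TS, and it omits the derivative filtering, anti-windup, and bumpless-transfer logic that real controllers include.

class SimplePID:
    """Standard (Noninteractive) PID controller, positional form."""
    def __init__(self, kc, ti, td, ts, bias=0.0):
        self.kc, self.ti, self.td, self.ts = kc, ti, td, ts
        self.integral = bias        # the integral term also carries the bias
        self.prev_error = 0.0

    def update(self, sp, pv):
        error = sp - pv
        self.integral += self.kc * (self.ts / self.ti) * error          # integral action
        derivative = self.kc * self.td * (error - self.prev_error) / self.ts
        self.prev_error = error
        co = self.kc * error + self.integral + derivative
        return max(0.0, min(100.0, co))   # clamp the output to 0-100%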


PID control provides more control action sooner than is possible with P or PI control. This reduces the effect of a disturbance and shortens the time it takes for the process variable to return to its set point.

Figure 12. A PID controller's response to a disturbance.


Figure 13 compares the recovery under P, PI, and PID control of the process heater outlet temperature
(PV) after a sudden change in fuel gas pressure as described above.

Figure 13. The responses of P, PI, and PID controllers to a disturbance.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 3. PID Controllers

13 Responses to PID Controllers Explained

mohd redzuwan:
December 13, 2011 at 9:14 pm
Hi, I'm looking forward to learning more about PID controllers and am trying to find a suitable book or reference on this topic, since I teach a control engineering subject. Thanks

Jacques:
December 14, 2011 at 1:36 pm
Mohd, you could take a look at this book: Process Control for Practitioners
Jacques

sn Maiti:
June 19, 2012 at 12:03 am
While I appreciate the physical example given for the proportional control action, could you please explain the integral action with a physical explanation? From the mathematics, the reset action is not comprehensible.

Jacques:
June 19, 2012 at 9:03 pm
Maiti: Here is a practical example of everyday integral action. You perform a type of integral action when you adjust your shower water's temperature. Let's say the water is way too cold. You might know from experience that you have to turn the hot water tap at least 1/4 turn. This is equivalent to proportional action. Then the temperature might be closer, but still not exactly on target. So you slowly turn the tap some more. The closer you get to the desired temperature, the slower you turn the tap. You continue turning the tap slower and slower until the temperature is exactly on target, and then you stop turning the tap. This secondary action is a pseudo integral action. The larger the error, the faster the corrective action is done, and it continues until the error is zero.

Ahniel:
July 22, 2012 at 12:10 pm
What is manual bias in a controller?

Jacques:
July 23, 2012 at 12:48 pm
Let's say a proportional-only controller with a setpoint of 70 kg/hr is in manual, and the output is zero. The operator increases the output manually to 40% to bring the actual flow to 70 kg/hr. Then he puts the controller in auto. Now, since CO = Kc x Error for a proportional controller, and the error is zero, one would expect the output to drop to 0%. However, the controller output remains at 40%. That is the manual bias that biases the output by 40%. So actually CO = Kc x Error + Bias.

Nilesh:
May 15, 2013 at 9:51 am
It was quite nice going through it... thanks. These examples helped me a lot in understanding it physically.

Darus:
November 26, 2013 at 10:03 am
I have a very simple question. I have a secondary controller in a cascade loop that is controlling 3 control valves, maintaining the secondary's set-point so as to maintain the primary's set-point. Before any tuning is attempted, wouldn't the first thing to do be to try to linearize each valve independently, as much as is possible with the controller/positioner you have? As in Honeywell analog outputs only having 4 points for non-linear correction. Some positioners also have more advanced non-linear correction capabilities, but those come with costs, and that might not even be a consideration. I have been in the PID control tuning business one way or another for over 20 years. I am looking for confirmation on what I consider the first and foremost property necessary to have a 1 chance in 1000000 times to control the primary (control set-point) in all ranges. Thank You!

Jacques:
November 26, 2013 at 12:37 pm
Darus Yes, if you are looking for the best possible performance from the flow control loops,
you should do tests or use historical data to determine if the valves have a linear installed flow
characteristic. Considering the normal operating range of each valve, if the maximum gain is
more than 2 to 3 times the minimum gain, you should consider linearizing the flow
characteristic. A four-point characterizer should be sufficient for this except for valves that
have an S-shaped characteristic, such as butterfly valves (and dampers). You can also use a
function generator or F(x) block to do the linearization. I don't like putting characterizers in
positioners, because when a positioner is replaced the chance is good that the characterizer
will not be configured in the new one. You should also implement gain scheduling in the
primary loop if your system sometimes uses fewer than three valves for control.

aditya:
February 15, 2014 at 10:01 pm
Sir,
Can you please explain the proportional controller's offset error with another example?
When inflow matches the outflow, the level will remain constant, but our measured
variable, i.e. the level, will still be different from the setpoint, won't it? So the controller will have to open up
the control valve more and more so that the level reaches the setpoint again until the error is zero. So where does
the offset come into the picture?

Jacques:
February 22, 2014 at 1:49 pm

A mechanical example of a P-only controller is a spring on which a weight hangs. If you add
more weight, the spring stretches out a bit more to uphold the additional weight. The difference
in spring length is offset.

Leo:
June 7, 2014 at 6:14 am
I don't understand Table 1.
0.1 is 10% and so on. In my mind the left or right column should be inverted.

Jacques:
June 8, 2014 at 10:53 am
Leo: PB is not simply Kc expressed in percent; PB = 100% / Kc, or put differently, Kc = 100% /
PB. Table 1 is correct.


Settings in the Controller were Closer than they Appeared

May 17, 2012

Before I do step-testing to analyze and tune a control loop, I always take a look at the current tuning
settings in the controller.

The controller's gain setting gives me some indication of the sensitivity of the process, e.g. if
the controller gain is 0.1 the process could be very sensitive to controller output changes.

The controller's integral time gives me an idea of the speed of the process dynamics, i.e. a
short integral time usually means fast process dynamics and vice versa.

The derivative time (if used) can reveal if the last person tuning the loop lacked understanding
of the tuning process, e.g. if the derivative time is set to more than half the integral time, or less
than one-eighth of it.

Earlier this month I optimized control loops on an oil platform. A few of the loops were oscillating. One of
the oscillating loops, a gas pressure control loop on a separator, had a controller gain of 16! I facetiously
told the control engineer: "Well, there's your problem!" A value of 16 did seem like an abnormally large
controller gain, but I know there are many exceptions from normal in process control.
A closer look revealed the reason for the high controller gain. Even though the set point was set to the
normal operating pressure of 200 PSI, the pressure transmitter was calibrated to measure between 0
and 4000 PSI. So the operating pressure was at only 5% of the measurement range! A more
appropriate measurement range would have been 0 to 400 PSI, since the maximum design pressure for
the vessel was 380 PSI. In this case, the calibration range was ten times larger than it should have
been. Considering that the measurement span was ten times over-ranged, the controller gain had to be
ten times larger than normal to compensate. This means the effective gain of the controller was only
1.6, which is a reasonable value, especially for gas pressure control. In other words, the high controller
gain was not responsible for causing the control loop instability. It turned out that the control loop was
oscillating because of control valve stiction.
Based on these findings I recommended a replacement / recalibration of the pressure transmitter and
the subsequent reranging of the signals in the DCS. After doing this, the controller gain must be set to
1.6. I also recommended that the sticky control valve be repaired or replaced to fix the oscillations.
The high controller gain being cancelled out by the large measurement span reminded me of the warning on a
passenger-side rear-view mirror: "Objects in the mirror are closer than they appear."
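The arithmetic behind that "effective gain" is easy to check. Here is a minimal Python sketch (the helper name is mine; the numbers are the ones from this story) showing how re-ranging a transmitter changes the controller gain needed to keep the same overall loop gain:

def rescaled_controller_gain(kc_old, span_old, span_new):
    """Controller gain that preserves the same loop gain after the
    transmitter span is changed (assumes Kc acts on the PV in % of span)."""
    # A wider span makes the same pressure change a smaller % of range,
    # so the controller gain has to grow by the same factor to compensate.
    return kc_old * span_new / span_old

# From the story: Kc = 16 with a 0-4000 PSI range.
# Re-ranging to 0-400 PSI means the controller gain should drop to 1.6.
print(rescaled_controller_gain(16.0, span_old=4000.0, span_new=400.0))  # -> 1.6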


Learn more about controller settings from the book Process Control for Practitioners.
Try it out for yourself using the OptiControls Loop Simulator.
Stay Tuned!
Jacques Smuts
Founder and Principal Consultant
OptiControls Inc.

Unraveling Controller Algorithms


October 27, 2012

Someone recently asked me which controller algorithm the Emerson Provox PID controller uses. She
pointed me to the PID equations given in the Provox manual (Figure 1).

Figure 1. PID Algorithm given in the Emerson Provox Manual

The author of this documentation has obviously given little thought to the information sought by the end
user:
1. What algorithm does the controller use (interactive, noninteractive, parallel)?

2. Is the integral setting in units of time (min/sec) or gain (repeats/min, repeats/sec)?

3. Do the proportional and derivative terms act on error or PV?

The answers to these questions are critical for tuning the controller and understanding its response to
setpoint changes. But the information given in the manual does not provide a single one of these pieces of
information in a straightforward manner. (It may seem obvious from Figure 1 that the integral setting is in
units of time, but this is wrong.)
To help decipher controller algorithms as documented by many manufacturers, I thought it may be
useful to summarize the building blocks of a PID controller and their Laplace representations (Figure 2).

Figure 2. Controller algorithm building blocks used in manufacturer documentation.

TI = Integral time
TD = Derivative time
TF = Filter time constant
a = Derivative filter ratio. Also called derivative gain limiter, or rate action limiter. Usually set between 0.1
and 0.125.
Now back to the Provox PID algorithm
Rearranging the equation at the top of Figure 1 to reflect the blocks given in Figure 2, and substituting
IVP (implied valve position) with CO (controller output) we obtain the equation in Figure 3.

Figure 3. Rearranged Provox PID equation. (Note that 1/Ti should actually be Ki - see text).

The equation in Figure 3 indicates that the Provox uses the interactive PID control algorithm because
the derivative term is multiplied by the proportional and integral terms, not added to them. The
rearranged equation in Figure 3 also tells me that the Provox uses a lead-lag block, which provides
derivative action with the numerator (lead) and derivative filtering with the denominator (lag). This
answers the first question.
The equation and text in Figure 1 would have led me to believe that the controller uses integral time,
but WATCH OUT, the part of Emerson's manual I pasted in Figure 1 turned out to be wrong in this
respect! Many pages later, the Provox manual says that the integral tuning constant (called RESET) is
actually a gain expressed in repeats per minute. In their own peculiar way, the second question has
been answered.
Finally, doing some extensive rearrangement of the math at the bottom of Figure 1, I concluded that the
derivative term acts on PV and not on error. Admittedly, the latter is difficult to do with the interactive
controller algorithm. Luckily, the manual actually tells the reader that "the set point term of the PID algorithm
has been isolated so that rate action occurs only on changes to the PV." If you know what you are
looking for, this line says it. Tediously, the last question was answered.
To summarize, the Emerson Provox PID controller uses:

1. The interactive control algorithm

2. Integral gain in repeats/minute

3. Derivative on PV and proportional on error

As a consultant working on many controller types, I have been compiling a database of controller
algorithms and tuning units over the past 20+ years. I learned that you often have to rearrange the
manufacturers' equations into a familiar format to see what algorithm they are using (Emerson is not the
only culprit here, and not the worst I have seen). And you should question the accuracy of the
information given in controller equations; try to find text to back it up.
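If you need to move settings between algorithm types, the conversion itself is mechanical. The sketch below (Python; the helper names are mine, and it uses the commonly published series-to-ideal relationships rather than anything taken from the Provox manual) converts interactive (series) settings with reset in repeats per minute into noninteractive (ideal) settings with integral time in minutes:

def reset_rate_to_integral_time(repeats_per_min):
    """Reset rate in repeats/minute -> integral time in minutes."""
    return 1.0 / repeats_per_min

def series_to_ideal(kc, ti, td):
    """Convert interactive/series PID settings (Kc, Ti, Td) to the
    noninteractive/ideal form using the standard interaction factor."""
    f = 1.0 + td / ti
    return kc * f, ti * f, td / f

# Hypothetical settings: Kc = 2.0, reset = 0.5 repeats/min, Td = 0.25 min
ti = reset_rate_to_integral_time(0.5)            # 2.0 minutes
print(series_to_ideal(kc=2.0, ti=ti, td=0.25))   # -> (2.25, 2.25, 0.222...)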
Stay tuned,
Jacques Smuts, author of Process Control for Practitioners.
Posted in 3. PID Controllers

5 Responses to Unraveling Controller Algorithms

Allan Zadiraka:
November 2, 2012 at 7:56 pm
Jacques
Very interesting article, especially when read with your linked article on PID Controller
Algorithms. I disagree with your statement in that article that the parallel arrangement is non-intuitive and should be avoided.
I spent most of my career with the Bailey 721/820 and Network 90 function codes 18 & 19
controllers which include both controller (Kc) and proportional gains (Kp). My training was to
keep the controller gain at 1.0 since a parallel arrangement is much easier to tune. Changing
Kc means you have to change Ki & Kd as well.
To me, controller gain is a legacy of the mechanical arm adjustment in pneumatic controllers
which needs to be eradicated.
I have no problem with using PID control, just with the cookie cutter PID controllers that we
are stuck with.

Said:
November 16, 2012 at 5:10 pm
We are working on a similar quadrotor project and we are newbies in control algorithms.
We have a working IMU now with precise Euler angle outputs. Our struggle is with the PID
control. We just implemented a very simple PID control and tried tuning it to balance just the pitch
axis while keeping the other axis fixed on a stand. However, we could not get a good
enough result, especially at high throttle. I would really appreciate it if you could give us some
feedback on the PID control you implemented. Thanks, Guinness

Jacques:
November 25, 2012 at 9:27 pm
We have to use the PID control algorithm supplied by the control system vendor in their
hardware. That is why we have to understand what algorithm they provide so that we can
tune it properly. If you tell me more about your algorithm and tuning methods I might be able
to give you some direction.
- Jacques

Anu:
November 18, 2013 at 11:05 pm
Interesting article. I have got a driver for a dc motor. I know my system and motor
specifications. The driver software allows the user to decide the gains of the controller. But
I am required to tune the PI controller. How will I know what algorithm they have coded in the
DSP of the driver? Without knowing the algorithm, how am I supposed to tune the controller?
Please do give your inputs and suggestions.

Jacques:
November 20, 2013 at 7:33 am
Anu: I suggest you contact the supplier to obtain the algorithm and the units of measure for
the tuning parameters. You could potentially also determine the algorithm by looking at how the
controller reacts to a setpoint change while holding the process variable constant when using
different values for the tuning parameters, e.g. 0 and 1; 1 and 0; 1 and 1.

4. Controller Tuning
o Cohen-Coon Tuning Rules
o Comments on the Ziegler-Nichols tuning method
o Detuning Control Loops
o Is Lambda a Bad Tuning Rule?
o Lambda Tuning Rules
o Level Controller Tuning
o Minimum IAE Tuning Rules
o Quarter Amplitude Damping
o Surge Tank Level Control
o Tank Level Tuning Complications
o Tuning Rule for Dead-Time Dominant Processes
o Typical Controller Settings
o When to Use which Tuning Rule
o Ziegler-Nichols Closed-Loop Tuning Method
o Ziegler-Nichols Open-Loop Tuning Rules

Cohen-Coon Tuning Rules


March 24, 2011

Based on the number of Google searches in 2010, the Cohen-Coon tuning rules are second in
popularity only to the Ziegler-Nichols tuning rules. Cohen and Coon published their tuning method in
1953, eleven years after Ziegler and Nichols published theirs.

More Flexible than Ziegler-Nichols


The Cohen-Coon tuning rules are suited to a wider variety of processes than the Ziegler-Nichols tuning
rules. The Ziegler-Nichols rules work well only on processes where the dead time is less than half the
length of the time constant.

The Cohen-Coon tuning rules work well on processes where the dead time is less than two times the
length of the time constant (and you can stretch this even further if required).
Cohen-Coon provides one of the few sets of tuning rules that includes rules for PD controllers, should you
ever need them.

Quarter-Amplitude Damping
Like the Ziegler-Nichols tuning rules, the Cohen-Coon rules aim for a quarter-amplitude
damping response. Although quarter-amplitude damping tuning provides very fast disturbance
rejection, it tends to be very oscillatory and frequently interacts with similarly-tuned loops. Quarter-amplitude
damping tuning also leaves the loop vulnerable to going unstable if the process gain or
dead time doubles in value. However, the easy fix for both problems is to reduce the controller gain by
half, e.g. if the rule recommends using a controller gain of 1.8, use only 0.9. This will prevent the loop
from oscillating around its set point as described above, and will provide an acceptable stability margin.

Target PID Controller Algorithm


There are three types of PID controller algorithms: Interactive, Noninteractive, and Parallel. The Cohen-Coon
tuning rules were designed for controllers with the noninteractive controller algorithm. If you are
not using the derivative control mode (i.e. using P, PI, or PD control), the rules will also work for the
interactive algorithm. However, if you are using derivative (i.e. PID control) on an interactive controller,
or if your controller has a parallel algorithm, you should convert the calculated tuning settings to work on
your controller.

Noninteractive Controller Structure

A Note on Integral Time


The original Cohen-Coon paper expressed the tuning constant for the integral control mode in terms of
reset rate (or integral gain) in repeats per minute. Virtually all the modern texts on process control use
integral time, and this article follows that trend. Therefore, the tuning rules below use integral time. If
your controller uses integral gain or reset rate, you'll have to invert the calculated integral time (use
1/Ti).
Also, if your controller's integral time unit is in minutes, you must make your measurements of dead time
and time constant in minutes. Likewise, if your controller uses seconds, make your measurements in
seconds.

When to use the Cohen-Coon Tuning Rules


The Cohen-Coon tuning rules are suitable for use on self-regulating processes if the control objective is
having a fast response, but I recommend you divide the calculated controller gain by two, as described
above.
If the control objective is to have a very stable, robust control loop that absorbs disturbances, rather use
the Lambda tuning rules.

Tuning Procedure
Assuming the control loop is linear and the final control element is in good working order, you can
continue with tuning the controller. The Cohen-Coon tuning rules use three process characteristics:
process gain, dead time, and time constant. These are determined by doing a step test and analyzing
the results.

Step Test for Tuning

1. Place the controller in manual and wait for the process to settle out.

2. Make a step change of a few percent in the controller output (CO) and wait for the process
variable (PV) to settle out at a new value. The size of this step should be large enough that the
process variable moves well clear of the process noise/disturbance level. A total movement of
five times the noise/disturbances on the process variable should be sufficient.

3. Convert the total change obtained in PV to a percentage of the span of the measuring device.

4. Calculate the process gain (gp) as follows:
o gp = change in PV [in %] / change in CO [in %]

5. Find the maximum slope on the PV response curve. This will be at the inflection point (where
the PV stops curving upward and begins curving downward). Draw a line tangential to the PV
response curve through the point of inflection. Extend this line to intersect with the original level
of the PV (before the step change in CO). Take note of the time value at this intersection.

6. Measure the dead time (td) as follows:
o td = time difference between the change in CO and the intersection of the tangential
line and the original PV level.

7. Calculate the value of the PV at 63% of its total change. On the PV reaction curve, find the
time value at which the PV reaches this level.

8. Measure the time constant (Greek symbol tau) as follows:
o tau = time difference between the intersection at the end of the dead time, and the PV
reaching 63% of its total change.

9. Convert your measurements of dead time and time constant to the same time-units your
controller's integral mode uses. E.g. if your controller's integral time is in minutes, use minutes
for this measurement.

10. Do two or three more step tests and calculate process gain, dead time, and time constant for
each test to obtain a good average of the process characteristics. If you get vastly different
numbers every time, do even more step tests until you have a few step tests that produce similar
values. Use the average of those values.

11. Calculate new tuning settings using the Cohen-Coon tuning rules below. Note that these rules
produce a quarter-amplitude damping response. See the next step. (A code sketch of the commonly
published Cohen-Coon formulas follows this procedure.)

The Cohen-Coon Tuning Rules

12. Divide the calculated controller gain by two to reduce oscillations and improve loop stability.

13. Compare the newly calculated controller settings with the ones in the controller, and ensure
that any large differences in numbers are expected and justifiable.

14. Make note of the previous controller settings, the new settings, and the date and time of
change.

15. Implement and test the new controller settings. Ensure the response is in line with the overall
control objective of the loop.

16. Leave the previous controller settings with the operator in case he/she wants to revert back to
them and cannot find you to do it. If the new settings don't work, you have probably missed
something in one or more of the previous steps.

17. Monitor the controller's performance periodically for a few days after tuning to verify improved
operation under different process conditions.
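For reference, here is a minimal Python sketch of the commonly published Cohen-Coon formulas for PI and PID controllers (noninteractive algorithm, integral and derivative in units of time). The helper names are mine, and the coefficients are the textbook versions, which may differ slightly in form from the table in the figure above, so treat this as a sanity-check aid rather than a substitute for it. The divide-by-two detuning from step 12 is included as the stability-margin parameter sm:

def cohen_coon_pi(gp, td, tau, sm=2.0):
    """Cohen-Coon PI settings from a step test, detuned by stability margin sm.
    gp: process gain [%/%]; td, tau in the controller's integral time units."""
    r = td / tau
    kc = (1.0 / gp) * (tau / td) * (0.9 + r / 12.0)
    ti = td * (30.0 + 3.0 * r) / (9.0 + 20.0 * r)
    return kc / sm, ti

def cohen_coon_pid(gp, td, tau, sm=2.0):
    """Cohen-Coon PID settings from a step test, detuned by stability margin sm."""
    r = td / tau
    kc = (1.0 / gp) * (tau / td) * (4.0 / 3.0 + r / 4.0)
    ti = td * (32.0 + 6.0 * r) / (13.0 + 8.0 * r)
    td_deriv = 4.0 * td / (11.0 + 2.0 * r)
    return kc / sm, ti, td_deriv

# Hypothetical step-test results: gp = 1.5 %/%, td = 0.5 min, tau = 4 min
print(cohen_coon_pi(1.5, 0.5, 4.0))
print(cohen_coon_pid(1.5, 0.5, 4.0))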

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

*G.H. Cohen and G.A. Coon, Theoretical Consideration of Retarded Control, Trans. ASME, 75, pp. 827-834, 1953
Posted in 4. Controller Tuning

2 Responses to Cohen-Coon Tuning Rules

ija:
December 22, 2011 at 8:33 am
Why can't a water level loop use Cohen-Coon to calculate the gain and integral time, instead of
using the Ziegler-Nichols method to calculate the gain and integral time?

Jacques:
December 23, 2011 at 10:34 pm
Ija,
The Cohen-Coon tuning method requires a measurement of the process time constant, and
in the way we normally model integrating processes (like liquid level) this measurement is not
available.
The original Ziegler-Nichols method does not require a measurement of the process time
constant and therefore lends itself very well to tuning level controllers for a fast response. See
this post for details: http://blog.opticontrols.com/archives/697

- Jacques

Comments on the Ziegler-Nichols tuning method


January 11, 2010

The overwhelming majority of control practitioners are familiar with the Ziegler-Nichols tuning methods.
If not from personal experience, then at least from reading or hearing about the tuning rules they
developed. Although their tuning rules are widely published and referenced (do a Google search on
Ziegler-Nichols), I have met only a few control practitioners who have actually read their original
paper, "Optimum Settings for Automatic Controllers," published in Transactions of the A.S.M.E., November
1942, and pondered the fundamental premises of their work and its applicability to processes in general.
I think the Ziegler-Nichols tuning rules were indeed ground-breaking and are still elegant; however, there
are a few potential problems to consider when applying these rules.
Issue #1: Quarter-amplitude damping
Ziegler and Nichols chose quarter-amplitude damping to be the optimum control loop response. Although
this result lies neatly in the middle between a completely dead controller and an unstable control loop,
you should realize that quarter-amplitude damping, by design, causes the process to overshoot its set
point and to oscillate around it a few times before eventually settling down. For many processes,
overshoot cannot be tolerated and oscillations are a bad thing. For these sensitive processes, the
Ziegler-Nichols tuning rules cannot be used, unless the controller is detuned from the original settings.
Issue #2: Low robustness (close to unstable)
Ziegler and Nichols describe in detail how they designed their tests and how they arrived at the
optimum tuning settings. Quite simply, they increased controller gain until they reached the point of
instability and then backed the controller gain down by a factor of 0.5 (e.g. if a controller gain of 0.8
makes a loop border-line unstable, they recommend using a gain of 0.4). This gave them exactly the
quarter-amplitude damping they were aiming for, but it makes the margin to loop instability very small.
An equal-percentage control valve's gain can easily increase by a factor of 10 as the valve position
changes between closed and fully open. The gain of a feed heater temperature loop can quadruple if
the process flow through the heater goes from maximum down to 25%. In both of these examples (and
numerous others), a control loop tuned with the Ziegler-Nichols rules can quickly go unstable.
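To make that 0.5 factor concrete, here is a minimal Python sketch (the function name is mine) of the commonly published closed-loop, ultimate-sensitivity settings. As this article argues, these settings land very close to instability, so they normally still need further detuning:

def zn_ultimate(ku, pu, mode="PID"):
    """Classic Ziegler-Nichols closed-loop (ultimate sensitivity) settings.
    ku: ultimate gain; pu: ultimate period (in the controller's time units).
    Returns (Kc, Ti, Td); Ti/Td are None where not applicable."""
    if mode == "P":
        return 0.5 * ku, None, None
    if mode == "PI":
        return 0.45 * ku, pu / 1.2, None
    if mode == "PID":
        return 0.6 * ku, pu / 2.0, pu / 8.0
    raise ValueError("mode must be 'P', 'PI' or 'PID'")

# The example above: a gain of 0.8 makes the loop border-line unstable (Ku = 0.8),
# and the loop cycles with, say, a 3-minute period (Pu = 3).
print(zn_ultimate(0.8, 3.0, "P"))    # -> (0.4, None, None)
print(zn_ultimate(0.8, 3.0, "PID"))  # -> (0.48, 1.5, 0.375)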
Issue #3: Poor response on dead-time-dominant processes
Under Ziegler-Nichols tuning, the controller's integral setting is set proportional to the process dead
time: the longer the dead time, the slower the integral action will be. This is acceptable where dead times are
short in relation to the process time constant, but not so for dead-time-dominant processes. For the
latter, the integral action becomes too slow, and the loop's ability to recover from a process upset is
diminished.
Issue #4: Very sensitive to underestimating the dead time
The Ziegler-Nichols open-loop response test (called the reaction curve method in their paper) requires
graphically measuring the process dead time and time constant on a plot of the process response after
changing the controller's output. If the dead time is significantly smaller than the time constant, its
measurement becomes exceptionally difficult, especially due to the effect of process noise and
disturbances on the process response curve. Because the Ziegler-Nichols tuning method relies on an
accurate measurement of dead time for the calculation of controller gain, integral, and derivative
settings, an incorrectly measured dead time will result in a very poorly tuned controller.
Issue #5: Very sensitive to control valve problems
The ultimate cycling, or ultimate sensitivity, method of Ziegler-Nichols tuning can produce grossly
incorrect controller settings if the control loop has a faulty control valve. This tuning method requires a
very linear response between the controller output and process response to obtain the ultimate gain and
ultimate period. Control valve dead band and stiction will affect both the ultimate gain and ultimate
period, and you will end up with a poorly tuned control loop.
General considerations
In addition to the issues listed above, there are a few additional items to keep in mind when using the
Ziegler-Nichols tuning rules:

The time units of your process measurements must be the same as your controller's integral
and derivative time units. For example, if your controller's integral and derivative time units are in
minutes, make your process measurements in minutes or convert them to minutes.

The rules were developed for the series (interactive) controller algorithm. Controller settings
calculated using the Ziegler-Nichols tuning rules must be converted when applied to PID
controllers with the ideal (non-interactive) algorithm and PID or PI controllers with the parallel
algorithm.

The original rules had an integral unit (reset rate, as they called it in their paper) of 1/minute,
while many controller types have their integral unit in minutes instead (or, of course, 1/second or
seconds). Also, the rules calculate controller gain (called sensitivity in their paper) and not
proportional band, which is used by a few controller types. Be sure to do the proper conversions
from the calculated controller settings in the units used by Ziegler-Nichols to the units used by
your controller.

In Ziegler and Nichols' defense


In Ziegler and Nichols' defense, I'll be quick to point out that many of these issues and most of these
considerations apply to several other tuning rules too. It is important that you, as a controls practitioner,
don't just blindly apply a tuning rule, but that you understand its objectives and how controllers and
tuning rules work.

One Response to Comments on the Ziegler-Nichols tuning method

Glen:
November 14, 2010 at 8:34 am
The problem is that Ziegler-Nichols is based on a faulty premise. One normally does not
adjust the controller gains to alter time domain response. Time domain response should be
adjusted via a prefilter. One adjusts the controller to obtain desired frequency domain
characteristics.
The other fundamental flaw involved here is that the PID controller is a low order controller.
Such a low order controller unnecessarily restricts the ability to realize desired frequency
domain characteristics.

Detuning Control Loops


October 3, 2012

Many tuning rules are designed to produce a super-fast control loop response. These rules
include Ziegler-Nichols, Cohen-Coon, and Minimum IAE tuning rules. When tuned with one of these
fast-response tuning rules, the controller responds so aggressively to disturbances and setpoint
changes that the process variable overshoots its setpoint and oscillates around the setpoint a few times
before it finally settles out. You might have heard of the Quarter-Amplitude-Damping response that
Ziegler and Nichols and others aimed for (Figure 1).

Figure 1. Simulated response of a temperature control loop tuned for Quarter-Amplitude Damping.

Perhaps even more concerning than the oscillatory response, is the fact that these aggressive tuning
methods push a control loop very close to instability. This could become a huge problem because a
small change in the process gain or dynamics can cause a control loop tuned in this way to become
completely unstable. We say the control loop is not robust.
Detuning a Controller
Since oscillatory response and low robustness can both be detrimental to control loop performance, it is
therefore good practice to tune controllers in a way that will produce a more stable control loop. Starting
from tuning settings calculated with an aggressive tuning rule, we can detune the controller to obtain a
more stable response and increased robustness. In this sense, detuning is a good thing and it simply
means that we are slightly backing away from the aggressive, oscillatory, and semi-unstable tuning, to
obtain a loop that is less oscillatory and more tolerant to changes in process characteristics.
On controllers with interactive and noninteractive algorithms, detuning is done by simply reducing the
controller gain (Kc). Detuning the controller by a factor of two simply means dividing the controller gain
by two. For simplicity, I call this detuning factor the stability margin (SM):
Kc to use = (calculated Kc) / SM.
On controllers with the parallel algorithm, the proportional, integral, and derivative gains need to be
lowered in equal proportions.
Detuning a control loop by too little leaves it oscillatory and close to instability. Detuning a control loop
by too much makes it very stable and robust, but slow to respond to disturbances and setpoint changes.
Detuning the controller by a factor of 2 to 3 (using a stability margin of 2 to 3) is normally sufficient for
eliminating oscillations and improving loop robustness.

Figure 2. Simulated response of a control loop tuned for quarter-amplitude-damping response (SM = 1),
compared to detuning it by factors of 2 and 4 (SM = 2 and SM = 4).

To summarize, if you use one of the aggressive tuning rules to calculate controller settings, you should
detune the controller by dividing the calculated controller gain by a stability margin of 2 or more to
reduce overshoot, eliminate oscillations, and improve control loop robustness.
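As a small illustration, here is a hedged Python sketch (the helper names are mine) of the detuning step described above, for both the gain-based algorithms and the parallel, independent-gains form:

def detune_series_or_ideal(kc, sm=2.0):
    """Detune an interactive or noninteractive controller:
    only the controller gain is divided by the stability margin."""
    return kc / sm

def detune_parallel(kp, ki, kd, sm=2.0):
    """Detune a parallel (independent gains) controller:
    proportional, integral, and derivative gains are all lowered in equal proportion."""
    return kp / sm, ki / sm, kd / sm

# Hypothetical aggressive settings detuned by a stability margin of 2
print(detune_series_or_ideal(1.8, sm=2.0))       # -> 0.9
print(detune_parallel(1.8, 0.6, 0.2, sm=2.0))    # -> (0.9, 0.3, 0.1)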

Stay tuned!
Jacques Smuts
Author of Process Control for Practitioners, available from
Amazon.com: http://www.amazon.com/gp/product/0983843813

Is Lambda a Bad Tuning Rule?


July 6, 2012

Control Global recently published an article titled "The Case Against Lambda Tuning." In this article,
controls guru Greg Shinskey makes the argument that Lambda tuning fails miserably for a loop
responding to a disturbance. He bases his argument on two loop performance measures:

Maximum deviation from set point

Integral of error

He then goes on to describe optimum tuning as a balance between the two performance objectives and
provides tuning guidelines for achieving this. A very nice explanation indeed.
Although the article is technically correct, I don't fully agree with its premise: that all controllers should
be tuned to respond fast and reject disturbances.
It's like arguing against buying a minivan because it does not perform as well as a Corvette. However,
minivans are less expensive and can seat several kids. So if economy and seat count are the objectives
for your large family's car, the minivan makes a fine choice. Similarly, if robustness and stability are your
control loop performance objectives, Lambda tuning makes a good choice.
I am not saying Shinskey is wrong for promoting fast loop response, but keep in mind that fast response
is only one of the possible performance objectives. Stability and robustness are also valid control
objectives, and Lambda tuning caters to these needs. It is a fine tuning rule if you want to stabilize
control loops in complex, interactive plants. For example, the paper industry makes widespread use of
Lambda tuning to keep their highly-interactive paper machines stable. It is also a unique tuning rule that

allows you to specify the loop's speed of response to a set point change, i.e. the closed loop time
constant.
You may wonder about the downside of using Lambda tuning. Well, the penalty one pays for increased
control loop stability is a slower rejection of disturbances. So, to Greg's point, Lambda tuning is not well
suited for fast disturbance rejection on processes with long time constants. But it sure is a useful tuning
rule for improving loop stability or dialing in a specific speed of response.
Once you understand the pros and cons of using Lambda tuning, you can decide if it meets your needs
for any particular control loop.
Learn more about Lambda and other tuning rules from the book Process Control for Practitioners.
Then try it out for yourself using the OptiControls Loop Simulator.
Stay Tuned!
Jacques Smuts
Founder and Principal Consultant
OptiControls Inc.
Posted in 4. Controller Tuning

One Response to Is Lambda a Bad Tuning Rule?

Víctor D. Parra:
July 10, 2012 at 3:47 pm
I agree with you.
As you exposed in your minivan vs. Corvette example, everything in engineering is a question
of use and balance. Our job as engineers is always to find the solution that matches our
requirements (functionality, safety, isolation, endurance, etc.) while respecting our
constraints (price, weight, dimensions, legislation, etc.).
A loop tuned very aggressively, intending to achieve the fastest response, can be very harmful
in another part of your plant downstream (think of a very aggressive primary controller in a
cascade loop; it will fight with the secondary controller). Your performance parameter for a
loop depends on what objective you want to achieve (speed, stability, slow movement, less
valve displacement, etc.).

Lambda Tuning Rules


November 22, 2010

The Lambda tuning rules, sometimes also called Internal Model Control (IMC)* tuning, offer a robust
alternative to tuning rules aiming for speed, like Ziegler-Nichols, Cohen-Coon, etc. Although the Lambda
and IMC rules are derived differently, both produce the same rules for a PI controller on a self-regulating
process.
While the Ziegler-Nichols and Cohen-Coon tuning rules aim for quarter-amplitude damping, the Lambda
tuning rules aim for a first-order lag plus dead time response to a set point change. The Lambda tuning
rules offer the following advantages:

The process variable will not overshoot its set point after a disturbance or set point change.

The Lambda tuning rules are much less sensitive to any errors made when determining the
process dead time through step tests. This problem is common with lag-dominant processes,
because it is easy to under- or over-estimate the relatively short process dead time. Ziegler-Nichols and Cohen-Coon tuning rules can give really bad results when the dead time is
measured incorrectly.

The tuning is very robust, meaning that the control loop will remain stable even if the process
characteristics change dramatically from the ones used for tuning.

A Lambda-tuned control loop absorbs a disturbance better, and passes less of it on to the rest
of the process. This is a very attractive characteristic for using Lambda tuning in highly
interactive processes. Control loops on paper-making machines are commonly tuned using the
Lambda tuning rules to prevent the entire machine from oscillating due to process interactions
and feedback control.

The user can specify the desired response time (actually the closed loop time constant) for the
control loop. This provides one tuning factor that can be used to speed up and slow down the
loop response.

Unfortunately, the Lambda tuning rules have a drawback too. They set the controller's integral time
equal to the process time constant. If a process has a very long time constant, the controller will
consequently have a very long integral time. Long integral times make recovery from disturbances very
slow.
It is up to you, the controls practitioner, to decide if the benefits of Lambda tuning outweigh the one
drawback. This decision must take into account the purpose of the loop in the process, the control
performance objective, the typical size of process disturbances, and the impact of deviations from set
point.
Below are the Lambda tuning rules for a PI controller. Although Lambda / IMC tuning rules have also
been derived for PID controllers, there is little point in using derivative control in a Lambda-tuned
controller. Derivative control should be used if a fast loop response is required, and should therefore be
used in conjunction with a fast tuning rule (like Cohen-Coon). Lambda tuning is not appropriate for
obtaining a fast loop response. If speed is the objective, use another tuning rule.
To apply the Lambda tuning rules for a self-regulating process, follow the steps below. Also, please read
the paragraph in red text following the tuning equations.

1. Do a step-test and determine the process characteristics


a) Place the controller in manual and wait for the process to settle out.
b) Make a step change in the controller output (CO) of a few percent and wait for the process variable
(PV) to settle out. The size of this step should be large enough that the PV moves well clear of the
process noise/disturbance level. A total movement of five times the noise/disturbances on the process
variable should be sufficient.
c) Calculate the process characteristics as follows:
Process Gain (gp)
Convert the total change in PV to a percentage of the measurement span.
gp = change in PV [in %] / change in CO [in %]
Dead Time (td)
Note: Make this measurement in the same time-units your controller's integral mode uses. E.g. if your
controller's integral time is in minutes, use minutes for this measurement.
Find the maximum slope of the PV response curve. This will be at the point of inflection. Draw a line
tangential through the PV response curve at this point. Extend this line to intersect with the original level
of the PV before the step in CO. Take note of the time value at this intersection.
td = time difference between the change in CO and the intersection of the tangential line and the original
PV level
Time Constant (tau)
Calculate the value of the PV at 63% of its total change. On the PV reaction curve, find the time value at
which the PV reaches this level
tau = time difference between intersection at the end of dead time, and the PV reaching 63% of its total
change
Note: Make this measurement in the same time-units your controller's integral mode uses. E.g. if your
controller's integral time is in minutes, use minutes for this measurement.

Step Test for Lambda Tuning

d) Repeat steps b) and c) two more times to obtain good average values for the process characteristics.
If you get vastly different numbers every time, do even more step tests until you have a few step tests
that produced similar values. Use the average of those values.

2. Pick a desired closed loop time constant (taucl) for the control loop
A large value for taucl will result in a slow control loop, and a small taucl value will result in a faster
control loop. Generally, the value for taucl should be set between one and three times the value of tau.
Use taucl = 3 x tau to obtain a very stable control loop. If you set taucl to be shorter than tau, the
advantages of Lambda tuning listed above soon disappear.

3. Calculate PID controller settings using the equations below


Controller Gain (Kc)
Kc = tau/(gp x (taucl + td))
Integral Time (Ti)
Ti = tau
Derivative Time (Td)
Td = zero.
Important Notes!

The tuning equations above are designed to work on controllers with interactive or
noninteractive algorithms, but not controllers with parallel (independent gains) algorithms.

The rules calculate controller gain (Kc) and not proportional band (PB). PB = 100/Kc.

The rules assume the controller's integral setting is integral time Ti (in minutes or seconds),
and not integral gain Ki (repeats per minute or repeats per second). Ki = 1/Ti.

Read this posting for more details.


If your controller is different from the above, simple parameter conversions will allow you to use the
Lambda rules.
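The calculation itself is only a couple of lines. Here is a minimal Python sketch of the PI rules above (the helper name is mine), using the step-test results gp, td, and tau and the chosen closed loop time constant taucl:

def lambda_pi(gp, td, tau, taucl):
    """Lambda / IMC PI settings for a self-regulating process.
    gp: process gain [%/%]; td, tau, taucl in the controller's integral time units."""
    kc = tau / (gp * (taucl + td))   # controller gain
    ti = tau                         # integral time equals the process time constant
    return kc, ti

# Hypothetical step test: gp = 1.2 %/%, td = 0.5 min, tau = 6 min.
# Choosing taucl = 3 x tau for a very stable loop:
print(lambda_pi(gp=1.2, td=0.5, tau=6.0, taucl=18.0))  # -> (~0.27, 6.0)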

Contact me to learn more, or to schedule an in-house training session on controller tuning techniques.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

*Rivera, D.E., M. Morari, and S. Skogestad, "Internal Model Control. 4. PID Controller Design," Industrial & Engineering Chemistry Process Design and Development, 25, p. 252, 1986
Posted in 4. Controller Tuning

22 Responses to Lambda Tuning Rules

Trevor:
October 5, 2012 at 4:19 pm
I have a negative relationship between my CO and my PV, in other words I raise my CO and
my PV decreases. Should I use a negative Gp (and get a resultant negative Kc) or should I
just use the absolute value?
Thanks!

Jacques:
October 5, 2012 at 4:32 pm
Trevor: Very few controllers support a negative Kc. Check your controller documentation.
Most likely you will have to use the absolute value of the calculated gp and configure your
controller to be a direct-acting controller.

Tejaswinee:
October 11, 2012 at 8:27 am
Cohen-Coon and Ziegler-Nichols can be stated as being based on a first-order plus time-delay process model,
because we estimate the same parameters: dead time, time constant, and process gain. Isn't
that so?

Jacques:
October 11, 2012 at 2:45 pm
Tejaswinee, yes the Ziegler-Nichols, Cohen-Coon, Lambda, Minimum IAE, and many other
tuning rules use the first-order + dead time process model for tuning control loops with self-regulating processes. Since the model is only an approximation of the real process, the
resultant tuning settings are only approximately correct, but in most cases good enough to
use without further adjustment, provided that the loop was detuned appropriately.

ajay:
December 13, 2012 at 6:46 am
Hey all,
Very nice article.
I have a question: has anybody tried to implement this method using any software? I am a
student and I am trying to implement a PID controller for controlling the flow rate of air. The
problem I am facing is that I want to use automatic tuning of the controller, but I am not sure which
method I should use, and there are some questions I cannot figure out how to answer.
So I just wanted to ask whether somebody who has already implemented it can help me with this.
Thanks and regards,
ajai

Jacques:
December 13, 2012 at 4:14 pm
Ajai, I suggest you search for "relay auto tuning" on Google. Most PID controllers with
integrated auto tuning use this technique in some form and it is reasonably easy to
implement. The most difficult part of implementation is handling abnormal data, and very fast,
very slow, noisy, and disturbed processes. But if this is a lab project you probably don't have
to worry about these factors.

Marco:
March 18, 2013 at 10:42 am
I did the response test with an Oxford Instrument ITC503 temperature controller. My problem
is the proportional band used by the controller is in Kelvin. How should I convert the gain?

Jacques:
March 18, 2013 at 11:59 pm
Marco,
Kc = controller gain.
PB = proportional band.
100%/Kc = PB[%] = PB[Kelvin] / (PV Range [Kelvin]) * 100%
So,
1/Kc = PB[Kelvin] / (PV Range)
Then:
PB[Kelvin] = (PV Range) / Kc
- Jacques

Marco:
March 21, 2013 at 5:22 am
One more question. When performing the step response test, should CO changes of e.g. 1%,
2%, 3% have the same time constant tau?

Jacques:
March 22, 2013 at 12:31 pm
In theory the time constant should be the same. In practice it normally varies a bit, sometimes
a lot. If the valve travels slower than the actual process responds, the apparent dead time and
time constant will get longer as the step sizes increase. Also, when making really small steps
such as 1% to 2%, some control valves react differently from one step to the next because of
friction, stiction, and deadband. I prefer making 5% steps (sometimes larger) for tuning, but
the processes don't always allow it.

RobW:
April 18, 2013 at 8:25 pm
Great article. Very direct and straight forward.
I have a question regarding integrating processes.
How are the Lambda tuning rules applied if the process is integrating?
Could you outline the tuning method for this type of process also?
Regards,
RobW

Jacques:
April 18, 2013 at 11:28 pm

Rob: With IMC tuning for level controllers, you would use the following settings for a PI
controller:
Kc = 1/ri x (2 taucl + td) / (taucl + td)^2
Ti = 2 taucl + td
Where: ri is the integration rate of the process in dPV%/(dCO% x time). Time must be in the
same units as td, taucl, and your controller's Ti.
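Those two lines translate directly into code. A minimal Python sketch of the formulas in this reply (the helper name is mine):

def imc_pi_integrating(ri, td, taucl):
    """IMC PI settings for an integrating (e.g. level) process.
    ri: integration rate in dPV%/(dCO% x time); td and taucl in the same time units."""
    kc = (1.0 / ri) * (2.0 * taucl + td) / (taucl + td) ** 2
    ti = 2.0 * taucl + td
    return kc, ti

# Hypothetical values: ri = 0.2 %/(% x min), td = 0.5 min, taucl = 1.5 min
print(imc_pi_integrating(ri=0.2, td=0.5, taucl=1.5))  # -> (4.375, 3.5)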

Greg:
November 12, 2013 at 11:28 am
What do you use for tau (for calculating taucl) when the process is integrating?
Thanks!
Greg

Jacques:
November 12, 2013 at 2:39 pm
Greg: You can normally use td x 3. But be aware that some level loops have a negligibly
short dead time compared to their integration rate ri. Then the IMC and modified Ziegler-Nichols
rules will give you a very large controller gain. If you want to use a lower gain, use a
larger taucl or use the level averaging tuning rule.

ravi:
February 26, 2014 at 4:15 am
Sir,
To calculate ri, can we use only one slope (the slope produced after making a step change
in the controller output)? And if so, what is the procedure to calculate ri, Kc, and Ti in this case?
An integrating process responds with a steady ramp instead of reaching a stable value, so
how can we calculate tau (63% of total PV change) and how can we calculate taucl?

Jacques:
March 1, 2014 at 3:34 pm
1. Yes, but only if your initial slope was zero, i.e. PV1 = PV2 using the method described on
this page: http://blog.opticontrols.com/archives/697
2. For integrating loops, pick a taucl as required by the process, but make sure it is several
times longer than the dead time. If you don't know what the process requires, then decide if you
want to use the level as a surge tank, or if you want the level to remain steady, and use the
appropriate tuning methods.

umair:
April 10, 2014 at 9:48 am
Hi, I have a question. Can we use this tuning method on cascade control, like a level and flow
cascade?

umair:
April 10, 2014 at 9:54 am

I use a Yokogawa US1000 controller and the limit of its proportional gain is 999. Should I multiply
my Kc by 10?

Jacques:
April 11, 2014 at 9:39 pm
Umair: Yes you can if you require a relatively slow response and a system that will remain
stable under most adverse and interactive conditions. When needed, I have used it on
cascade loops with great success.

Jacques:
April 11, 2014 at 9:40 pm
No. Why do you want to do that?

Daniel O.:
June 9, 2014 at 3:22 pm
When performing a bump test on a liquid flow controller I get about a 20% overshoot of the
PV before it settles to a steady state PV on bumps up and down. When calculating the gain of
the process, should I be using the maximum/minimum PV or the steady state value that occurs
5-10 seconds after the peak? Also, for calculating the integral time, which PV would
you use?
Link to the graph below (the orange is the output change of 2%, on a different scale not shown):
http://imgur.com/nYvGFT1

Jacques:
June 9, 2014 at 9:04 pm
Daniel: A simple flow loop should not overshoot after a controller output step change in
manual. I would question the valve positioner, it seems to be overshooting. See this article
on valve problems.
However, if this is the response you have to deal with, the safest is to use the max change in
PV to calculate the process gain. For Lambda tuning's integral time, the safest would also be
to calculate the time constant from 63% of the max change in PV.

Level Controller Tuning


December 16, 2011

Level control loops are common in industrial processes, but tuning level controllers can be challenging.
Many level loops oscillate, sometimes causing large parts of their adjacent processes to oscillate with
them. This article describes how to tune level controllers.

Figure 1: A level control loop.

An important thing we need to know about level loops is that liquid level in a vessel is an integrating
process, which responds differently from a self-regulating process. Therefore it has a different process
model that requires a different set of tuning rules. See my article on level control loops for some general
guidance.
Level controller tuning really is not all that difficult if you follow a few basic steps. There are always a few
outliers, but in general I like tuning level loops and find them reasonably easy to tune. If the level
controller output cascades to a flow controller (more info here), you have to tune the flow control loop
first. I'll assume you have done that already and are now ready to tune the level loop.
You should tune any controller based on the process dynamic response. Obtaining a model for the
dynamic response of a tank's level is easy:

Make sure as far as possible that the uncontrolled flow into/out of the vessel is as constant as
possible.

Place the level controller in manual control mode.

Wait for a steady slope in the level. If the level is volatile, wait long enough to be able to
confidently draw a straight line through the general slope of the level.

Make a step change in the controller output. Try to make the step change 5% to 10% in size, if
the process can tolerate it.

Wait for the level to change its slope and settle into a new direction. If the level is volatile, wait
long enough to be able to confidently draw a straight line through the general slope of the level.

Restore the level to an acceptable operating point and place the controller back in auto.

Now determine the process model:

Draw a line (Slope 1) through the initial slope, and extend it to the right (Figure 2).

Draw a line (Slope 2) through the final slope, and extend it to the left to intersect Slope 1.

Measure the time between the beginning of the change in controller output and the intersection
between Slope 1 and Slope 2. This is the process dead time (td), the first parameter you require
for tuning the controller.
Note: Express your dead time measurement in the same time-base your controller uses for its
integral time setting, i.e. minutes or seconds.

Pick any two points (PV1 and PV2) on Slope 1, located conveniently far from each other to
make accurate measurements.

Pick any two points (PV3 and PV4) on Slope 2, located conveniently far from each other to
make accurate measurements.

Calculate the difference in the two slopes as follows:


DS = (PV4 - PV3)/T2 - (PV2 - PV1)/T1
Note: Express your T1 and T2 measurements in the same time-base your controller uses for its
integral time setting, i.e. minutes or seconds.

If your PV is not ranged 0 100 %, convert DS to a percentage of the range as follows:


DS% = 100 x DS / (PV range max - PV range min)
Calculate the process integration rate (ri) which is the second and final parameter you need for
tuning the controller:
ri = DS% / dCO

Figure 2: Measurements for tuning a level loop.

Now that you have the dead time (td) and the process integration rate (ri), you can tune the controller. If
the control objective is a nice and fast response to quickly recover from disturbances, you can use a
modification of the Ziegler-Nichols (Z/N) tuning rules. The modification involves a slight detuning of the
controller because the original Z/N tuning rules result in a very aggressive loop response and low
tolerance for any change in operating conditions. I call the amount of detuning the stability margin,
denoted by SM. You should set SM to a value of 2.0 or larger. The larger you make SM, the slower the
loop will respond. In this way you can use SM as a fine-tuning factor.
Note:

The tuning rules below assume your controller's proportional setting is in gain Kc, not
proportional band, PB. If not: PB = 100 / Kc.

The tuning rules below also assume your controller's integral setting is in units of time Ti (i.e.
minutes or seconds), not repeats per time Ki. If not: Ki = 1 / Ti.

The tuning rules below also assume you have a controller with an interacting algorithm
(although they work fairly well on noninteracting algorithms too), but not a parallel algorithm. For
controllers with the parallel algorithm, you need to divide Ti by Kc, and multiply Td by Kc, to
obtain their integral and derivative settings, respectively.

See my article on PID controller algorithms for more details.

To calculate tuning constants for a PI controller:


Kc = 0.9 / (SM x ri x td)
Ti = 3.33 x SM x td
Td = 0

And for a PID controller:


Kc = 1.2 / (SM x ri x td)
Ti = 2 x SM x td
Td = td / 2
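These formulas, together with the integration-rate calculation above, fit in a few lines of Python. A minimal sketch (the helper names are mine) of the procedure described above:

def integration_rate(ds_percent, dco_percent):
    """Process integration rate ri from the change in slope DS (in % of PV range
    per time unit) and the controller output step dCO (in %)."""
    return ds_percent / dco_percent

def level_pi(ri, td, sm=2.0):
    """Modified Ziegler-Nichols PI settings for a level (integrating) loop."""
    return 0.9 / (sm * ri * td), 3.33 * sm * td

def level_pid(ri, td, sm=2.0):
    """Modified Ziegler-Nichols PID settings for a level (integrating) loop."""
    return 1.2 / (sm * ri * td), 2.0 * sm * td, td / 2.0

# Hypothetical test: a 5% CO step changed the level's slope by 1 %/min,
# with a dead time of 0.4 min, detuned with a stability margin of 2.
ri = integration_rate(ds_percent=1.0, dco_percent=5.0)   # 0.2 %/(% x min)
print(level_pi(ri, td=0.4))    # -> (~5.6, ~2.66)
print(level_pid(ri, td=0.4))   # -> (7.5, 1.6, 0.2)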

Important Note:
Some level controllers should not respond fast, e.g. when controlling the level of a surge tank. Surge
tanks need a different set of tuning rules to ensure you make maximum use of the surge capacity, while
not exceeding the upper and lower level limits. Follow this link for tuning surge tank level controllers.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 4. Controller Tuning |

Tags: level control, tuning, Ziegler-Nichols

5 Responses to Level Controller Tuning

Andrew:
December 30, 2011 at 4:33 pm
Used this method this week to tune a level loop that has not run in automatic since installation
(4+ years). Operators have been controlling level manually. Problem was that the level filled
at a faster rate than it emptied. By taking into account both the filling and emptying rates in
this formula, the loop was successfully tuned and has been running in automatic all week.
The operators are extremely happy.
Thanks for the great blog and I look forward to reading your book.
Andrew

Ted Mortenson:
November 6, 2013 at 11:37 am
Maybe I'm missing something here.
You state: "Make sure as far as possible that the uncontrolled flow into/out of the vessel is as
constant as possible. Place the level controller in manual control mode. Wait for a steady
slope in the level."
If flow in equals flow out, won't the level remain constant? How will you ever see a steady
slope in the level?
Love your blog, BTW.

Jacques:
November 6, 2013 at 8:02 pm
Ted, although it would make the calculations easier, this tuning method does not require that
inflow = outflow. In practice, it is very difficult to get the level to remain absolutely constant.
The level normally slopes up or down over time when the controller is in manual. As long as
the inflow remains constant, and the outflow remains constant, the slope will be constant.
That is why the procedure asks for a steady slope and not a constant level.

belami:
May 25, 2014 at 10:43 pm
How can we make the level control valve (or the whole level control loop) respond faster to a level
change, especially when the level rises and we don't want to overflow?
Is there any improvement we can add to the level control loop to achieve that?
Thanks

Jacques:
June 5, 2014 at 12:04 pm
Belami: It is mostly the controller gain that determines how fast the valve responds to an
increase in level. Increasing the controller gain will make the valve respond faster. However,
using too much gain will cause the loop to become unstable. Using derivative control mode
can speed up the response a little bit too. Use the tuning methods on this page. If the
response is too slow, consider using feedforward control.

Minimum IAE Tuning Rules


September 13, 2012

I came across the Minimum IAE and other error-integral tuning rules very early in my career, but until
recently I did not have the original paper describing the development of these rules. A few weeks ago I
contacted Dr. C.L. Smith to get a copy of the paper he coauthored in 1967: "Tuning Controllers with
Error-Integral Criteria" (reference at the end of this article). The error-integral tuning rules described in this
paper minimize the area that develops over time if a process variable deviates from its setpoint, shown
as shaded areas in Figure 1.

Figure 1. Shaded areas indicate the integral of absolute error after a disturbance to a process.

Explain, please!
You may wonder exactly what you are looking at in Figure 1, so let me explain. Imagine we have a
running pump for pumping chemicals into a reactor, and a second pump on standby (Figure 2). If the
operator starts the second pump, the flow rate (process variable) will increase. As a result, the flow
controller will close the control valve a little to get the flow back to setpoint. The control action is not
shown in Figure 1, but it is so strong that it over-corrects, causing the flow rate to undershoot the
setpoint and oscillate a few times before finally settling out. The shaded areas between the process
variable and setpoint are the integral of the error.

Figure 2. The total flow rate will be disturbed when pump 2 starts up.

Tuning Objectives
So, let's get back to the technical paper. The paper describes tuning rules for minimizing several error-related integrals:
1. Integral of the error squared (ISE)
2. Integral of the absolute error (IAE)
3. Integral of the absolute error multiplied by time (ITAE)
The authors recognized that a controller's integral and derivative times should be based not only on the
length of the process time constant (or dead time as in the case of Ziegler & Nichols), but also on the
ratio of dead time to time constant (td / tau).
Optimized for Disturbance Response
The tuning rules were developed for optimizing a control loop's disturbance response. Tuning for a
setpoint change will require different controller settings. However, Smith and Murrill did develop tuning
rules for minimizing IAE and ITAE on setpoint changes, but these are outside the scope of this article.
Target Process

The authors also stated that the rules were developed for a 1st-order plus dead-time process. One
shortfall is that their tuning rules were designed only for processes with time-constants equal to or
longer than dead times (tau >= td). This is not a major restriction since most processes do fall in this
category. But it excludes these tuning rules from being used on dead-time dominant processes.
Target Controller and Tuning Rules
The authors developed tuning rules for P, PI and PID controllers for a non-interacting controller algorithm
with controller gain, integral time, and derivative time. Since P-only control is hardly ever used, I list only
their PI and PID tuning rules in Table 1 below.
The process characteristics are denoted in Table 1 by:
- Process gain = gp
- Dead time = td
- Time Constant = Greek letter tau

Table 1. Minimum ISE, IAE, and ITAE Tuning Rules.

Performance Assessment
So, how well do these tuning rules perform? I tested them on simulations of flow and temperature
control loops. The PI control rules left the temperature loop with a very oscillatory response, but the PID
tuning rules worked a bit better. As expected, the PID rules did not work well at all on the flow loop. Their
Minimum ITAE tuning rules seemed to work the best in my opinion because they had the fewest
oscillations (Figure 3).

Figure 3. Minimum ITAE tuning on a simulated temperature control loop responding to a disturbance.

Low Stability Margin


My biggest concern is that all of the tuning rules pushed the control loop very close to instability. To analyze stability, we can calculate a loop's stability margin. The stability margin tells us how much the process gain can increase before a loop will become unstable with its current tuning settings. The stability margin on the temperature loop under Minimum ITAE's PI control was only 0.7. The PID loop was marginally better at 1.1. Their other tuning rules fared even worse.
The control characteristics of industrial processes can change substantially based on valve position,
process throughput, head pressure, pH, etc. Using these tuning rules on real processes will very likely
cause stability problems because of their small stability margins. In all fairness this would be similar to
using the unmodified Ziegler-Nichols, Cohen-Coon, and many other high-performance tuning rules.
The solution would be to detune the controller (use a lower controller gain) to increase the stability
margin and tolerate these changes. I normally use a stability margin of 2 to 3 on loops I tune, sometimes
even more. But this would contradict the authors' original tuning objective of super-fast response. This is yet another example of the inevitable tradeoff between a loop's speed of response and stability.

Reference to Minimum IAE Paper


Tuning Controllers with Error-Integral Criteria, A.M. Lopez, J.A. Miller, C.L. Smith, and P.W. Murrill, Instrumentation Technology, November 1967, pp. 57-62.
More Info
Read more about tuning methods and loop stability in Process Control for Practitioners.
Stay tuned!
Jacques Smuts
OptiControls Inc.
Posted in 4. Controller Tuning

3 Responses to Minimum IAE Tuning Rules

Ron Compton:
February 17, 2013 at 2:32 pm
Is loop tuning software recommended rather than manually calculating the tuning
parameters?

Jacques:
February 20, 2013 at 8:13 pm
Ron, I definitely recommend using a good software program for tuning. It makes the model
identification, tuning, and analysis so much easier. However, a tuning tool is no replacement
for understanding your process, PID controllers, and the tuning process.
Note that there are more than a dozen commercially available tuning software packages.
Make sure you try out a few and pick one that is easy to use and meets your criteria.
- Jacques

Peter Nachtwey:
April 3, 2013 at 12:04 pm
I agree that minimum IAE, ITAE or SSE will result in barely stable systems. One gets more
robust results if you add extra terms to the cost function that is being minimized. For instance
one can add a cost for the magnitude of the control output or changes in the control output.
However, if one does this then it isn't simply a minimum IAE, ITAE or SSE any more and is closer to linear quadratic control.
My other problem with minimum IAE etc. is: it minimizes which case? Minimizing the response to a step from 0 to 1 will be different from minimizing the response to a step between 0 and 2.
Also, minimum IAE etc. methods don't take into account that the control output is limited.
I calculated the minimum IAE, ITAE and SSE coefficients using Mathcad and Scilab. Scilab is
free.

Quarter Amplitude Damping


October 4, 2013

Quarter-amplitude damping is likely the best-known tuning objective, but it's a poor choice for process stability. Also called quarter-amplitude decay or QAD, it is the objective for which many tuning rules, including the famous Ziegler-Nichols and Cohen-Coon tuning rules, were designed. The idea behind quarter-amplitude damping is to eliminate any error between the setpoint and process variable very quickly. In fact, the controller responds so fast that the process variable actually overshoots its setpoint and oscillates a few times before it finally comes to rest (Figure 1). The deviation from setpoint gets smaller with each successive cycle at a ratio of 4:1. In Figure 1, the ratio of B/A = 1/4.

Figure 1. A quarter-amplitude-damping response after a process disturbance.

When developing their tuning rules, Ziegler and Nichols chose quarter-amplitude damping as the optimum control loop response. Although QAD performance lies in the middle between a completely dead controller and an unstable control loop, you should realize that quarter-amplitude damping, by design, causes the process to overshoot its set point and to oscillate around it a few times before eventually settling down. Practitioners with solid experience in controller tuning will all tell you that quarter-amplitude damping is a very poor choice for tuning industrial control loops.

Problems with Quarter-Amplitude Damping
Although the quarter-amplitude damping tuning objective provides very fast rejection of disturbances, it creates three problems:
1. It makes the loop very oscillatory, often causing interactions with similarly tuned loops. If control loops in a highly interactive process, such as a paper machine, power plant boiler, or hydrodealkylation process, are tuned for quarter-amplitude damping, oscillations affecting the entire process often occur.
2. It causes a loop to overshoot its setpoint when recovering from a process disturbance and after a setpoint change. Many processes cannot tolerate overshoot.
3. QAD-tuned loops are not very stable and have low robustness. They can very easily become completely unstable if the process characteristics change. For example, such a loop will become unstable if its process gain doubles, which can happen very easily in industrial processes.

Solution
An easy way to minimize all three problems is to reduce the controller gain (detune the controller). The
minimum reduction I recommend is to use the calculated Kc divided by two (or more if necessary). For
example, if a quarter-amplitude-damping tuning rule suggests using a controller gain of 0.9, then use
0.45 instead. This will greatly reduce oscillations and overshoot in the control loop, and it will increase
the loop's robustness by a factor of two. (Please note that if your controller uses a parallel algorithm, you have to reduce Kp, Ki, and Kd to achieve the equivalent effect.)
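As a quick illustration of the detuning advice above, here is a minimal Python sketch (my own, not from any particular vendor) showing the gain reduction for the two controller forms; the example settings are made up:

# A minimal sketch of detuning a QAD-tuned controller by a factor of two.

def detune_noninteractive(kc, ti, td, factor=2.0):
    """Interactive/noninteractive form: only the controller gain is reduced."""
    return kc / factor, ti, td

def detune_parallel(kp, ki, kd, factor=2.0):
    """Parallel form: Kp, Ki and Kd must all be reduced for the equivalent effect."""
    return kp / factor, ki / factor, kd / factor

# Example: a QAD tuning rule suggested Kc = 0.9, Ti = 2.0 min, Td = 0.5 min
print(detune_noninteractive(0.9, 2.0, 0.5))   # -> (0.45, 2.0, 0.5)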
Stay tuned!
Jacques Smuts
Principal Consultant at OptiControls, and author of Process Control for Practitioners.

Posted in 1. General, 4. Controller Tuning

One Response to Quarter Amplitude Damping

Don Parker:
October 8, 2013 at 5:11 pm
Jacques,
I have worked with boiler/turbine controls for many years and could not agree more. So many

of the processes are interactive that they must be tuned without oscillation, generally with
maximum overshoot of about 5%.
Of course there is also the problem of over-active actuators, which can cause premature
aging, wear, linkage hysteresis, etc.
I have found Lambda tuning to be a very successful method for many power plant control
loops.

Surge Tank Level Control


November 23, 2012

A surge tank is placed between two processing units to absorb flow rate fluctuations coming from the
upstream process and keep the flow rate to the downstream process more constant. To do this, the tank
level has to go up and down. Consequently, the level controller should not try to hold the level as close
as possible to its set point; the controller should simply keep the surge tank's level between its upper
and lower limits, and do this with the least possible amount of change to its output.

Figure 1. Surge tank level control loop.

Although there are other methods of controlling surge tank level, the level-averaging method [1] is
preferred by most operators and process engineers. This method minimizes control valve movement
during disturbances, keeps the level between its limits, and brings the level back to setpoint in the long
term (Figure 2). Another method of surge tank control does not bring the level back to setpoint but
potentially provides more surge capacity [2]; I'll write about that method another day.
Surge Tank Level Controller Tuning
To tune the controller for level-averaging control, you need to know the following three things:
1. The residence time of the vessel (tres)
The residence time is the time it would take for the surge tank to drain from 100% level to 0% level if
there is no flow into the tank and the outlet valve is 100% open. You can calculate this as the volume of
liquid contained in the vessel between 0% and 100% of the span of its level measurement, divided by
the maximum flow rate with the outlet valve wide open: tres = V/Qmax. Use the same engineering unit for
volume in V and Qmax. If you dont know the volume and/or maximum flow rate, you can estimate the
residence time as the inverse of the process integration rate, tres = 1/ri. You can determine ri through step
testing. Be sure to express tres in the same time-base as your controllers integral time (minutes versus
seconds).

2. The largest expected change in flow rate (fmax).


This should be expressed as a percentage of maximum valve capacity. You can review historical trends
of the loop and find the largest change the controller output has made (under automatic control) to
control the level.
3. The maximum tolerable deviation from setpoint (Lmax).
This should be expressed as a percentage of the span of the level measurement.

Once you have all of these, calculate tuning settings for the controller with the equations below.
For a controller with an interactive or noninteractive algorithm:
KC = 0.74 fmax / Lmax
TI = 4 tres / KC
TD = 0
KC is controller gain.
If your controller uses proportional band, PB = 100/KC.
TI is integral time in the same units as tres.
If your interactive or noninteractive controller uses integral gain, KI = 1/TI.
TD is the derivative time.
For a controller with a parallel algorithm:
KP = 0.74 fmax / Lmax
KI = KP^2 / (4 tres)
KD = 0
KP is proportional gain and KI is integral gain using the same time-base as tres.
If your parallel controller uses integral time: TI = 1 / KI
KD is the derivative gain.

Figure 2. Response of a surge tank level control loop to a disturbance in inlet flow rate.

Faster tuning is also possible. The following equations will produce tuning settings to bring the level
back towards the setpoint much quicker. The level will slightly overshoot the setpoint as a result of the
faster response (Figure 3).
For a controller with an interactive or noninteractive algorithm:

KC = 0.5 fmax / Lmax


TI = 0.74 tres / KC
TD = 0
For a controller with a parallel algorithm:
KP = 0.5 fmax / Lmax
KI = KP^2 / (0.74 tres)
KD = 0
The same parameter descriptions and conversions given previously apply to the faster tuning equations too.
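To tie the calculations together, here is a minimal Python sketch (my own illustration) of the level-averaging tuning equations above, for both the standard and faster settings; the example values of tres, fmax, and Lmax are made up:

# A minimal sketch of the level-averaging tuning calculations.
#   tres - residence time (same time units as the controller's integral time)
#   fmax - largest expected flow disturbance, % of valve capacity
#   lmax - maximum tolerable level deviation, % of level span

def surge_tank_tuning(tres, fmax, lmax, fast=False):
    """Return (KC, TI) for an interactive/noninteractive controller; TD is zero."""
    if fast:
        kc = 0.5 * fmax / lmax
        ti = 0.74 * tres / kc
    else:
        kc = 0.74 * fmax / lmax
        ti = 4.0 * tres / kc
    return kc, ti

def to_parallel(kc, ti):
    """Convert to parallel-form gains: KP = KC, KI = KC / TI."""
    return kc, kc / ti

# Example: tres = 30 min, fmax = 20%, lmax = 10%
kc, ti = surge_tank_tuning(30.0, 20.0, 10.0)
print(kc, ti)              # -> KC = 1.48, TI of about 81 minutes
print(to_parallel(kc, ti))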

Figure 3. A faster response to the same disturbance in inflow.


With these tuning rules, you should be able to get your surge tanks under control, and have them
respond appropriately to surges in inflow. Let me know if you need help.

Stay tuned.
Jacques Smuts, author of Process Control for Practitioners.

References:
[1] H.L. Wade, Basic and Advanced Regulatory Control: System Design and Application, 2nd Edition, ISA, 2004.
[2] F.G. Shinskey, Process Control Systems: Application, Design, and Tuning, 4th Edition, McGraw-Hill,
1996.

Tank Level Tuning Complications


November 4, 2013

Level control loops are strange creatures. This strangeness can make them difficult to tune. On
average, level control loops are tuned the worst of all process types. Although I have seen poorly tuned
loops of all types, poorly tuned level controllers typically have tuning settings that are the furthest from
optimal. Most level processes are very robust in nature, allowing them to function surprisingly well with
suboptimal tuning.

But it does not have to be this way. If controller tuning is based on the dynamic response of a process,
most level control loops are actually easy to tune and provide very robust control. However, as you
probably know, most control loops are tuned intuitively using trial and error. More often than not, this
approach results in poor control loop performance.
Case Study
A few weeks ago, I helped an engineer at a power plant with the tuning of a demineralized (demin)
water storage tank. It was a large tank about 40 feet (12 m) high and 20 feet (6 m) in diameter. Water
was pumped from the demin water production plant into the tank, and this flow rate was manipulated
with a control valve (Figure 1). Under normal operating conditions the unit consumed demin water at
an almost constant rate (most of which was discharged through the continuous boiler blowdown).

Figure 1. Demineralized water storage tank level control.

To do the tuning correctly, the engineer executed a few step tests (Figure 2) and we analyzed the data.
We calculated the process integration rate (or process gain) to be 0.0045 / minute. This means if the
level is at steady state and the controller output is changed manually by X percent, it will take 1/0.0045
minutes (3.7 hours) for the level to change by the same percentage. The dead time was measured to be
roughly 2.5 minutes.

Figure 2. The two step tests used for tuning.

Once we had this information on the dynamic properties of the process, we used the modified Ziegler-Nichols tuning rules for integrating processes and calculated new tuning settings for this control loop.
We used a stability margin of 2.5 and obtained the following tuning settings:
Controller Gain (Kc) = 32
Integral Time (Ti) = 20 minutes.
The high controller gain was a concern. Although the level was quite smooth during our step tests, a
historical trend of level revealed some jittering was present at times. And since a 1% jitter in level would
cause the controller output to jitter by 32% (Kc x delta PV), we decided to use a lower controller
gain since tight control was not a requirement. We felt that Kc = 10 would be a good compromise
between control performance and jitter tolerance.
Tuning Complications
Many level loops have small integration rates (or process gains). Integration rate (ri) is inversely
proportional to the vessel's residence time. Typically, the larger the tank, the smaller the integration rate.
The process with the smallest integration rate that I personally worked with was a city water reservoir,

which had a residence time of 48 hours (ri = 0.000347 / minute). For good control, a very low integration
rate theoretically requires a very high controller gain, sometimes in excess of 100. Practically we cannot
use controller gains of this magnitude because of the severe control action that would result from noise
and setpoint changes. (Note that one can also overcome severe control action by using a noise filter
and either the P&D-on-PV control algorithm or a setpoint filter.)
This mandatory reduction of the controller gain brings me to the reason why most level loops have
grossly suboptimal tuning settings. For integrating control loops (such as tank level), when you reduce
the controller gain you have to increase the integral time, otherwise the loop can become very
oscillatory.
Unenlightened tuners do not know of this requirement and end up using disproportionately short integral
times on level loops, resulting in very oscillatory behavior. When they try to stabilize the loop by further
reducing the controller gain, the situation deteriorates even more.
Example
For example, let's look at how Billy, our unenlightened but fictitious tuner, might have tuned the tank
level controller. Assuming he did step tests, he then used the original Ziegler-Nichols tuning rules (I did
mention he is unenlightened) for calculating the controller settings. He obtained the following controller
settings: Controller Gain (Kc) = 80 and Integral Time (Ti) = 8.3 minutes. He realized that the controller
gain of 80 was too high, and reduced it to 10. But he left the integral time at 8.3 minutes, as calculated.
Then he tested the new tuning settings and noticed overshoot and oscillations in level. Too much gain,
right? So he set the Kc value to 5 and retested the performance. The loop still oscillated with the
adjusted tuning settings, but he realized that this tuning effort was taking too much of his time, so he left
the tuning settings as they were and moved on to other work. Billy's tuning results are shown in Figure
3.

Figure 3. The result of using decreased controller gains on a level loop, while leaving the integral time at the
originally calculated value.

How it's Done


Now back to our own tuning efforts on the demin water storage tank. When the engineer and I reduced
the controller gain from 32 to 10, we simultaneously increased the integral time from 20 to 64 minutes,
which we calculated using the equation below.
Equation for calculating a new integral time when reducing the controller gain in a level loop:
Ti(new) = Ti(old) x Kc(old) / Kc(new)
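As a small illustration, this Python sketch (my own) applies the equation to the numbers from this case study:

# A minimal sketch of the integral-time adjustment: when the controller gain of an
# integrating (level) loop is reduced, the integral time must be increased by the
# same ratio to keep the loop from becoming oscillatory.

def adjust_integral_time(ti_old, kc_old, kc_new):
    """Ti(new) = Ti(old) * Kc(old) / Kc(new)"""
    return ti_old * kc_old / kc_new

# Example from the demin water tank: Kc reduced from 32 to 10, Ti originally 20 minutes
print(adjust_integral_time(20.0, 32.0, 10.0))  # -> 64.0 minutes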
Figure 4 compares the level loops response to a 5% change in outflow using the initial and refined
controller settings. The control loop is significantly more stable compared to the alternatives shown in
Figure 3.

Figure 4. Stable level control loop response obtained from increasing integral time while decreasing controller
gain.

As I said at the beginning, level controller tuning does not have to be difficult. Do step-tests to
understand the process dynamics, use proven tuning rules to calculate controller settings, and
remember to adjust the integral time inversely to any subsequent change you make in controller gain.
Stay tuned!

Jacques Smuts
Founder and Principal Consultant, OptiControls
Author of Process Control for Practitioners

Posted in 4. Controller Tuning, 8. Case Studies

2 Responses to Tank Level Tuning Complications

Nhan:
November 5, 2013 at 1:01 am
Dear Jacques,
Referring to the response curve, I calculate ri = 0.045/min. How did you arrive at 0.0045 as stated? Please advise, thanks.

Jacques:
November 5, 2013 at 6:50 am
Nhan, let's consider the first step change, dCO = 25% in size.
Before the step has any effect, the level decreased by 1% over 32 minutes: S1 = -0.031%/min.
After the step change, the level increased by 1.1% over 13 minutes: S2 = 0.085%/min.
ri = (S2 - S1) / dCO = (0.085 + 0.031) / 25
= 0.0046 (which I rounded down to 0.0045 for convenience).
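For anyone repeating this calculation on their own step-test data, here is a minimal Python sketch (my own illustration) of the same arithmetic:

# A minimal sketch of the integration-rate calculation in the reply above.
# s1 and s2 are the PV slopes (%/min) before and after the controller-output step,
# dco is the step size (%).

def integration_rate(s1, s2, dco):
    """ri = (S2 - S1) / dCO for an integrating process."""
    return (s2 - s1) / dco

print(integration_rate(-0.031, 0.085, 25.0))  # -> about 0.0046 per minute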

Tuning Rule for Dead-Time Dominant Processes


December 15, 2010

Processes with lags or time constants (tau) longer than their dead times (td) are reasonably easy to
tune. Most tuning rules work well for processes where tau > 2 td (lag dominant). The opposite is not
true. Many tuning rules work very poorly when td > 2 tau (dead-time dominant).
Lag Dominant
When a process has a time constant that is much longer than the dead time, problems like overshoot
and having to use high controller gains begin to appear. However, loops with long time constants still act in an intuitive way: if we add more control action, we can make the process respond faster, just as stepping down harder on the accelerator gets our car to the desired speed quicker.
Dead-Time Dominant
On the other side of the spectrum, when a process dead time is significantly longer than its time constant, it behaves much less intuitively: adding more control action does not make the process respond faster. For example, if your shower water is a little cold, opening the hot water tap a lot more is not going to get you to the right temperature any quicker, and it is going to have some serious side effects.
I once saw several operators struggle to manually control the outlet temperature of a three-pass kiln. The kiln was a dead-time dominant process and its dead time was about 10 minutes long. The operators would notice the temperature was below set point and increase the firing rate. When they saw no effect, they would increase the firing rate more. And then some more, and more. Finally, when the changes had made their way through the dead time, the temperature would overshoot its set point by a large margin. Then the operators would take the same actions and make the same mistakes in the opposite direction.
Needless to say, controller tuning also becomes difficult on dead-time dominant processes.
Tuning

Step response of a dead-time dominant process.

You will find that the Ziegler-Nichols tuning rules don't work well at all on a dead-time dominant process. For example, the following process characteristics were measured from the step response of a dead-time dominant process in the previous plot:
td = 0.276 minutes
tau = 0.013 minutes
gp = 0.89
Applying the Ziegler-Nichols tuning rules to this process gives the following controller settings: Kc =
0.05; Ti = 0.92 minutes. The result is an extremely sluggish control loop (see below).

Dead-time dominant loop tuned with the Ziegler-Nichols tuning rules.

The Lambda tuning rules were designed for lag dominant processes and do not work all that well on
dead-time dominant processes either. The Cohen-Coon tuning rules work much better than the Ziegler-Nichols rules, but they too aren't the best tuning rule when the dead time is five or ten times as long as
the time constant.
So what type of tuning rule will work well for controlling dead-time dominant processes? First, we need a
lag-dominant controller, to make up for the absence of lag in the process. But if we just crank up the
integral term, the loop will become unstable. So, second, we have to compensate by decreasing the
controller gain.
The Cohen-Coon PI tuning rules will work reasonably well up to td = 2 tau, but they become sluggish after
that. When td > 2 tau, it is better to use the dead-time tuning rule. It is as follows:
Kc = 0.36 / (gp * SM)
Ti = td / 3
No derivative.
SM is the stability margin and can be set to a value between 1 and 4. A value of 1 is equivalent to the
1/4-amplitude damping response. It is considered unsafe because the loop is very sensitive to changes in process conditions. A value of 2 or higher is recommended. It will reduce the overshoot, eliminate
unnecessary cycling, and make the loop far more robust to changes in process conditions.
Hint: measure dead time in the same units of time as your controller's integral setting. E.g. if your controller's Ti setting is in minutes, measure td in minutes.
Notes:
- The tuning rules above are designed to work on controllers with interactive or non-interactive

algorithms, but not controllers with parallel algorithms.


- Furthermore, they will work only on controllers with a controller gain setting and not a proportional
band (found on Foxboro I/A controllers, for example).
- The rules assume the controllers integral setting is in units of time (minutes or seconds), and not
integral gain or rate (repeats per minute or repeats per second).
If your controller is different, parameter conversions will allow you to use these rules.
Applying the dead-time tuning rules to the process described above gives the following controller
settings: Kc = 0.2; Ti = 0.092 minutes. The result is significantly better than what can be obtained with
other tuning rules.

Dead-time dominant loop tuned with the Dead-Time tuning rules.
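Here is a minimal Python sketch (my own illustration) of the dead-time tuning rule, applied to the example process above with a stability margin of 2:

# A minimal sketch of the dead-time tuning rule for PI control.
# gp and td come from the step test; SM is the chosen stability margin (1 to 4).

def dead_time_tuning(gp, td, sm=2.0):
    """Kc = 0.36 / (gp * SM), Ti = td / 3, no derivative."""
    kc = 0.36 / (gp * sm)
    ti = td / 3.0
    return kc, ti

print(dead_time_tuning(0.89, 0.276))  # -> (about 0.2, about 0.092 minutes)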

Better loop response can be obtained with a Smith Predictor, but that is more complex to implement,
more tedious to tune, sensitive to changes in process characteristics, and perhaps the topic of a future
blog.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 4. Controller Tuning

2 Responses to Tuning Rule for Dead-Time Dominant Processes

Trevor:
December 12, 2012 at 1:42 pm
I am trying to calculate the Gp of the step response from the Dead Time Dominant process
graph and not getting your value. I see the CO goes from 40.5 to 45 and the PV goes from 50
to 54. Utilizing the Gp equation given on another page, Gp = %changePV/%changeCO.
((54-50)/50)/((45-40.5)/40.5) = 0.72. The number you listed above is 0.89. Am I calculating the
Gp incorrectly?

Jacques:
December 12, 2012 at 6:26 pm
Trevor, you should convert the changes in CO and PV to a percentage of full scale. In this case I did not state the scaling, but both the PV and CO are scaled 0 to 100%. So gp = dPV[%] / dCO[%] = (54-50)/(45-40.5) = 4/4.5 = 0.89

Typical Controller Settings


October 10, 2010

If you design processes or control loops, you might have to come up with reasonable controller settings
before you have the chance of doing any tuning. If you are faced with this situation, you could use the
table of typical controller settings below to give you more appropriate starting values for a controller.
The table can also be used to validate tuning settings on problem control loops. Sometimes one setting
is so far off that simple trial-and-error tuning seems to make no difference. If you compare your
controller settings to those in this table, you might find the culprit setting.
Important:
Settings in this table are for information only. Process characteristics can vary widely from the typical
process, requiring greatly different controller settings. You should always tune a controller according to
the actual process dynamics (by doing step tests and applying the appropriate tuning rules)
before anyone places it in automatic control mode.
The controller settings are for controllers with the noninteractive algorithm, using controller gain for
proportional and minutes for integral and derivative. If your controller has a different algorithm or uses
different engineering units, you have to do the appropriate conversions.
If you have any questions or suggestions about the settings, please contact me.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Table of typical controller settings (Gain, Integral, Derivative, Filter, Sampling) for the following loop types: Flow and Liquid Pressure; Inline Temperature; Column or Reactor Temperature, Gas Pressure; Tight Level Control; Surge Tank Level; Composition.
Posted in 4. Controller Tuning

4 Responses to Typical Controller Settings

Ravi Mishra:
February 9, 2013 at 5:56 am
I have observed that when the steam temperature is below the set point (set point: 560 C, process value: 545 C), the temperature control valve (spray valve) remains closed, but when the process value starts to increase (by tilting up the burner or any other way), the temperature control PID starts to open the spray valve, which hinders the temperature rise.
I asked one of the DCS engineers about this; he said that the PID follows the trend of the process value: if it is in an increasing trend, then the temperature control PID will take action and generate an output.
Why is this happening while we still need to raise the temperature to the desired level? The PID should take action only if the temperature goes above the set point.
This is happening not only in the temperature loop but also in some other loops, like pressure control.
Can you please explain this phenomenon to me?

Jacques:
February 10, 2013 at 10:22 pm
Ravi,
Let's take a PI controller for simplicity:
CO = Kc (E + Int(E)/Ti), where CO is the controller output, E is the error (PV - SP), Kc is the controller gain, Int(E) is the integral of the error, and Ti is the integral time. To simplify it further, let's assume Kc = 1 and Ti = 1.
Then (Eq. 1): CO = E + Int(E).
Now if CO = 0, and E < 0, the Integral term will be blocked from further decrease to prevent
windup. So, a further simplification would be to make Int(E) constant.

You told me CO = 0, and E = 545 - 560 = -15. Substituting into Eq. 1, we find that Int(E) = 15.
So your integral term is sitting at +15, but it is balanced by the negative error of -15, so the
CO = 0.
Now, let's say the temperature increases to 550, then E = -10.
Now CO = -10 + 15 = 5.
And this is how your controller output can increase while you have the steam temp below
setpoint.
Think of it this way: When you approach a red traffic light with your car, you will begin to
decelerate before you get to the light. You won't wait until you have crossed over the light to
begin applying brakes. The PID controller does roughly the same thing. It begins taking action
before the error is 0.
- Jacques
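To make the arithmetic in the reply above easy to reproduce, here is a minimal Python sketch (an editorial illustration, not the DCS vendor's algorithm) of the simplified PI calculation with the integral frozen at the low output limit:

# A minimal sketch of the simplified PI calculation, with Kc = 1, Ti = 1 and the
# integral clamped while the output sits at its low limit (anti-windup).

def pi_output(pv, sp, integral, co_min=0.0, co_max=100.0):
    """One evaluation of CO = E + Int(E), where E = PV - SP."""
    e = pv - sp
    co = e + integral
    if co <= co_min:            # output pinned low: freeze the integral to prevent windup
        return co_min, integral
    return min(co, co_max), integral + e   # otherwise accumulate the error

integral = 15.0                  # balanced against E = -15 while CO sits at 0
print(pi_output(545, 560, integral))  # -> (0.0, 15.0): valve stays closed
print(pi_output(550, 560, integral))  # -> (5.0, ...): output rises although PV is still below SP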

Ravi:
February 22, 2014 at 6:04 am
Sir,
In the first case E is (-15), so according to Eq. 1:
CO = -15 + 15 = 0
In the second case, when PV increases, E becomes -10, but why is Int(E) still sitting at 15? Should it not be Int(-10)? Does that mean the integral action runs behind the proportional action, or responds only after some time (a lagging response)?
Sir, how can I stop the PID output (valve) from increasing in this case, where the PV is still lower than the SP and is trying to reach the SP (PV is increasing like 550, 551, 552 and SP = 560)? The PID should start taking action only when the PV goes beyond the SP (when PV > 560 and SP = 560).
I observed that when the operator or the control loop increases the burner tilt to raise the RH temperature (to meet the RH temperature set point), the RH spray control PID also starts to increase its output and opens the spray valve, which slows down the temperature rise. I have seen the same thing in some other control loops as well, like pressure and temperature control loops. I want the PID to wait and allow the PV to increase and meet the set point, and only if it continues to increase beyond the SP should the PID take action.
Please suggest a solution to eliminate this problem.

Jacques:
February 22, 2014 at 1:39 pm
A1: The proportional mode responds immediately on an error, the integral action occurs over
time. The slower your integral, the more pronounced the lag.
A2: It seems the problem is that your temperature controller does not know that the operator
has adjusted the burner tilt. You will need to bring the burner-tilt position into your temperature
control strategy.

When to Use which Tuning Rule


December 29, 2012

There are more than 400 tuning rules for PI and PID controllers [1]. How can one possibly choose the
best or most appropriate tuning rule from all of these? To simplify matters, the main differences between
the tuning rules can be grouped into four categories:
1. Type of process
2. Tuning objective
3. Process information required
4. Type of controller

Most of the tuning rules apply to first-order plus dead time (self-regulating) and integrator plus dead time
(integrating) process types. These two process types adequately cover the vast majority of control loops
in process plants. Other tuning rules apply to higher-order, oscillating, or unstable processes. Most of
the documented tuning rules apply only to processes with dominant time constants. This limits their
practical application. The Cohen-Coon tuning rules are an exception.
Tuning objectives include quarter-amplitude damping, minimization of some error integral, a specific
percentage overshoot, critically damped, robust tuning, and a specified closed-loop time constant. It is
rare to find a tuning rule with an adjustable tuning factor that allows you to change the speed of
response. The IMC / Lambda tuning rules are one exception.
The process information required for the tuning rules based on first-order plus dead time and integrator
plus dead time process types can be obtained by doing process step tests. A few tuning rules are based
on the ultimate cycling or relay tuning methods. Many of the academic tuning rules are based on high-order process models, but they never tell you how to obtain the process model; they just base the tuning
on some fictitious model chosen by the author, which largely makes them useless for practical
application.
Most tuning-rule authors developed tuning rules for both PI and PID controllers, but with no guidance
when to use which one. Some PID tuning rules apply to the interactive algorithm, while most apply to
the noninteractive algorithm. It is reasonably easy to convert from one type to the other.
To reduce all these complexities to something we can work with on most control loops, we can consider
two process types (self-regulating and integrating), and two tuning objectives (fast and slow or very
robust). And ideally we need an easy tuning factor to adjust the speed of response.
When to Use Which Tuning Rule
You could probably use any of the 400 tuning rules, as long as it applies to your situation. I have
successfully tuned most (but not all) control loops using just a few tuning rules. Here is what I
recommend for most loops:
For self-regulating processes, use the Cohen-Coon PI tuning rule with the following exceptions:
- Use a stability margin of two or more to improve robustness and adjust the speed of response.
- If td > 4 tau, use the tuning rule for dead-time-dominant processes.
- If you find it difficult to accurately measure the dead time, use the Lambda tuning rule.
- If you want the loop to have a specific speed of response, use the Lambda tuning rule.
- If you want the loop to absorb disturbances rather than pass them on to the next process, use the Lambda tuning rule with the closed-loop time constant set to three times the open-loop time constant.
- Use the derivative control mode (PID tuning rule) only when you need every last bit of speed, and then only when the process lends itself well to the use of derivative.
For integrating processes, use the Ziegler-Nichols tuning rule, except for surge tanks and level averaging, where you should use the two tuning rules named after these control objectives.

Self-Regulating Process
Integrating Process
If you use a PID tuning rule and an interactive controller algorithm, or a controller with the parallel algorithm, remember to convert the calculated tuning parameters to ones suitable for your controller algorithm. Also remember to measure your process characteristics in the same time-units your controller's integral uses. And remember to convert integral time to integral gain if that is what your controller uses. Finally, when tuning any control loop, watch out for control valve problems.

You can find much more information in my book Process Control for Practitioners.

Stay tuned!
Jacques Smuts

Principal Consultant OptiControls

Reference
1. O'Dwyer, A., A Summary of PI and PID Controller Tuning Rules for Processes with Time Delay. Part 1: PI Controller Tuning Rules, Proceedings of PID '00: IFAC Workshop on Digital Control, Terrassa, Spain, April 4-7, 2000, pp. 175-180.
Posted in 4. Controller Tuning, 9. Tips and Work-Process

Ziegler-Nichols Closed-Loop Tuning Method


March 31, 2010

J.G. Ziegler and N.B. Nichols published two tuning methods for PID controllers in 1942.
This article describes in detail how to apply one of the two methods, sometimes called the Ultimate
Cycling method. (The other one is called the process reaction-curve method.) I have seen many
cryptic versions of this procedure, but they leave a lot open for interpretation, and a practitioner may run
into difficulties using one of these abbreviated procedures.
Before we get started, here are a few very important notes:

Read the entire procedure before beginning.

This tuning method does not work for inherently unstable processes like temperature control of
exothermic reactions.

This procedure cannot be used if the Process Variable oscillates when the controller is in
Manual control mode. If the loop is already oscillating in Auto, make sure the cycling stops in
Manual.

If the controller drives a control valve or damper, and this device has dead band or
stiction problems, this tuning method should not be used; it will lead to inaccurate results and poor tuning at best.

Care should be taken to always keep the process in a safe operating region.

An experienced operator should oversee the entire test and must have the authority to
terminate the test at any time.

Keep note of the original controller settings and leave them with the operator in case he/she
needs to revert back to them later. Process conditions can change significantly, and your new
tuning settings might only work for the conditions at which the process tests were done.

The steps below apply to a controller with a Controller Gain setting. If your controller uses Proportional Band instead, make the reciprocal change to any Controller Gain adjustment. E.g. if the procedure calls for increasing the Controller Gain by 50% (multiplying it by 1.5), divide the Proportional Band by 1.5, etc.
To apply the Ziegler-Nichols Closed-Loop method for tuning controllers, follow these steps:
1. Stabilize the process. Make sure no process changes (e.g. product changes, grade changes, load changes) are scheduled.
2. If the loop is currently oscillating, make sure that the Process Variable stops oscillating when the controller is placed in Manual mode.
3. Remove Integral action from the controller.
o If your controller uses Integral Time (Minutes or Seconds per Repeat), set the Integral parameter to a very large number (e.g. 9999) to effectively turn it off.
o If your controller uses Integral Gain (Repeats per Minute or Repeats per Second), set the Integral parameter to zero.
4. Remove Derivative action by setting the Derivative parameter to zero.
5. Place the controller in Automatic control mode if it is in Manual mode.
6. Make a Set Point change and monitor the result.
7. If the Process Variable does not oscillate at all, double the Controller Gain.
8. If the Process Variable oscillates and the amplitude of the peaks decreases, increase the Controller Gain by 50% (or less if you are getting close to a constant amplitude).
9. If the Process Variable oscillates and the amplitude of the peaks increases, decrease the Controller Gain by 50% (or less if you are getting close to a constant amplitude).
10. If the Process Variable or Controller Output hits its upper or lower limits, decrease the Controller Gain by 50%. The Process Variable and Controller Output must oscillate freely for this method to work.
11. If the oscillations have died out, go to Step 6.
12. If the loop is oscillating, but not with a constant amplitude, repeat Steps 8, 9, and 10 until oscillations with a constant amplitude are obtained.
13. If the Process Variable is oscillating with constant amplitude, and neither the Process Variable nor the Controller Output hits its limits, do the following:
o Take note of the Ultimate Controller Gain (Ku). If your controller has Proportional Band, note down the Ultimate Band (PBu).
o Measure the period of the oscillation (tu). If your controller's Integral and Derivative units are in minutes, measure tu in minutes. If the controller uses seconds, measure tu in seconds.
14. Cut the Controller Gain in half to let the control loop stabilize while you do the calculations.
15. Calculate new controller settings using the equations below, enter them into the controller, and make a Set Point change to test them.

The Ziegler-Nichols Closed-Loop Tuning Method

The Ziegler-Nichols tuning rules were designed for a quarter-amplitude decay response. This results in a loop that overshoots its set point after a disturbance or set point change. The response in general is somewhat oscillatory, the loop is only marginally robust, and it can withstand only small changes in process conditions. I recommend using slightly different settings (also shown below) to obtain a robust loop with increased stability.

Rules for a PI Controller


The PI tuning rule can be used on controllers with interactive or noninteractive algorithms.

Controller Gain (Kc)

Ziegler-Nichols Rule: Kc = 0.45 Ku

For robust control use: Kc = 0.22 Ku

Proportional Band (PB)

Ziegler-Nichols Rule: PB = 2.2 PBu

For robust control use: PB = 4.4 PBu

Integral Time in Minutes per Repeat or Seconds per Repeat

Ziegler-Nichols Rule: Ti = 0.83 tu

For level control (integrating processes) use: Ti = 1.6 tu

Integral Gain in Repeats per Minute or Repeats per Second

Ziegler-Nichols Rule: Ki = 1.2 / tu

For level control (integrating processes) use: Ki = 0.6 / tu

Rules for a PID Controller


The PID tuning rule was designed for a controller with the Interactive algorithm. The tuning settings
should be converted for use on controllers with Noninteractive and Parallel algorithms.
Controller Gain (Kc)

Ziegler-Nichols Rule: Kc = 0.6 Ku

For robust control use: Kc = 0.3 Ku

Proportional Band (PB)

Ziegler-Nichols Rule: PB = 1.7 PBu

For robust control use: PB = 3.3 PBu

Integral Time in Minutes per Repeat or Seconds per Repeat

Ziegler-Nichols Rule: Ti = 0.5 tu

For level control (integrating processes) use: Ti = 1.0 tu

Integral Gain in Repeats per Minute or Repeats per Second

Ziegler-Nichols Rule: Ki = 2.0 / tu

For level control (integrating processes) use: Ki = 1.0 / tu

Derivative Time or Derivative Gain

Td or Kd = 0.125 x tu

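For convenience, here is a minimal Python sketch (my own illustration) of the PI and PID calculations above, using the robust settings by default; the example Ku and tu values are made up:

# A minimal sketch of the closed-loop (ultimate cycling) calculations, for a controller
# using controller gain and integral/derivative time.
# ku is the ultimate controller gain, tu the ultimate period (controller's time units).

def zn_closed_loop_pi(ku, tu, robust=True):
    """PI settings: Kc = 0.45 Ku (0.22 Ku for robust control), Ti = 0.83 tu."""
    kc = 0.22 * ku if robust else 0.45 * ku
    return kc, 0.83 * tu

def zn_closed_loop_pid(ku, tu, robust=True):
    """PID settings (interactive form): Kc = 0.6 Ku (0.3 Ku robust), Ti = 0.5 tu, Td = 0.125 tu."""
    kc = 0.3 * ku if robust else 0.6 * ku
    return kc, 0.5 * tu, 0.125 * tu

# Example: an ultimate gain of 4 and an ultimate period of 2.5 minutes
print(zn_closed_loop_pi(4.0, 2.5))
print(zn_closed_loop_pid(4.0, 2.5))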
Good luck, and if you have any questions, contact me.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 4. Controller Tuning

8 Responses to Ziegler-Nichols Closed-Loop Tuning Method

Davor:

July 25, 2013 at 2:53 am


What do you mean by "tuning settings should be converted" for noninteractive (ideal) algorithms? How do you convert?

Jacques:
July 25, 2013 at 10:09 am
For PI control, no conversion is needed.
For PID control, to convert from interactive controller parameters to noninteractive:
Set the controller gain to Kc x (Ti + Td) / Ti
Set the integral time to Ti + Td
Set the derivative time to Ti x Td / (Ti + Td).

Davor:
July 30, 2013 at 1:08 am
Can I use the same parameters for noninteractive and parallel PID controllers?

Jacques:
July 30, 2013 at 8:06 am
To convert from noninteractive controller parameters to parallel:
Set proportional gain (Kp) to Kc.
Set integral gain (Ki) to Kc/Ti, or for integral time (Ti) use Ti/Kc.
Set derivative gain (Kd) to Kc x Td.
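Here is a minimal Python sketch (my own illustration) of the two conversions given in these replies; the example settings are made up:

# A minimal sketch of the controller-form conversions.

def interactive_to_noninteractive(kc, ti, td):
    """Interactive (series) PID -> noninteractive (ideal) PID."""
    return kc * (ti + td) / ti, ti + td, ti * td / (ti + td)

def noninteractive_to_parallel(kc, ti, td):
    """Noninteractive PID -> parallel gains: Kp = Kc, Ki = Kc/Ti, Kd = Kc*Td."""
    return kc, kc / ti, kc * td

# Example: interactive settings Kc = 0.6, Ti = 2.0 min, Td = 0.5 min
kc, ti, td = interactive_to_noninteractive(0.6, 2.0, 0.5)
print(kc, ti, td)                         # -> (0.75, 2.5, 0.4)
print(noninteractive_to_parallel(kc, ti, td))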

Dreni:
September 22, 2013 at 2:55 pm
Hello Jacques,
Can you please help me understand how you define the ultimate controller gain Ku?
Thanks in Advance
Dreni

Jacques:
September 22, 2013 at 3:38 pm
Ku is the controller gain that gives you the ultimate cycle. You determine it experimentally
through trial and error as described above. If the cycle amplitude increases, reduce the
controller gain. If the amplitude decreases, increase the controller gain. If the amplitude
remains constant, then controller gain = Ku.

Harini:
March 25, 2014 at 4:23 am
If the manufacturer's DCS is designed such that the derivative term acts on PV and not on error, will there be any change in the rules for the PID parameter settings? If so, please provide the same for the Ziegler-Nichols and Cohen-Coon methods.

Thanks in advance,
Harini.

Jacques:
March 26, 2014 at 3:48 pm
Harini, you can use the same rules without modification.

Ziegler-Nichols Open-Loop Tuning Rules


May 18, 2011

J.G. Ziegler and N.B. Nichols published two tuning methods for PID controllers in 1942*.

The Ultimate Cycling method, and

The Process Reaction-Curve method, often called the Ziegler-Nichols Open-Loop tuning
method.

This article describes the second method. But before we jump right into the tuning method, here are a
few important things you should know first.

Quarter-Amplitude Damping
The Ziegler-Nichols tuning methods aim for a quarter-amplitude damping response. Although the quarter-amplitude damping type of tuning provides very fast rejection of disturbances, it makes the loop very oscillatory, often causing interactions with similarly-tuned loops. Quarter-amplitude-damping-type tuning also leaves the loop vulnerable to going unstable if the process gain or dead time increases. The easy fix for both problems is to reduce the controller gain by half. However, if the control objective for the loop you are tuning is to have a very stable, robust control loop that absorbs disturbances, use the Lambda tuning rules instead.

Designed for the Interactive Controller Algorithm


There are three types of PID controller algorithms: Interactive, Noninteractive, and Parallel. The Ziegler-Nichols tuning rules were designed for controllers with the interactive controller algorithm. If you are not using the derivative control mode (i.e. using P or PI control), the rules will also work for the noninteractive algorithm. However, if you plan to use derivative (i.e. PID control) and have a noninteractive controller, or if your controller has a parallel algorithm, you should convert the calculated tuning settings to work on your controller.

Use of Integral Time


The original Ziegler-Nichols tuning rules were designed for controllers using reset rate (integral gain in
repeats per minute) and not integral time (in minutes or seconds). However, virtually all the modern texts
on process control use integral time. This article follows that trend and uses integral time. If your
controller uses integral gain or reset rate, you'll have to invert the calculated integral time (use 1/Ti).

Limited Range of Process Dynamics


The Ziegler-Nichols tuning rules work well on processes where the time constant is at least twice as long as the dead time, for example temperature and gas pressure loops. They work moderately poorly on flow loops and liquid pressure loops, where the dead time and time constant are about equal in length.

And they work very poorly on dead-time dominant processes. The Cohen-Coon tuning rules work
better on a wider range of processes.

Slight Modification for Self-Regulating Processes


The tuning method described below is actually a widely-used modification of the published Ziegler-Nichols Process Reaction Curve method. The reaction-curve method was designed for use on integrating and self-regulating processes. The modified method works only on self-regulating processes, but then more accurately so. The modified method is so popular that few people know about the original reaction-curve method. I will describe the reaction-curve method in a future post, because it works very well for integrating processes.

Tuning Procedure
Assuming the control loop is linear and the final control element is in good working order, you can
continue with tuning the controller. The Ziegler-Nichols open-loop tuning rules use three process
characteristics: process gain, dead time, and time constant. These are determined by doing a step test
and analyzing the results.

Step Test for Tuning

1. Place the controller in manual and wait for the process to settle out.
2. Make a step change of a few percent in the controller output (CO) and wait for the process variable (PV) to settle out at a new value. The size of this step should be large enough that the process variable moves well clear of the process noise/disturbance level. A total movement of five times the noise/disturbances on the process variable should be sufficient.
3. Convert the total change obtained in PV to a percentage of the span of the measuring device.
4. Calculate the process gain (gp) as follows:
o gp = change in PV [in %] / change in CO [in %]
5. Find the maximum slope on the PV response curve. This will be at the inflection point (where the PV stops curving upward and begins curving downward). Draw a line tangential to the PV response curve through the point of inflection. Extend this line to intersect with the original level of the PV (before the step change in CO). Take note of the time value at this intersection.
6. Measure the dead time (td) as follows:
o td = time difference between the step change in CO and the intersection described above.
7. Calculate the value of the PV at 63% (0.63) of its total change. On the PV reaction curve, find the time value at which the PV reaches this level.
8. Measure the time constant (Greek symbol tau) as follows:
o tau = time difference between the intersection at the end of the dead time, and the PV reaching 63% of its total change.
9. Convert your measurements of dead time and time constant to the same time-units your controller's integral mode uses. E.g. if your controller's integral time is in minutes, use minutes for these measurements.
10. Do two or three more step tests and calculate process gain, dead time, and time constant for each test to obtain a good average of the process characteristics. If you get vastly different numbers every time, do even more step tests until you have a few step tests that produce similar values. Use the average of those values.
11. Calculate settings for Controller Gain (Kc), Integral Time (Ti), and Derivative Time (Td), using the Ziegler-Nichols tuning rules below. Note that these rules produce a quarter-amplitude damping response and the calculated controller gain values should be divided by two.
12. Apply the tuning rules (a calculation sketch follows the reference at the end of this article):
o For P control: Kc = tau / (gp * td)
o For PI control: Kc = 0.9 * tau / (gp * td); Ti = 3.33 * td
o For PID control: Kc = 1.2 * tau / (gp * td); Ti = 2 * td; Td = 0.5 * td
IMPORTANT: If you have not already done so, divide the calculated controller gain (Kc) by two to reduce overshoot and improve stability.
13. Compare the newly calculated controller settings with the ones in the controller, and ensure that any large differences in numbers are expected and justifiable.
14. Make note of the previous controller settings, the new settings, and the date and time of the change.
15. Implement and test the new controller settings. Ensure the response is in line with the overall control objective of the loop.
16. Leave the previous controller settings with the operator in case he/she wants to revert back to them and cannot find you to do it. If the new settings don't work, you have probably missed something in one or more of the previous steps.
17. Monitor the controller's performance periodically for a few days after tuning to verify improved operation under different process conditions.

*J.G. Ziegler and N.B. Nichols, Optimum settings for automatic controllers. Transactions of the ASME, 64, pp. 759-768, 1942.
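As promised in Step 12, here is a minimal Python sketch (my own illustration) of the tuning-rule calculations from the procedure above, including the recommended halving of the controller gain; the example gp, td, and tau values are made up:

# A minimal sketch of the open-loop (step test) calculations.
# gp, td and tau come from the step-test analysis (same time units as the integral time).

def zn_open_loop(gp, td, tau, mode="PI", halve_gain=True):
    """Return (Kc, Ti, Td) from the modified Ziegler-Nichols open-loop rules."""
    if mode == "P":
        kc, ti, td_out = tau / (gp * td), None, None
    elif mode == "PI":
        kc, ti, td_out = 0.9 * tau / (gp * td), 3.33 * td, None
    else:  # PID
        kc, ti, td_out = 1.2 * tau / (gp * td), 2.0 * td, 0.5 * td
    if halve_gain:
        kc /= 2.0   # reduce overshoot and improve stability
    return kc, ti, td_out

# Example: gp = 1.5, td = 0.5 min, tau = 3 min
print(zn_open_loop(1.5, 0.5, 3.0, mode="PI"))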

5. Control Valves
o Butterfly Valves and Control Performance
o Control Valve Linearization
o Control Valve Problems
o Equal Percentage Control Valves and Applications
o Valve Diagnostics on a Level Loop

Butterfly Valves and Control Performance


February 23, 2012

Because butterfly valves cost less than real control valves like globe valves or characterized ball
valves, they are sometimes used in place of control valves to save money. This decision is often costly
in the long term because of the poor control performance resulting from butterfly valves.

Late last year I optimized several control loops at a mid-sized manufacturer of specialty
chemicals. Similar to most plants I have worked at, I found a number of control loops that were
oscillating. Many of them oscillated because of valve stiction, incorrect controller settings, or process
interactions. One of the loops, a distillation column level control loop, oscillated as a result of using a
butterfly valve as the final control element.

Figure 1. Oscillating level control loop.

To perform well, a PID control loop needs (among other things) the process gain to remain constant. In other words, the process variable must change linearly with changes in controller output. A small degree of nonlinearity can be tolerated, especially if we apply robust tuning methods, but if the process gain changes by more than a factor of 2, we can expect control problems. And this is why a butterfly valve makes a poor choice for a control valve: it has a highly nonlinear, S-shaped flow curve, as shown in Figure 2.

Figure 2. Typical butterfly valve flow characteristic.

Figure 3 shows how the gain of a typical butterfly valve changes from less than 0.2 to almost 3 over the
span of the controller output. The process gain varies by a factor of 15! This large variation in process
gain makes it impossible to have consistently good control at all valve positions.

Figure 3. Typical butterfly valve gain.

At the chemical company the butterfly valve was used to control the bottom level of a distillation column. The distillation column was the last one in a train of three columns, each of which had a progressively smaller diameter. Moderate increases in feed rate to the first column easily caused high-level alarms when they propagated to the small final column. The level controller originally seemed to be responding too slowly to handle these upsets, so the loop tuner increased the controller gain to achieve fast response at high flow rates. However, at normal flow rates, where the process gain was 15 times higher, the loop was unstable and oscillated continuously as shown in Figure 1.
The correct solution to this problem would have been to replace the butterfly valve with a control valve
that has a linear flow characteristic and then retune the control loop. However, this could only be done
during the plant's annual maintenance shutdown. In the meantime we installed a characterizer to linearize the butterfly valve (Figure 4). The characterizer compensated for the butterfly valve's
nonlinearity and made the flow through the valve follow the controller output in a reasonably linear
fashion.

Figure 4. Level control loop with characterizer.

With the characterizer in place we retuned the controller. After this the oscillations stopped and the loop
performed much better than it did before. However, the control performance was still not as good as
what a linear control valve would have provided. The real solution to the problem remained replacing the
butterfly valve with a control valve, but this had to wait for the next maintenance shutdown.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 5. Control Valves, 6. Loop Performance, Problems, and Diagnostics, 7. Control Strategies, 8. Case Studies

3 Responses to Butterfly Valves and Control Performance

mohamed elsadig:
May 18, 2013 at 11:11 pm
Dear Jacques,
Would you please explain whether the characterizer is implemented in software (a function block) or in hardware?
Thank you,
mohamed

Jacques:
May 19, 2013 at 7:59 am
Mohamed, in this case we implemented the characterizer in the DCS using a function block.
You could also do the characterization in the valve positioner if the positioner supports it (most

digital valve positioners do). My preference is to do it in the control system because if the
positioner is replaced, the new positioner might be put in service without the characterizer.

H.B.R:
May 27, 2014 at 2:16 am
Dear Mr. Jacques,
I'm an I&C engineer working for an EPC contractor. At the site where I worked last year, there was exactly the same problem as described in Figure 4 of this article.
The process was distillate water level and it was controlled by a butterfly-type control valve. The process value hunted like in Figure 1, but with a bigger magnitude. I think the problem was caused by the actuator or positioner. When the PID output sent a demand signal to the valve, the positioner feedback value followed the demand signal only after 1~2 seconds (i.e. there was a dead time in the actuator). I couldn't determine whether it was caused by a sticky actuator or by a positioner problem because the plant was in commercial operation. (Now I'm working at the head office and that problem still remains at the site.)
Can you imagine the state of the process? It oscillated with a really big magnitude (bigger than in Figure 1).
The only action I took during commissioning was adjusting the PID values, due to the tight commissioning schedule. After that, the oscillation magnitude became smaller and the symptom of alarming every minute disappeared, but a big oscillation still remains.
Thank you for your helpful information.

Control Valve Linearization


November 26, 2011

A control valve's flow characteristic is an X-Y curve that maps the percentage of flow you'll get for any
given valve opening (Figure 1). The design characteristic (also called inherent flow characteristic) of a
valve assumes a constant pressure differential across the valve. More relevant to us is the installed
characteristic, which is the way the valve operates in the real process. The installed characteristic of a
valve can be determined by plotting the measured flow rate at different valve openings. You can do tests
on the live process to get this data, or you can get it from the process historian (make sure you use
steady-state data).

Figure 1 - A Nonlinear Flow Characteristic

The installed flow characteristic of a control valve directly affects the process gain. It is essential that the installed characteristic is linear (i.e., the plot above is a straight line) so that the process gain is constant, regardless of the controller output. If the gradient of the curve varies by more than a factor of two, control loop performance will be noticeably affected. If nothing is done to linearize the valve, the controller will have to be detuned to accommodate the maximum process gain. This leads to sluggish control loop response over much of the valve's operating range.
A nonlinear flow characteristic should be linearized to obtain good control performance throughout the valve's operating range. This is done with a linearizer (also called a characterizer). The linearizer is a control block, function generator, f(x) curve, or a lookup table, placed between the controller and the
valve (Figure 2). Although the linearization can be done in a digital positioner, the DCS/PLC is the best
location for it. This allows replacement of the positioner without having to reprogram the linearization
curve in the new positioner.

Figure 2 - Linearizing a Nonlinear Valve Characteristic

Linearization is done with an X-Y curve or function generator that is configured to represent the
reciprocal (inverse) of the control element's flow curve (Figure 3).

Figure 3 - How a Linearizer Works

To design the linearizer, you have to first determine the flow characteristic curve of the valve operating in
the actual process. For this you should take readings of the flow or process variable (PV) and controller
output (CO) under steady-state conditions at various controller output levels. You need a minimum of
three (PV, CO) data pairs for this, but four or five would be better for characterizing a nonlinear
relationship.
Make sure you span the entire operating range of the controller output, and try to obtain readings
spaced equally across the controller output span. You can do process tests to obtain these values, or
examine data from your process historian. Then convert the process variable data from engineering
units to a percentage of full scale of the measurement.
Sort the data pairs in ascending order, and enter them into a function generator. The PV readings in percent become the X values (input side) and the CO readings become the Y values (output side). Include a (0, 0) point if you don't already have one in your dataset, and be sure to also estimate a (100, Y) point if you don't have one. Also, if your valve opens as the CO decreases, your Y column will obviously have to reflect this.
For example, say you get the following (PV, CO) pairs from historical data: (120, 22); (280, 39); (530, 63). The PV is ranged 0 to 1000 kg/hr. You plot the data and estimate that 1000 kg/hr will occur at about 85% controller output. The characterizer will then look like this:

X values (flow as % of range): 0, 12, 28, 53, 100
Y values (controller output, %): 0, 22, 39, 63, 85
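As a rough illustration only (this is just a sketch of the same lookup-and-interpolate idea, not the actual DCS function block), the example characterizer could be written out like this; a DCS function generator block typically performs the same piecewise-linear interpolation between its configured X-Y pairs:

# Piecewise-linear characterizer (linearizer) built from the example data above.
# X = flow wanted, as % of the 0-1000 kg/hr range; Y = controller output (%).
# The (0, 0) and (100, 85) end points are the estimates described in the text.
X = [0.0, 12.0, 28.0, 53.0, 100.0]
Y = [0.0, 22.0, 39.0, 63.0, 85.0]

def characterize(co_from_pid):
    """Map the PID output (treated as 'percent of full flow wanted') to the valve signal."""
    if co_from_pid <= X[0]:
        return Y[0]
    if co_from_pid >= X[-1]:
        return Y[-1]
    for i in range(1, len(X)):
        if co_from_pid <= X[i]:
            frac = (co_from_pid - X[i - 1]) / (X[i] - X[i - 1])
            return Y[i - 1] + frac * (Y[i] - Y[i - 1])

print(characterize(50.0))   # about 60.1% valve signal for 50% of full-scale flow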

After implementing a linearizer in the DCS or PLC, you can test its accuracy by checking whether the controller output and the flow measurement sit at roughly the same percentage of full scale. For example, 20% and 50% controller output should result in roughly 20% and 50% flow. You should retune the controller after implementing the linearizer, because the linearizer has likely changed the process gain.
Although this discussion mentioned only control valves, the same applies to other final control elements,
like vanes, dampers, feeders, etc.

Stay tuned!

Jacques Smuts
Author of the book Process Control for Practitioners

Posted in 5. Control Valves, 7. Control Strategies

3 Responses to Control Valve Linearization

Jack:
April 29, 2012 at 1:58 pm
We currently have a non-linear characterization on one of our boiler air dampers. I didn't quite understand the purpose of the characterization until I read this article. It was a good thing I did too, because I almost removed the characterization. Thanks.

siby:
March 8, 2013 at 7:01 am
I have read about some situations where the control valves are deliberately chosen to have
non-linear behavior like an equi-percentage characteristic because the process it is controlling
is also non-linear. Will introducing a linearizer then adversely affect the loop performance?

Jacques:
March 8, 2013 at 7:19 am
Siby, if the nonlinear flow characteristic of the control valve cancels out the nonlinear
characteristic of the process, the combination of the two should be linear and no
characterization is required. For example, the steam flow control valve to a heat exchanger is
likely better being equal percentage than linear.

Control Valve Problems


February 22, 2010

Control valve

Control valve problems can severely affect control loop performance and, unless eliminated, they can
make controller tuning a challenging (sometimes impossible) task. Some problems are quite obvious to
the trained eye and can easily be detected by loop performance assessment software. Others can be
more difficult to detect without running specific tests. When doing on-site services, I always make sure
to watch out and/or test for valve problems.
Five problems with control valves are found at a high frequency in poorly performing control loops.
These are:
- Dead band
- Stiction
- Positioner overshoot
- Incorrect valve sizing
- Nonlinear flow characteristic
Let's take a closer look at each of these problems.
Dead band
A valve with dead band acts like there is some backlash between the controller output and the actual
valve position. Every time the controller output changes direction, the dead band has to be traversed
before the valve physically starts moving. Although dead band may be caused by mechanical backlash
(looseness or play in mechanical linkages), it can also be caused by excessive friction in the valve, an
undersized actuator, or a defective positioner.
Many people use the term "hysteresis" instead of dead band (I used to be one of them). But the ISA and Wikipedia define hysteresis as something else. The ISA clearly calls the mechanical backlash phenomenon in control valves "dead band".

A control valve with dead band will cause oscillations in a level loop under PI or PID control if the
controller directly drives the control valve (non-cascade). A control valve with dead band can also cause
oscillations after a set point change in control loops on self-regulating processes especially if the
integral action of the controller is a little excessive.
Stiction
Another very common problem found in control loops is stiction. This is short for Static Friction, and
means that the valve internals are sticky.

If a valve with stiction stops moving, it tends to stick in that position. Then additional force is required to
overcome the stiction. The controller continues to change its output while the valve continues to stick in
position. Additional pressure mounts in the actuator. If enough pressure builds up to overcome the static
friction, the valve breaks free. The valve movement quickly absorbs the excess in pressure, and often
the valve overshoots its target position. After this, the valve movement stops and the valve sticks in the
new position.

Frequently, this overshoot in valve position causes the process to overshoot its set point. Then the valve
sticks at the new position, the controller output reverses direction and the whole process repeats in the
opposite direction. This causes an oscillation, called a stick-slip cycle. If loop oscillations are caused by
stiction, the controller output's cycle often resembles a saw-tooth wave, while the process variable may
look like a square wave or an irregular sine wave.

Stiction might be caused by an over-tight valve stem seal, by sticky valve internals, by an undersized
actuator, or a sticky positioner.
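To make the stick-slip mechanism concrete, here is a deliberately simplified simulation sketch. The stiction model and all the numbers below are invented for illustration (this is not data from a real valve): the valve holds its last position until the controller output has moved more than the stiction band away from it, and only then slips to the new output.

# Idealized stick-slip illustration: PI controller, a sticky valve, and a made-up
# first-order process. All numbers are arbitrary; the point is the qualitative
# behavior (a ramping, saw-tooth-like controller output and a cycling process variable).
dt = 1.0                         # seconds per simulation step
stiction = 2.0                   # stiction band, % of valve travel (assumed)
kc, ti = 1.0, 20.0               # PI tuning (arbitrary)
kp, tau = 1.0, 30.0              # process gain and time constant (arbitrary)
sp, pv, valve, integral = 50.0, 40.0, 40.0, 0.0

for step in range(601):
    error = sp - pv
    integral += error * dt / ti
    co = kc * (error + integral) + 40.0          # position-form PI around a 40% bias
    if abs(co - valve) > stiction:               # the valve sticks until the output has
        valve = co                               # moved far enough, then it slips
    pv += dt / tau * (kp * valve - pv)           # first-order process response
    if step % 60 == 0:
        print(f"t={step:4d}s  CO={co:6.2f}%  valve={valve:6.2f}%  PV={pv:6.2f}%")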
Positioner Overshoot
One control valve problem that is more common now than a decade ago, is that of positioner overshoot.
Positioners are fast feedback controllers that measure the valve stem position and manipulate the valve
actuator until the desired valve position is achieved. Most positioners can be tuned. Some are tuned too
aggressively for the valve they are controlling. This causes the valve to overshoot its target position after
a change in controller output. Sometimes the positioner is simply defective in a way that causes
overshoot. If the process controller is also tuned aggressively, the combination with positioner overshoot
can cause severe oscillations in the control loop.

Valve Sizing
The fourth common problem with control valves is oversized valves. Valves should be sized so that full
flow is obtained at about 70%-90% of travel, depending on the valve characteristic curve and the service

conditions. In most cases, however, control valves are sized too large for the flow rates they need to
control. This leads to the valve operating at small openings even at full flow conditions. A small change in valve position then has a large effect on flow. This leads to poor control performance because any valve
positioning errors, like stiction and dead band, are greatly amplified by the oversized valve.
Nonlinearity
A valve with a nonlinear flow characteristic can also lead to tuning problems. A control valve's flow
characteristic is the relationship between the valve position and the flow rate through the valve under
normal service conditions. Ideally the flow characteristic should be linear. With a nonlinear
characteristic, one can have optimal controller response only at one operating point. The loop could
become quite unstable or sluggish as the valve position moves away from this operating point.

Conclusion
Before attempting to tune a control loop, check the valve for dead band, stiction and nonlinearity and
have all problems attended to. This could save hours of effort tuning a loop in which the control valve is
actually the item needing attention.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 5. Control Valves, 6. Loop Performance, Problems, and Diagnostics

2 Responses to Control Valve Problems

mandar:
July 30, 2011 at 9:47 am
Hi, what will happen if we use 4 solenoid valves, 4 separate regulators with ranges 0.2, 0.4, 0.6, 0.8 and 1 kg, and 4 controller inputs instead of 1 positioner to change the position of the valves?

Jacques:
July 30, 2011 at 10:40 am
With the solenoid setup you describe, the system will come close to set point, but at minimum
the 0.2 kg solenoid will keep on opening and closing, causing an oscillation around the set
point.
- If this is a problem, you can implement a dead band (slightly larger than 0.2 kg) in the
control logic. The system may not be on set point, but it will not oscillate.
- If you need to run the system at set point with no oscillations, you should use a control valve with an actuator (not a solenoid) so the controller can change the valve position by the tiny fractions required to keep the process variable at set point.
Good luck!
Jacques

Equal Percentage Control Valves and Applications


April 26, 2013

Far too often, equal percentage control valves are found in applications where linear control valves
should have been used. This article explains equal percentage control valves and sets guidelines for
their use.
What is an Equal Percentage Control Valve?
The relationship between valve stem position and the flow rate through a control valve is described by a
curve called the valve's flow characteristic curve, or simply the valve characteristic. An equal percentage
flow characteristic is a nonlinear curve of which the slope increases as the valve opens, while a linear
flow characteristic is a straight line (Figure 1).

Figure 1. Equal percentage and linear flow characteristics.

Control valves manipulate the rate of liquid/gas flow through them by altering the open area through
which the liquid/gas passes. Linear valves increase the open area linearly with valve travel, while equal
percentage valves open progressively more area with valve travel (Figure 2).

Figure 2. Port shapes of linear and equal percentage valves.

Why do we need Equal Percentage Valves?


PID controllers are linear devices, and for optimal performance the process should behave linearly too. That is, if the controller output changes from 10% to 20%, the process should respond just as much as it would if the controller output changes from 80% to 90%. From this requirement, it seems that linear control valves should be sufficient.
However, up to now we have been talking about the inherent/design flow characteristic of control valves.
This is the flow characteristic that a valve exhibits if the pressure difference across it remains constant
throughout its operating range. But in practice this is often not the case. The pressure difference across
a valve is often a function of flow, and it changes with valve position. Consequently, the inherent flow
characteristic is often distorted by the process and we refer to the resulting curve as the installed valve
characteristic.
So we have to refine our linearity requirement to reflect the installed valve characteristic. Sometimes we
need to use a control valve with an equal percentage inherent characteristic to obtain a linear installed
characteristic. Two distinctly different scenarios follow.
Scenario 1a
Consider a centrifugal pump for providing pressure, and a control valve for controlling the flow (Figure 3). As the pump delivers more flow, its capability for generating pressure decreases. Therefore the pressure differential across the control valve is high at low flow rates, and low at high flow rates. An equal percentage valve can offset this change in differential pressure to exhibit a more linear installed characteristic.

Figure 3. Simple flow control loop with centrifugal pump.

Scenario 1b
However, we can't just assume that because we have a centrifugal pump, we need an equal percentage
valve. If the system pressure (backpressure) downstream of the valve remains high, for example when
pumping into a pressurized system, the pump will likely stay high on its curve, and the pressure across
the control valve will not change appreciably. In this case a linear valve might be a better choice.
If we consider the pressure differential across the valve versus flow, we can make the right choice in
Scenarios 1a and 1b. If the pressure differential remains reasonably constant, a linear valve is required
(but please read Scenario 2 below). If the pressure differential drops by more than 50%, equal
percentage can provide better linearity. To remove the guesswork, use valve-sizing software. The
software should allow you to specify a few pressure-differential versus flow points and based on that,
it will recommend the best valve for the application.
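To put rough numbers on Scenarios 1a and 1b (a hedged sketch with invented values, not a substitute for the valve-sizing software mentioned above): flow through a valve is roughly proportional to Cv(x) times the square root of the pressure drop, an ideal equal percentage trim has Cv(x) = Cv_max * R^(x-1), and the pressure drop left for the valve shrinks as line and pump losses grow with flow. Combining these shows how the installed curve of an equal percentage valve straightens out when the differential pressure collapses at higher flows:

import math

# Sketch of an installed characteristic: an equal percentage trim combined with a
# pressure drop that falls as flow rises. All numbers below are invented.
R = 50.0          # assumed rangeability of the equal percentage trim
CV_MAX = 100.0    # full-open Cv (arbitrary units)
DP_TOTAL = 2.0    # total available head, bar (assumed constant)
K_LINE = 9.0e-4   # assumed line/pump loss coefficient: dp_line = K_LINE * q**2
                  # (chosen so the valve keeps only ~10% of the head at full flow)

def inherent_cv(x):
    """Ideal equal percentage characteristic; x = fractional opening, 0..1.
    (It does not reach zero at x = 0; real valves shut off near the seat.)"""
    return CV_MAX * R ** (x - 1.0)

def installed_flow(x):
    """Flow where q = Cv*sqrt(dp_valve) and dp_valve = DP_TOTAL - K_LINE*q**2."""
    cv = inherent_cv(x)
    return cv * math.sqrt(DP_TOTAL / (1.0 + K_LINE * cv ** 2))   # closed-form solution

q_full = installed_flow(1.0)
for pct in range(0, 101, 20):
    x = pct / 100.0
    print(f"open {pct:3d}%   inherent {inherent_cv(x) / CV_MAX * 100:5.1f}%"
          f"   installed {installed_flow(x) / q_full * 100:5.1f}%")

With these assumed numbers the inherent curve reaches only about 46% of maximum flow at 80% opening, while the installed curve is already near 85%, i.e. much closer to the straight line a PID controller wants to see.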
Scenario 2
Let's consider a steam-condensing heat exchanger (Figure 4). The pressure upstream of the valve is
kept constant by the boiler and steam pressure controller. The pressure downstream of the valve is

determined by the condensate temperature, which is roughly equal to the outlet temperature, which is
controlled to a constant setpoint.

Figure 4. Steam-condensing heat exchanger.

In other words, the pressure differential across the steam control valve remains relatively constant, regardless of the flow. Should we then use a linear valve? Well, we should actually use ratio control, in which we control the steam flow rate as a ratio of the process flow rate, and use a linear valve, but that is another story. Most heat exchanger control designs are as simple as the one shown in Figure 4.
Even though the constant differential pressure across the valve calls for a linear control valve, this
process calls for an equal percentage valve. At low process flow rates, the outlet temperature is very
sensitive to changes in steam flow. At high process flow rates, the steam flow must be changed much
more to affect the heater outlet temperature to the same degree. This can be accomplished by using an
equal percentage control valve. At small valve openings, the valve sensitivity is very low, which cancels
the high sensitivity of the process. The valve sensitivity increases as the valve opens more, which is exactly what is required, because the sensitivity of our heat exchanger decreases with increasing process flow rates.
Conclusion
An equal percentage control valve should be used when the pressure differential across the
valve decreases with increases in flow rate. Valve sizing software should be able to find the right valve
characteristic for the job. Also, equal percentage control valves should be used in control loops of which
the process gain decreases with increases in flow rate. If none of these conditions apply, the loop is
likely better off with a linear control valve.
Stay tuned!
Jacques Smuts
Principal consultant of OptiControls, and author of Process Control for Practitioners.
Posted in 5. Control Valves

Valve Diagnostics on a Level Loop


April 10, 2012

Determining the condition of the control valve on a level loop can be challenging, but it is an important
aspect of successful tuning. Control loop performance can be greatly affected by control valve dead band, stiction, and a nonlinear flow curve, and this is no different on a level control loop. If the level
controller directly drives the control valve, both dead band and stiction will cause a level control loop to
oscillate continuously.

A level control loop oscillating because of control valve dead band.

Doing valve diagnostic tests is easy on a self-regulating loop, but not so on an integrating loop, like
liquid level. On a level loop the process variable itself does not in any way reflect the actual control valve
position (or the flow into/out of the vessel for that matter). So how then does one determine the
condition of a control valve in a level loop?
The answer is to analyze the rate of change of the level at different control valve positions. For example,
if you want to check for control valve dead band, you would put the controller in manual, make two
controller output steps (typically 5% in size) in one direction, and make another step in the opposite
direction, just like you would on a flow control loop. However, instead of using the level
measurement directly, you would analyze the rate of change of the measurement.

A dead-band test on a level loop. Normally the controller output steps would be equal in size, but very often
smaller steps are required to keep the level within limits.

This takes some planning because the level always needs to be kept within a safe operating range and
you need to allow enough time between controller output steps to obtain steady ramps in level that are
long enough that you can take measurements from them. I often have to adapt my test plan to keep the
process safe but still obtain the data I need for analysis. Don't worry if your steps are not all the same size; the calculation below will compensate for this.
Once you have collected your test data, return command of the controller back to the operator to be
placed in auto and/or monitored. Import the controller output (CO) and process variable (PV) data into
Microsoft Excel or your spreadsheet of choice.
Add a new column of calculations that takes the difference (dPV) between two successive PV samples, e.g. C2 = B3 - B2, assuming your PV data is in Column B and the dPV calculations are in Column C.
You will likely find that the new dPV data is very noisy. In that case you should include an averaging filter
in your calculation like this:

Calculating a filtered rate of change.

Once you're done you can plot the data and take measurements from the plot or the data. Use the
average of the dPV values as shown below.

Analyzing dead band on a level loop.

You can then calculate the dead band in the valve, as a percentage of its full travel (0 to 100% open):
% Valve Dead Band = (CO3 - CO2) - (dPV3 - dPV2) / (dPV2 - dPV1) x (CO2 - CO1)
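The same arithmetic can of course be done in a spreadsheet or in a few lines of code; the sketch below uses hypothetical test numbers and restates the formula above as a magnitude, together with the averaging filter for dPV:

# Sketch of the dead-band calculation described above, with invented test numbers.
def pv_to_dpv(pv_samples, n=10):
    """Difference between successive PV samples, smoothed with an n-sample average
    (this produces the filtered dPV column mentioned above)."""
    dpv = [b - a for a, b in zip(pv_samples, pv_samples[1:])]
    return [sum(dpv[max(0, i - n + 1):i + 1]) / len(dpv[max(0, i - n + 1):i + 1])
            for i in range(len(dpv))]

# Steady ramp rates (filtered dPV, % per sample) read off at the three CO values,
# where CO1 -> CO2 was the second step in the original direction and CO2 -> CO3
# was the reversal. All values are hypothetical.
co1, co2, co3 = 40.0, 45.0, 40.0
dpv1, dpv2, dpv3 = 0.10, 0.35, 0.20

gain = (dpv2 - dpv1) / (co2 - co1)                    # ramp-rate change per % of CO
dead_band = abs((co3 - co2) - (dpv3 - dpv2) / gain)   # the formula above, as a magnitude
print(f"Estimated valve dead band: {dead_band:.1f}% of travel")   # 2.0% with these numbers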

A stiction analysis can be done in the same way, but you will make 5 to 10 small changes in controller
output (typically 0.5% in size). Remember to leave enough time between successive steps to obtain a
steady gradient. Make sure that you take up any dead band first by preceding the small steps with one
large dead-band-eliminating step in the same direction as you are planning for the small stiction steps.
Since a level loop will oscillate if its output drives a valve having dead band or stiction, you want these to be as low as possible. Installing a flow controller as an inner loop to the level controller in a cascade control arrangement will go a long way toward reducing the effects of valve issues on a level control loop.
Stay Tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 5. Control Valves, 6. Loop Performance, Problems, and Diagnostics

6. Loop Performance, Problems, and Diagnostics

An Oscillating Level Control Loop
Butterfly Valves and Control Performance
Caster Level Control Improvement
Control Loop Performance Monitoring
Control Valve Problems
Diagnosing and Solving Control Problems
Q&A on Loop Performance
Valve Diagnostics on a Level Loop

An Oscillating Level Control Loop


August 3, 2011

When I do onsite control loop optimization services I often see level controllers oscillating. Most often
they oscillate because of one or more of the following reasons:

The control valve has a dead band. (Yes, level loops with dead band oscillate continuously if
you are using a PI or PID controller.)

The control valve has stiction.

The integral time is set too short for the amount of controller gain being used.

However, these are not the only problems and I have often been amazed at the actual cause of
oscillations.
So to keep me from guessing, I systematically analyze a loop for problems before I tune the controller. I
always try to follow the same basic sequence of tests, and then delve deeper into any problems I notice.
The sequence of tests:
1. See how the loop performs in automatic control under normal operating conditions.
2. Do a set point change (this is very helpful for various reasons that I'll write about in future).
3. Place the controller in manual.
4. Do various valve performance tests (these can be quite challenging on a level loop).
5. Assuming no insurmountable problems were found, do step-tests for tuning (if you don't have enough data already).
6. Tune the controller and repeat steps 2 and 1 to ensure the loop meets its performance objectives.

Some (non-readers of this blog) may try to address all control problems with tuning. But the simple steps listed above have served me very well over the years, and I often smile to think that someone could be wasting hours on fruitless tuning when a loop really has other problems.

Case in Point
A few weeks ago I was optimizing the control performance of loops on an oil platform in the Gulf of Mexico. Quite early in the project I got to a level control problem on one of their separator vessels. Step 1 of my test sequence revealed that the oil level control loop was oscillating. The period of the oscillations was slightly shorter than one minute.

Level loop oscillating.

Going on to Step 2, we made a set point change. I noticed the loop actually performed very well on the change in set point (ignoring the oscillation). From that I concluded that the problem was not the controller's tuning.
I also noticed that the process variable took about two minutes to cross over its set point for the first
time and about six minutes to settle out at set point. This meant the response time of the loop was far
slower than the period of its oscillations. It would be impossible for stiction or dead band to cause the
loop to oscillate with a one-minute period if it takes the loop so much longer to reach set point. Although
I would later test for stiction and dead band, I basically ruled them out as causes of the oscillation.

Loop performed well on a set point change.

Step 3 calls for placing the controller in manual control mode. This provides a good test to see if the
oscillations are caused by something in the control loop. We placed the loop in manual, and the
oscillations continued. At this point I concluded that the oscillations were not caused by controller tuning,
stiction, or dead band.

Oscillations continued with controller in manual.

So what could it be? A quick inspection of the level control valve indicated that the valve was rock-solid
in holding its position with the controller in manual. The oscillations were not coming from the valve and
therefore they had to be coming from the process. We looked at time trends of the flow rate into the
separator vessel and the gas pressure inside the vessel, but these were not oscillating (at least not at
one-minute periods).
Then we found the cause. The vessel is a three-phase separator: gas, oil, and water. The oil floats on a
layer of water in the bottom of the separator. It was the oil-water interface level that was oscillating,
moving the oil level up and down with it. After some more investigation, we found the water level control loop was operating virtually in on-off control mode. Only then could we focus on solving the real problem.
We are all sometimes tempted to tweak controller settings without looking any further, but a systematic
approach to analyzing control loops and solving control problems really pays off.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 6. Loop Performance, Problems, and Diagnostics, 8. Case Studies

4 Responses to An Oscillating Level Control Loop

Mary:
September 26, 2011 at 11:08 am
How did you solve the problem?

Jacques:
September 26, 2011 at 8:25 pm
We did step tests and properly tuned the water level controller. That solved the problem.

Nhan:
September 25, 2012 at 1:56 am
Which tuning rule did you apply for the oil level control loop, level averaging or lambda?
Thanks

Jacques:
September 25, 2012 at 3:07 pm
Nhan, the tuning rule of choice should always depend on the application:
- For fast response, the Ziegler-Nichols rules for integrating processes work well, provided you divide the controller gain by two and multiply the integral time by two.
- For slow response, I recommend level-averaging tuning (I still need to write an article about it).
In the case of the oil level, I proposed using level averaging to make maximum use of the surge capacity of the separator, but the operations personnel wanted the oil to stay as close to setpoint as possible (fast control). So we ended up using Z-N for tuning the oil level loop.
I think Lambda tuning for levels is good as an academic exercise, but I don't see the need for it when tuning level controllers, and I have never used it for tuning levels. However, the Lambda tuning rules certainly have a place with self-regulating processes.
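As a hedged sketch of the fast-response approach described in this reply (the formula below is one commonly published form of the Ziegler-Nichols rule for an integrating process; the numbers are illustrative only, not from the actual separator):

# Z-N-style PI tuning for an integrating (level) process, with the gain halved
# and the integral time doubled as suggested in the reply above.
def level_pi_tuning(integrator_gain, dead_time):
    """integrator_gain: % of level per minute, per % of CO; dead_time in minutes."""
    kc_zn = 0.9 / (integrator_gain * dead_time)    # reaction-curve rule for an integrator
    ti_zn = 3.33 * dead_time
    return kc_zn / 2.0, ti_zn * 2.0                # the modifications suggested above

kc, ti = level_pi_tuning(integrator_gain=0.2, dead_time=0.5)
print(f"Controller gain = {kc:.1f}, integral time = {ti:.1f} min")   # 4.5 and 3.3 min here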


Caster Level Control Improvement


January 31, 2013

Recently, I helped a foundry with a level control problem in their casting process. A batch of metal is
melted in a furnace, after which the furnace is slowly tilted to pour the metal into a trough above the
caster. The level of molten metal in the caster trough must be kept constant so that the metal flows into
the mould at a constant rate. This is done by manipulating the tilt rate of the furnace. The foundry had
problems maintaining a constant level in the caster trough. An investigation of the system and
equipment revealed the problem.
System Description
The level of the molten metal in the casting trough is measured with a non-contact level sensor and sent
to a PID controller. The controller compares the level to its setpoint and manipulates the valve that
controls the furnace's tilt rate (Figure 1). If the level is below setpoint, the PID controller opens the valve
more and the furnace tilts faster. Likewise, if the level is above setpoint, the valve position is reduced.

Figure 1. Caster Trough Level Control (click to enlarge)

The Problem

The tuning parameters of any PID controller should be set according to the gain and dynamics of the
process it is controlling. A control loop can tolerate small changes in process characteristics, but large
changes will cause poor control, unless the control design somehow compensates for this. And herein lay the problem: during the casting process, the process gain changed vastly.
At the beginning of the cast, when the molten metal in the furnace has a large surface area, a 1° change in tilt angle will pour a large quantity of metal into the caster trough. At the end of the cast, when the furnace is tilted significantly and the molten metal has a small surface area, a 1° change in tilt angle will pour only a small quantity of metal (Figure 2). This causes the process gain to change by a factor of almost 10 during the casting process.

Figure 2. Origin of Process Nonlinearity

It is impossible to have good feedback control from a simple control loop if the process gain changes
this much. The loop performance will range from being close to instability (when the process gain is high
at low tilt angles early in the cast) to being very sluggish (when the process gain is low at high tilt angles
late in the cast). This is why the foundry had so much trouble with this control loop.
The Solution
The solution was to either use gain scheduling on the controller or to implement a linearizer between the
controller output and the process. Both of these would essentially keep the loop gain constant by either
changing the controller gain based on tilt angle, or by compensating for the nonlinear process gain at
different tilt angles. To simplify tuning, we chose the linearizer. The linearizer would multiply the
controller output by a certain factor that would be changed automatically, based on the furnace's tilt
position (Figure 3).

Figure 3. Level Control Improvement through Linearization

We used trigonometry to calculate the appropriate multiplier for different tilt positions and implemented
this into a function generator block in the control system. After this the loop was linear and the control
performance vastly improved.
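The article does not give the furnace geometry or the actual multiplier curve, so the sketch below only illustrates the idea with invented numbers: the pour rate per degree of tilt scales roughly with the free surface area of the metal, so the function generator scales the controller output by a reference area divided by the area at the current tilt angle.

# Hedged sketch of a gain-compensating multiplier keyed on furnace tilt angle.
# The area table and reference value below are invented, not the foundry's data.
TILT_DEG = [0, 20, 40, 60, 80]           # furnace tilt angle, degrees
AREA_M2 = [6.0, 5.0, 3.5, 1.8, 0.7]      # assumed free surface area at each tilt
A_REF = 3.5                              # area at an assumed mid-cast design point

def interp(x, xs, ys):
    """Simple piecewise-linear interpolation between the configured points."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            f = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + f * (ys[i] - ys[i - 1])

def compensated_output(pid_output_pct, tilt_deg):
    """Scale the PID output so the loop gain stays roughly constant over the cast.
    (In practice the result would also be clamped to the valve's 0-100% range.)"""
    multiplier = A_REF / interp(tilt_deg, TILT_DEG, AREA_M2)
    return pid_output_pct * multiplier

print(compensated_output(30.0, 10.0))    # early in the cast: output scaled down (about 19%)
print(compensated_output(20.0, 75.0))    # late in the cast: output scaled up (about 72%)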

When tuning control loops, it is always important to understand the process and its characteristics, and
how these characteristics might change in relation to the process conditions. A process control
practitioner should always look for the true cause of poor control. In many cases this goes far beyond controller tuning.

Find out more about process nonlinearity, gain scheduling, controller tuning, and much more in my
book Process Control for Practitioners.
Stay tuned!

Control Loop Performance Monitoring


August 8, 2010

Control loop performance directly affects the operability and profitability of industrial plants. Considering
the importance of control loops, one would expect that they always perform at their peak, but this is not
the case. In fact, several studies have shown that roughly one third of industrial control loops perform
poorly.
Poorly performing control loops can make a plant difficult to operate and may have several costly side-effects, including:

Reduced production rate

Increased emissions

Lower efficiency

Plant trips following process upsets

Poor product quality

Slower startup and transition times

More off-spec product or rework

Premature equipment wear

For these reasons, control loop performance should always be kept at the highest possible level. This is
achieved through continuously monitoring loop performance and taking the appropriate corrective
actions when sub-optimal performance is detected.
Loop Performance Monitoring
To effectively manage, improve, or sustain control loop performance, it is important to monitor how well
loops perform. Loop performance monitoring can provide valuable feedback on the success of control
optimization projects; it helps maintain a high standard of loop performance in the long run; and it can
be used to pinpoint offending control loops for corrective action.
Loop Performance Assessment
Loop performance should be evaluated from various perspectives. A control loop needs to be in its
correct mode (mostly auto), stable and responsive, and must reduce process variability. Loop
performance can be calculated in terms of these perspectives and expressed as a numerical value, or
metric. The following metrics are essential for assessing the performance of the control loop:

Percentage of time the controller is not in its correct mode

Percentage of time the controller output is at its limits

Standard deviation in error

Tendency of loop to oscillate

Controller responsiveness

Many other metrics can provide useful additional information on the performance of the control loop,
control valve, and measurement device, for example:

Process variable noise

Cumulative control valve travel per day (can be used for predictive maintenance)

Number of direction changes in control valve travel per day

Mean value of controller output (can be used to indicate oversized and undersized valves or
incorrectly ranged transmitters)

Number of times the operator changed controller mode and/or output

Number of tuning constant changes on the controller

In most cases it is sufficient to calculate loop performance metrics daily. These metrics should be
averaged over a period of a week to obtain KPIs for average loop performance over the week.
Some metrics can be easily computed in the process historian, while others are best done by control
loop performance monitoring software applications. The monitoring process should run autonomously
and automatically by using control loop performance monitoring software, or by writing a custom
application in the process historian (process information management system).
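As a sketch of what such a custom historian calculation might look like (the record layout, field names, and the AUTO mode label below are assumptions, not any vendor's schema), a few of the metrics listed above can be computed per day like this, and each day's numbers can then be averaged over the week to give the weekly KPIs mentioned above:

import statistics

# Sketch of a few daily loop-performance metrics computed from historian samples.
def daily_loop_metrics(samples, normal_mode="AUTO"):
    """samples: list of dicts with keys 'mode', 'sp', 'pv', 'co', at a fixed scan rate."""
    n = len(samples)
    errors = [s["sp"] - s["pv"] for s in samples]
    wrong_mode_pct = 100.0 * sum(s["mode"] != normal_mode for s in samples) / n
    saturated_pct = 100.0 * sum(s["co"] <= 0.0 or s["co"] >= 100.0 for s in samples) / n
    error_std = statistics.pstdev(errors)
    # Very crude oscillation indicator: count how often the error changes sign.
    zero_crossings = sum(a * b < 0 for a, b in zip(errors, errors[1:]))
    return {
        "pct_time_wrong_mode": wrong_mode_pct,
        "pct_time_output_saturated": saturated_pct,
        "error_std_dev": error_std,
        "error_zero_crossings": zero_crossings,
    }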
Overall Loop Health
Each metric can be compared to a threshold for proper loop performance. If one or more metrics exceed
their threshold, the loop should be flagged as having poor performance and maintenance or engineering
staff should attend to the problem.
Practical Matter
It is important to consider the operational state of the plant when evaluating loop performance. For
example when a plant has been shut down, many loops will be in manual control mode and most of the
remaining controllers will have their outputs saturated at 0 or 100 percent. These control loops should
not be flagged as problem loops while the plant remains shut down.
Loop Monitoring Software
Several control performance monitoring software packages are available from many vendors including
ABB, AspenTech, Control Arts, ControlSoft, Control Station, Emerson, ExperTune, Honeywell / Matrikon,
PAS, and RoviSys. These packages help to identify problem loops which can then be addressed to
minimize the impact on production.
Summary
Good control loop performance is essential for running a process economically. Because many control
loops have never been tuned properly, while the performance of others has decreased over time, control
loop monitoring has become an important function at most industry-leading and many other plants.
Monitoring the performance of control loops is not difficult. Basic algorithms for assessing loop
performance can be programmed in a process historian, or full-featured software can be acquired to
simplify the task.
For more information, or to get a complete assessment of the performance of your control loops, or
for training on this topic and others, please contact OptiControls.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners


Diagnosing and Solving Control Problems


April 30, 2011

While many control loops are easy to tune and present almost no control problems, a few control loops
can be very problematic and never seem to control right. Control practitioners can spend many hours or
even days trying to improve the performance of these challenging control loops, but the results often
remain unsatisfactory. This article presents strategies for diagnosing control problems and improving the
performance of challenging control loops.

Symptoms of Poor Loop Performance


Although poor control performance comes in many forms, it can be grouped into three categories:

Oscillations and instability: the loop tends to cycle around its set point.

Large deviations from set point: the loop struggles to remain at set point and the process variable is frequently pushed away from set point.

Sluggish performance: the loop takes too long to get to its set point after a disturbance or set point change.

A Control Loop with Several Problems

I've seen many cases where attempts to address poor performance were limited to controller tuning,
because the person attending to the problem did not know of all the other causes of poor performance.
To properly address and improve control loop performance, it is necessary to establish what the real
cause of the poor performance is, and then to take the appropriate corrective action.

Fault Diagnosis
To guide your diagnosis efforts, a fault diagnosis tree is provided below. The first level of diagnosis is the
three symptoms of poor control listed above. Depending on which of these symptoms your control loop
displays, you can find the possible causes below each symptom. These are described in more depth
throughout this article.

Diagnosing Control Problems

1. Oscillations
Oscillations can originate from within the control loop or be caused by external factors. To find out which
is the case, place the controller in manual and see if the oscillation stops. If it does, the oscillation is
generated from within the loop.

Oscillations Stopping when Controller is Placed in Manual

Internal Oscillations
Oscillations generated internally can be caused by faulty equipment or by tuning. Check first for faulty
equipment, because you can spend a long time trying to tune a loop if the real cause of poor
performance is the control valve.
The most common control valve problems causing oscillations are:

Control valve stiction. Do a stiction test with the controller in manual to determine if this is the
case.

Positioner overshoot. Do step tests of various sizes and be on the lookout for signs of
overshoot in the process variable.

Both of these are control valve problems and cannot be fixed by tuning the process controller. The valve needs maintenance or the positioner needs tuning.

Tuning
A loop that is tuned too aggressively (overly fast response) can quickly develop oscillations. Do step
tests on the process and determine the dominant process characteristics (gain, dead time, lag). Do
more than one step test (try to do four at least) in different directions. Then use tuning rules to calculate
new controller settings. If you are using rules designed to produce quarter-amplitude damping, use only half of the recommended controller gain. If you have tuning software, use it to analyze the step-test data and calculate new controller settings.
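As one concrete, hedged illustration of the "use only half of the recommended controller gain" advice: the constants below are the widely published open-loop Ziegler-Nichols reaction-curve PI rule, used here only as an example of the workflow, and the step-test numbers are invented.

# PI settings from step-test results: classic open-loop Ziegler-Nichols reaction-curve
# rule, with the gain halved because those rules aim for quarter-amplitude damping.
def pi_from_step_test(process_gain, dead_time, time_constant):
    """process_gain in %PV per %CO; dead_time and time_constant in the same time units."""
    kc_zn = 0.9 * time_constant / (process_gain * dead_time)
    ti_zn = 3.33 * dead_time
    return 0.5 * kc_zn, ti_zn          # halve the gain for a more robust response

kc, ti = pi_from_step_test(process_gain=2.0, dead_time=10.0, time_constant=60.0)
print(f"Controller gain = {kc:.2f}, integral time = {ti:.1f} (units of the dead time)")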

Nonlinear Valve Characteristic


Many control valves control flow differently, depending on how far they are open. The valve is said to
have a nonlinear installed characteristic. If tuning is done at the one end, the settings might not work at
the other end, and could cause oscillations or sluggish behavior. If this is the case, a function generator
(X-Y curve) can be placed in the path of the controller output to cancel out the control valve nonlinearity.

Nonlinear Process
Some processes react differently based on operating point, production rate, or the product being made.
If these differences are large the loop can begin oscillating or become sluggish. Then different tuning
settings are required for the various operating conditions. This is called gain scheduling.

External Oscillations
Externally sourced oscillations can be caused by interactions between loops with the same dynamics or
simply by another loop in the process oscillating and causing several other loops to oscillate with it.

Coupled Interaction
Interactions between loops with the same dynamics can cause the two loops to fight each other. A
simple example of this is if two valves control the flow and pressure in the same pipe. Because the

dynamics of liquid pressure control loops and flow control loops are similar, the two controllers might be tuned very similarly, causing hunting between the two loops. To solve this, the most important loop needs to be tuned for fast response, and the loop of secondary importance needs to be tuned significantly slower (a settling time three times longer, or more).

Dynamically Coupled Control Loops

Process Interaction
One loop in the process could be oscillating, causing several other loops in the same process to
oscillate with it. Use a process and instrumentation diagram (P&ID) to locate possible offenders. Then
use historical process trends of these other loops to find the oscillating loop. Several software vendors
like ExperTune, PAS, and Matrikon/Honeywell have products to help with locating the offending loop in a
plant-wide oscillation scenario.

2. Sluggishness
The next category of poor control loop performance is sluggishness. Sluggish control loop response can
be caused by equipment problems or by poor tuning.

Control Valve Dead Band


Dead band (also called hysteresis) can cause a loop to exhibit sluggish behavior. Every time the
process variable undergoes a disturbance in a different direction from the previous disturbance, the
controller output has to traverse the dead band before the valve begins moving. Dead band can be
detected very reliably through simple process tests. It is a mechanical problem and cannot be
addressed with tuning.

Other Equipment Problems


A control loop may also appear to have sluggish response if the controller output becomes saturated at
its upper or lower limit. Similarly, if the process variable runs into limits, the control action effectively
ends. Also, if the controller output has a rate-of-change limit, it may cause sluggish response, regardless
of how well the controller is tuned.

Tuning
Comments made earlier about tuning apply here too. Furthermore, realize that loops have internal
speed limits depending mostly on the length of the dead time in the process. It will take a well-tuned
loop three to four times the dead time to get back to its set point after a disturbance or set point change.
If disturbances cause large deviations from set point, and tuning is unable to correct it fast enough, see
the next section.

Upper Limit to Loop Speed: any faster tuning will cause larger oscillations

3. Disturbances
The third category of poor loop performance is that of disturbances pushing the process variable away
from its set point. Disturbances are frequently the nemesis of good loop performance. As described
above, feedback control is limited in how fast it can eliminate the effects of a disturbance and bring the
process back to set point. Two classes of disturbances exist, depending on how they enter the loop.

Control-Flow Disturbances
Control-flow disturbances affect the loop by changing the flow rate through the final control element. For
example, if steam is used to heat the process flowing through a heat exchanger, and the pressure of the
steam decreases, the steam flow rate will be affected and this will disturb the outlet temperature.
Cascade control can be used very effectively to virtually eliminate the effects of a control-flow
disturbance. The outer loop controls the main process variable (temperature in this case) by changing
the set point for flow to an inner loop. The inner loop measures and controls the actual flow rate and
immediately corrects any deviations from set point.

Cascade Control for Handling Control-Flow Disturbances

Process Disturbances
In contrast to control-flow disturbances, all other disturbances to the process that affect the process
variable are simply called process disturbances. If a process disturbance is measurable, and its effect
on the process variable is known, feedforward control can be used to vastly reduce its impact.

Feedforward Control for Handling Process Disturbances

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 6. Loop Performance, Problems, and Diagnostics, 9. Tips and Work-Process

6 Responses to Diagnosing and Solving Control Problems

Jam:
January 28, 2013 at 3:52 pm
Hello,
I have a flow control valve that operates OK in automatic. This valve only really operates between 25-40%. However, when the valve tries to maintain 25%, it keeps oscillating around the setpoint. What could be causing this? Here are my theories:
1. The valve may have a non-linear characteristic, and a characterizer may need to be placed on the output of the PID. I have yet to confirm this.
2. Could the valve be oversized?
Are these correct, and are there other factors that could be contributing to this?
Thanks

Jacques:
January 29, 2013 at 6:32 pm
Jam,
1. It could be that the valve has a quick-opening (nonlinear) type of flow characteristic. Is it a butterfly valve? If so, see this article:
It could also be that the valve is sticky around 25%; see this article:
2. It seems that your valve is a bit oversized; control valves should ideally operate around 75% to 85% open at design flow rates.
- Jacques

William Love:
February 19, 2014 at 8:35 pm
In an article or post that I can't find right now, I recall the author said you can prevent
excessive valve wear by not letting the valve move unless the PID output changed by more
than some amount. This idea was greeted with derision in a group discussion, so I'm trying to
figure out whether the idea has any merit.

Jacques:

February 19, 2014 at 9:16 pm


As with many things in process control, it depends. If you have a noisy measurement signal,
and a high-gain controller, your controller output will likely move around much more than you
like and wear out the valve prematurely. You have a few options, depending on your
controller's features.
1. You can filter the process variable (my preferred option), but realize that this adds
additional dynamics to the loop that require slower tuning, which slows down the loop
response. If your process is already a slow-responding process, the additional dynamics may
go virtually unnoticed, making it a very good solution. However, if you have very low-frequency noise, you need a very long filter time constant, which can make this an infeasible
solution. (Your filter time constant should be substantially shorter than your process dead time
and lag to achieve fast loop response.)
2. You can also set a deadband around the process variable or controller output, which brings
me to your question. If you apply a deadband, the controller will move the control valve only when
the limit of the deadband has been reached. The wider the dead band, the less often your
controller will move the valve (and make smaller movements). If you set the dead band larger
than the noise, the controller does not respond until there has been a significant change in the
process. This effectively adds pseudo-deadtime to the loop, making the control loop behave
very sluggishly. So, for fast loop performance, just as the lag in a measurement filter should
be set much shorter than your process dynamics, you should ensure that the additional
deadtime added by a deadband is also much shorter than your process dynamics. (A minimal
sketch of options 1 and 2 appears at the end of this reply.)
3. You can also look at using a different measurement technology, if the problem is
measurement noise.
Also, make sure you are not using derivative control on a noisy measurement signal, and if
you have to use it, consider lengthening the controller scan time to reduce the gradients the
derivative term sees.
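To make options 1 and 2 concrete, here is a minimal Python sketch of a first-order PV filter and a hold-type deadband on the controller output; the scan time, filter time constant, and deadband size are placeholder values, not recommendations.

def filter_pv(pv_raw, pv_filt_prev, scan_time=1.0, filter_time=10.0):
    """Option 1: first-order (exponential) filter on the process variable."""
    alpha = scan_time / (filter_time + scan_time)
    return pv_filt_prev + alpha * (pv_raw - pv_filt_prev)

def held_output(co_new, co_held, deadband=1.0):
    """Option 2: hold the last value sent to the valve until the PID
    output has moved by more than the deadband (all values in %)."""
    if abs(co_new - co_held) > deadband:
        co_held = co_new
    return co_held   # the value actually sent to the valve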

William Love:
February 20, 2014 at 1:13 pm
Just to clarify, the method I'm describing involves ignoring a change in the PID output
(CV) unless it is more than some amount. If the CV = 45.0%, I would hold that number in a
register and keep sending that to the valve. I'd not change that signal going to the valve until,
say, CV exceeds 45.5% (my deadband is 0.5%). So if I saw the PID output reach 45.6%, I'd update the
hold register and start sending that to the valve. Then, until CV exceeds 46.1%, I would keep sending
the hold register value 45.6% to the valve. Is this what you thought I was saying?
One person opined this is equivalent to putting a deadband on the error between PV and SP
(which in Rockwell is implemented with a parameter called CV Zero Crossing Deadband),
but I'm not so sure.
Does my proposal to keep the valve at the hold value until the CV changes by more than a
deadband have a history in the field? I think I got the idea from a post by Greg McMillan and
thought it sounded good. To your knowledge, has this been done?

Jacques:
February 20, 2014 at 6:33 pm
William, thanks for clarifying what you mean.
First, the 0.5% deadband seems far too small for reducing excessive valve wear.
Second, the method you propose artificially introduces the equivalent of stiction, which is very
bad for stability.
I don't know if it has been done, but I will advise against it.

Q&A on Loop Performance


February 12, 2010

I was recently asked for my input on control loop performance by means of a few questions. Here is a
somewhat modified version of the questions and answers.
Q: What is the percentage of control loops that operate properly in the average process plant?
A: Two papers dating back to 1993, authored individually by Ender [1] and Bialkowski [2], claimed that
roughly 30% of loops were left in manual, 30% actually increased variability, and that only 20% of loops
performed well. In 2001 Desborough and Miller [3, 4] confirmed that control loop performance was on
average still the same. In 2008, VanDoren [5] also reported a very similar bleak state of loop
performance.
Our experience from loop performance audits supports these findings, although the distribution between
loops in manual, those with poor performance, and those with good performance, can vary significantly
between different sites and process types.
Q: When does a loop operate properly?
A: Proper operation can be subjective, and it is easier to define improper operation. Loops that increase
variability when running in auto compared to manual control would not be considered as operating
properly. Nor would controllers reacting mainly to noise and high-frequency disturbances, or those
causing oscillations in the loop. Controllers with outputs running into limits, and controllers tuned too
sluggishly are also not operating properly. So, for a loop to be operating properly, it needs to reduce
variability, and do so in a repeatable fashion, consistent with its function in the larger process.
Q: What can cause control loops to operate poorly?
A: Improper operation can be caused by incorrect controller tuning settings, incorrect process variable
(PV) filter settings, faulty or incorrectly positioned instrumentation, or mechanically defective, non-linear,
oversized, or undersized final control elements. It can also be caused by not making use of an
appropriate control strategy (like feedforward or gain scheduling), or improper design of such a control
strategy (like dividing one flow by another for doing ratio control).
Q: What are the consequences of poor loop performance?
A: A poorly performing control loop can decrease product quality, limit maximum production rates,
extend process start-up and transition times, increase the likelihood of unplanned process shut-downs,
increase maintenance costs, consume more energy, and make the process difficult to operate.
Q: How should control loop problems be resolved?
A: The first step is to find the problem loops. Some problematic control loops are obvious due to their
impacts on operations, while others might be less obvious or remain hidden to operators and process
engineers. A comprehensive list of control problems can be obtained most effectively by using software
to assess the performance of all control loops. The second step is to distill the performance survey
down to a list of bad actors by looking both at the performance and the relative importance of each
control loop. Bad actors would be loops that perform poorly and also significantly impact the process.
Loop assessment software can do this automatically if it is configured properly. The third step is to
diagnose the root cause of the problems. Loop assessment software is helpful in doing this, but
engineering and process knowledge is also required. The final step is fixing the problem according to
the diagnosis, for example by tuning a controller, fixing a control valve, or implementing the appropriate
control strategy.
Q: Why are some control loop problems so persistent?
A: Control problems persist mostly because the root causes of the problems are not being addressed.
For example: An engineer may spend hours tuning and re-tuning a control loop, but his efforts are futile

if the problem actually is a sticky control valve requiring maintenance, or a nonlinear process requiring
gain scheduling.
Q: What is the most important factor when optimizing control loops?
A: The single biggest factor in loop optimization is the skill level of the person doing the optimization
work. Software has come a long way to simplify loop performance analysis and tuning, but the software
is just a tool, and the validation of its diagnoses and execution of corrective actions need to be done by
an adequately skilled human. Skills can either be hired, or learned. In both cases OptiControls can help
with our services and training.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

References:
[1] D.B. Ender, Process Control Performance: Not as Good as You Think, Control Engineering 40 (10), 1993, 173-186.
[2] W.L. Bialkowski, Dreams Versus Reality: A View From Both Sides of the Gap, Pulp & Paper Canada, 1993, 94 (11), 19-27.
[3] L. Desborough and R. Miller, Increasing Customer Value of Industrial Control Performance Monitoring: Honeywell's Experience, Proc. 6th Int. Conf. on Chemical Process Control (CPC VI), Arizona, USA, 2001, 172-192.
[4] L. Desborough, P. Nordh, R. Miller, Control System Process out of Control, Industrial Computing, August 2001, 52-55.
[5] V. VanDoren, Advances in Control Loop Optimization, Control Engineering, March 2008, 48-52.

Valve Diagnostics on a Level Loop


April 10, 2012

Determining the condition of the control valve on a level loop can be challenging, but it is an important
aspect of successful tuning. Control loop performance can be greatly affected by control valve dead
band, stiction, and a nonlinear flow curve, and this is no different on a level control loop. If the level
controller directly drives the control valve, both dead band and stiction will cause a level control loop to
oscillate continuously.

A level control loop oscillating because of control valve dead band.

Doing valve diagnostic tests is easy on a self-regulating loop, but not so on an integrating loop, like
liquid level. On a level loop the process variable itself does not in any way reflect the actual control valve
position (or the flow into/out of the vessel for that matter). So how then does one determine the
condition of a control valve in a level loop?
The answer is to analyze the rate of change of the level at different control valve positions. For example,
if you want to check for control valve dead band, you would put the controller in manual, make two
controller output steps (typically 5% in size) in one direction, and make another step in the opposite
direction - just like on a flow control loop, for example. However, instead of using the level
measurement directly, you would analyze the rate of change of the measurement.

A dead-band test on a level loop. Normally the controller output steps would be equal in size, but very often
smaller steps are required to keep the level within limits.

This takes some planning because the level always needs to be kept within a safe operating range and
you need to allow enough time between controller output steps to obtain steady ramps in level that are
long enough that you can take measurements from them. I often have to adapt my test plan to keep the
process safe but still obtain the data I need for analysis. Don't worry if your steps are not all the same
size; the calculation below will compensate for this.
Once you have collected your test data, return command of the controller back to the operator to be
placed in auto and/or monitored. Import the controller output (CO) and process variable (PV) data into
Microsoft Excel or your spreadsheet of choice.
Add a new column of calculations that take the difference (dPV) between two successive PV samples,
e.g.: C2 = B3 - B2, assuming your PV data is in Column B and the dPV calculations are in Column C.
You will likely find that the new dPV data is very noisy. In that case you should include an averaging filter
in your calculation like this:

Calculating a filtered rate of change.
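The same calculation can also be sketched outside the spreadsheet. The short Python example below assumes the exported data has columns named "CO" and "PV"; the 5-sample averaging window is only an example.

import pandas as pd

data = pd.read_csv("level_test_data.csv")   # assumed file with "CO" and "PV" columns
data["dPV"] = data["PV"].diff()             # difference between successive PV samples
data["dPV_filtered"] = data["dPV"].rolling(window=5, center=True).mean()   # averaging filter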

Once you're done, you can plot the data and take measurements from the plot or the data. Use the
average of the dPV values as shown below.

Analyzing dead band on a level loop.

You can then calculate the dead band in the valve, as a percentage of its full travel (0-100% open):
% Valve Dead Band = (CO3 - CO2) - (dPV3 - dPV2) / (dPV2 - dPV1) x (CO2 - CO1)
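The same calculation can be written as a small function (a sketch; the variable names follow the formula above, with dPV1 to dPV3 being the steady level ramp rates measured after each of the three controller output steps):

def valve_dead_band(co1, co2, co3, dpv1, dpv2, dpv3):
    """% valve dead band from a two-steps-up, one-step-down test, using the
    formula above. CO values are in % of valve travel; dPV values are the
    level ramp rates measured after each step."""
    process_gain = (dpv2 - dpv1) / (co2 - co1)   # change in ramp rate per % of CO
    return (co3 - co2) - (dpv3 - dpv2) / process_gain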

A stiction analysis can be done in the same way, but you will make 5 to 10 small changes in controller
output (typically 0.5% in size). Remember to leave enough time between successive steps to obtain a
steady gradient. Make sure that you take up any dead band first by preceding the small steps with one
large dead-band-eliminating step in the same direction as you are planning for the small stiction steps.
Since a level loop will oscillate if its output drives a valve having dead band or stiction, you want these
to be as low as possible. Installing a flow controller as an inner loop to the level controller in a cascade
control arrangement will go a long way to reduce the effects of valve issues on a level control loop.
Stay Tuned!
Jacques Smuts, author of the book Process Control for Practitioners

7. Control Strategies
o
A Tutorial on Cascade Control
o
A Tutorial on Feedforward Control
o
Butterfly Valves and Control Performance
o
Caster Level Control Improvement
o
Control Valve Linearization
o
Drum Level Control
o
Improving pH Control
o
Ratio Control
o
Steam Temperature Control

A Tutorial on Cascade Control


March 15, 2010

What is Cascade Control?


In single-loop control, the controller's set point is set by an operator, and its output drives a final control
element. For example: a level controller driving a control valve to keep the level at its set point.

Single Loop Control

In a cascade control arrangement, there are two (or more) controllers of which one controller's output
drives the set point of another controller. For example: a level controller driving the set point of a flow
controller to keep the level at its set point. The flow controller, in turn, drives a control valve to match the
flow with the set point the level controller is requesting.

Cascade Control

The controller driving the set point (the level controller in the example above) is called the primary, outer,
or master controller. The controller receiving the set point (flow controller in the example) is called the
secondary, inner or slave controller.
What are the Advantages of Cascade Control?
There are several advantages of cascade control, and most of them boil down to isolating a slow control
loop from nonlinearities in the final control element. In the example above the relatively slow level
control loop is isolated from any control valve problems by having the fast flow control loop deal with
these problems.
Imagine that the control valve has a stiction problem (see blog on valve problems.) Without the flow
control loop, the level control loop (driving the sticky valve) will continuously oscillate in a stick-slip cycle
with a long (slow) period, which will quite likely affect the downstream process. With the fast flow control
loop in place, the sticky control valve will cause it to oscillate, but at a much shorter (faster) period due
to the inherent fast dynamic behavior of a well-tuned flow loop. It is likely that the fast oscillations will be
attenuated by the downstream process without having much of an adverse effect.
Or imagine that the control valve has a nonlinear flow characteristic (see blog on valve problems.) This
requires that the control loop driving it be detuned to maintain stability throughout the possible range of
flow rates. (Of course there are better ways to deal with nonlinearities, but that is the topic of another
blog.) If the level controller directly drives the valve, it must be detuned to maintain stability, possibly
resulting in very poor level control. In a cascade control arrangement with a flow control loop driving the
valve, the flow loop will be detuned to maintain stability. This will result in relatively poor flow control, but
because the flow loop is dynamically so much faster than the level loop, the level control loop is hardly
affected.

When Should Cascade Control be Used?


Cascade control should always be used if you have a process with relatively slow dynamics (like level,
temperature, composition, humidity) and a liquid or gas flow, or some other relatively-fast process, has
to be manipulated to control the slow process. For example: changing cooling water flow rate to control
condenser pressure (vacuum), or changing steam flow rate to control heat exchanger outlet
temperature. In both cases, flow control loops should be used as inner loops in cascade arrangements.
Does Cascade Control Have any Disadvantages?
Cascade control has three disadvantages. One, it requires an additional measurement (usually flow
rate) to work. Two, there is an additional controller that has to be tuned. And three, the control strategy is
more complex for engineers and operators alike. These disadvantages have to be weighed up against
the benefits of the expected improvement in control to decide if cascade control should be implemented.
When Should Cascade Control Not be Used?
Cascade control is beneficial only if the dynamics of the inner loop are fast compared to those of the
outer loop. Cascade control should generally not be used if the inner loop is not at least three times
faster than the outer loop, because the improved performance may not justify the added complexity.
In addition to the diminished benefits of cascade control when the inner loop is not significantly faster
than the outer loop, there is also a risk of interaction between the two loops that could result in instability,
especially if the inner loop is tuned very aggressively.
How Should Cascade Controls be Tuned?
A cascade arrangement should be tuned starting with the innermost loop. Once that one is tuned, it is
placed in cascade control, or external set point mode, and then the loop driving its set point is tuned. Do
not use quarter-amplitude-damping tuning rules (such as the unmodified Ziegler-Nichols and Cohen-Coon rules) to tune control loops in a cascade structure, because they can cause instability if the process
dynamics of the inner and outer loops are similar.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 7. Control Strategies

11 Responses to A Tutorial on Cascade Control

lndas:
October 18, 2011 at 10:32 am
Excellent treatment on the subject. I could get an idea about cascade control as practiced in
industries. Thanks for the article.

alex:
December 19, 2011 at 4:34 pm
really very useful. thank you.

DF:
August 9, 2012 at 7:25 am
Excellent overview, I've liked the site so much I've bought the book
Thanks!

Jan:
September 25, 2012 at 8:25 am

I like your articles. Simple, short and useful. Thank you!


Is there an article on detuning controllers? I've heard the term a lot, but it is not completely clear
to me what it actually means and when it should be done (or not).

Jacques:
September 25, 2012 at 2:40 pm
Jan, I have made a note to write my next article about detuning controllers. Thanks for the
suggestion.

Shweta Garg:
October 9, 2012 at 5:49 pm
A simple and clear explanation with all the necessary and important points mentioned.
Thank you, Sir, for such an effort you made for others.
Looking forward to new articles!

Jeff:
October 11, 2012 at 10:58 pm
Thanks for such a wonderful summary of cascade control!

frank:
December 17, 2012 at 3:56 pm
I have a cascade loop that seems to favor the low side of my setpoint. The process will edge
up slowly and achieve a small but acceptable overshoot and then the output will plummet
down causing a large overshoot, only to have the process repeat itself again. It does not
appear to be an oscillatory process as it controls slowly under the setpoint but on the other
hand it reacts extremely fast when the pv is above the setpoint. It appears that the master
controller is the culprit in my situation. I fear that if I lower my gain any more, it will take the
process too long to reach its setpoint from below. Have any ideas on why this might be
occurring, or how to fix it?
Thanks,
Frank

Jacques:
January 31, 2013 at 9:02 am
Frank, from your description it seems that you might have too much gain and too little integral
action in your controller. I often see this on flow loops that have been tuned using trial and
error, but it occurs on other loops too. Please take a look at this article for a good method to
tune your controller: http://blog.opticontrols.com/archives/383
- Jacques

Jarret:
February 26, 2013 at 6:58 pm
Hi, I just wanted to say thank you for putting this stuff in plain English. I'm training for a career
in process operations and these blogs are excellent. Thank you sir.

Rocketman:
July 11, 2013 at 10:49 am
The course I had in college left me scratching my head with how to implement it; thanks for
bringing reality into the subject.

A Tutorial on Feedforward Control


January 17, 2011

Feedforward control can be used very successfully to improve a control loop's response to disturbances.
Feedforward control reacts the moment a disturbance occurs, without having to wait for a deviation in
process variable. If any process control loop is subject to large, measurable disturbances, it can benefit
greatly from feedforward control.

Feedforward control reducing effects of a disturbance

To understand feedforward control, let's first review feedback control.

Feedback Control
Feedback control is typically done with PID (proportional + integral + derivative) controllers. The process
variable of interest is measured and the controller's output is calculated based on the process variable
and its set point. Although external disturbances often affect the process variable, they are not used
directly for control. Instead, if a disturbance affects the process variable, the control action is based on
the process variable and not the disturbance.
As an example, the outlet temperature of a heat exchanger can be measured and used for feedback
control. The feedback controller will manipulate the steam flow to the heat exchanger and keep the
outlet temperature as close to set point as possible.

Feedback Control

Feedback Control and Disturbances


Many process control loops are affected by large disturbances. Feedback control can act only on the
result of a disturbance, which means feedback control cannot do anything until the process variable has
been affected by the disturbance.
In the example of the heat exchanger above, changes in process flow rate will be a major source of
disturbances to the outlet temperature. If the process flow rate through the heater is increased, the
original steam flow rate will not be enough to heat up the increased amount of process liquid and the
outlet temperature will decrease. Feedback control will eventually increase the steam flow rate and bring

the outlet temperature back to its set point, but not until there has been a significant deviation in
temperature.

Feedforward Control
In contrast to feedback control, feedforward control acts the moment a disturbance occurs, without
having to wait for a deviation in process variable. This enables a feedforward controller to quickly and
directly cancel out the effect of a disturbance. To do this, a feedforward controller produces its control
action based on a measurement of the disturbance.
When used, feedforward control is almost always implemented as an add-on to feedback control. The
feedforward controller takes care of the major disturbance, and the feedback controller takes care of
everything else that might cause the process variable to deviate from its set point.

Feedforward + Feedback Control

In our example of the heat exchanger, in which the major disturbances come from changes in process
flow rate, the latter can be measured and used for adjusting the steam flow rate proportionally. This is
done by the feedforward controller.

Implementing Feedforward Control


Many PID controllers have an external connection for adding an input from a feedforward controller.
Otherwise the output of the feedforward controller can be externally added to the output of the feedback
controller. Review your controller documentation and take special care with scaling the feedforward
signal. Many PID controllers expect the feedforward signal to be scaled between -100% and +100%.
Feedforward and feedback control is often combined with cascade control, to ensure that their control
actions manipulate the physical process linearly, eliminating control valve nonlinearities and mechanical
problems.
If several major disturbances exist, a feedforward controller can be implemented for each of them. The
outputs of all the feedforward controllers can be added together to produce one final feedforward signal.
Only consider disturbances that meet these criteria:

Measurable - if it can't be measured, you can't control from it

Predictable effect on the process variable - most disturbances will fall in this class

Occur so rapidly that the feedback control cannot deal with them as they happen.

Feedforward Controller Design and Tuning


A feedforward controller essentially consists of a lead-lag function with an adjustable gain. A dead-time
function (Ttd) can be added if the effect of the disturbance has a long time delay while the control action
is much more immediate.

Feedforward controller design

The feedforward gain (Kff) is set to obtain the required control action for a given disturbance. For
example, it controls the ratio of steam flow to process flow in the example used previously. The lead and
lag time constants are set to get the right timing for the control action. The feedforward's lead (Tld) will
speed up the control action and should be set equal to the process lag between the controller output and the
process variable. The feedforward's lag (Tlg) will slow down the control action and should be set equal
to the process lag between the disturbance and the process variable.
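As a rough illustration, the sketch below implements such a feedforward element (gain, lead-lag, and optional dead time) in discrete form using a simple backward-difference approximation, executed once per controller scan. It is only a sketch under these assumptions, not a substitute for the function blocks in your control system, and the tuning values shown in the usage comment are placeholders.

from collections import deque

class FeedforwardLeadLag:
    """Sketch of Kff * (Tld*s + 1) / (Tlg*s + 1) with optional dead time
    on the disturbance input. dt is the controller scan time in seconds."""

    def __init__(self, kff, t_lead, t_lag, t_dead=0.0, dt=1.0, u0=0.0):
        self.kff, self.t_lead, self.t_lag, self.dt = kff, t_lead, t_lag, dt
        n_delay = max(0, round(t_dead / dt))
        self.delay = deque([u0] * (n_delay + 1), maxlen=n_delay + 1)
        self.u_prev = u0
        self.y_prev = kff * u0

    def update(self, disturbance):
        self.delay.append(disturbance)
        u = self.delay[0]                              # delayed disturbance measurement
        y = (self.t_lag * self.y_prev
             + self.kff * ((self.t_lead + self.dt) * u - self.t_lead * self.u_prev)
             ) / (self.t_lag + self.dt)
        self.u_prev, self.y_prev = u, y
        return y                                       # add this to the feedback controller output

# Example usage (placeholder values), executed every controller scan:
# ff = FeedforwardLeadLag(kff=1.0, t_lead=30.0, t_lag=60.0, dt=1.0)
# co_total = pid_output + ff.update(measured_disturbance)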
You can use an alternative design for a feedforward controller that makes tuning easy. This is to simply
use a function generator as the feedforward controller. Before implementing the feedforward controller,
take note of the feedback controller's output and the disturbance measurement at various levels of the
disturbance. Use this relationship to set up the curve in the function generator.

Simplified feedforward controller design

For the heat exchanger example, we should tabulate the temperature controller's output and process
flow rates under various steady-state production rates. Then we program a curve in the function
generator to produce the desired controller output at each of the process flow rates we measured.
Let me know if you have a control problem you need help with, or if you are interested in process control
training (contact info).

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 7. Control Strategies

6 Responses to A Tutorial on Feedforward Control

Noel Guiriba:
September 26, 2012 at 6:24 pm
Hi. I'm dealing with a process that involves a heated bath where product is dipped for about
100 minutes. We control the temperature of the bath to about 190°F. When product is initially
immersed in the bath, the bath temperature drops 3°F for the first 15 minutes and gradually
rises to the set temperature, about 45 minutes after the product was initially loaded. When the
set temperature is reached, the temperature is kept within +/-0.1°F of setpoint. We currently use a
PID temperature controller and I think a feedforward controller will help us a lot. I was thinking
of a function generator that is triggered by a signal of the product coming into the bath. I'd
appreciate any comments or suggestions you may have. Thanks!

Jacques:
September 27, 2012 at 9:26 am
Noel, it sounds like your process might benefit from using feedforward; however, use this only
if the feedback controller has been tuned optimally and the temperature still deviates
excessively. I don't know the size of your process, but it sounds like the loop is responding
very slowly. See this link for a recommended tuning
method: http://blog.opticontrols.com/archives/383
If you do end up using a feedforward, I think you should consider using a lead/lag with a
dominant lead. It will give you a large initial increase in controller output that will die away
over time. Contact me if you need more help with this: http://www.opticontrols.com/contact-us
- Jacques

dhruv prajapati:
December 17, 2013 at 2:06 am
Sir, I wanted to know how the feedforward and cascade control schemes can be
combined. Please give some idea on it. Thank you.

Jacques:
December 17, 2013 at 6:27 pm
Dhruv, you can take a look at three-element drum level control at this
link: http://blog.opticontrols.com/archives/165. This is feedforward and cascade control
combined.

Andrew:
June 10, 2014 at 3:07 am
Hi. I would like to implement feedforward control on boiler outlet steam temperature.
We have a base loaded chemical recovery boiler. To manage the variable steam loads in our
pulp mill we have a single wood waste boiler whose boiler master is driven by the main site
650psi header pressure. Our wood waste boiler outlet steam flow is quite variable depending
on our users. The steam passes through a primary and secondary superheater. Between the
primary and secondary superheaters we add feed water through a de-superheater station to
control the secondary superheater outlet temperature. Currently there is only feedback control
for this temperature control. Normal operation can have a dead time through the secondary
superheater of about 20 to 60 seconds but could be outside this range also. The feedback
control struggles to control temperature and is tuned aggressively to compensate for this
dead time.
I wish to add feed forward control to this loop. I have no intermediate temperature
measurement but we do measure outlet steam flow. I have the following questions and
comments regarding the modelling and implementation of feedforward using the outlet steam
flow.
1. What is the best way to determine the model of the steam flow influence on outlet steam
temperature? This is particularly tricky given that the steam demand on the wood waste boiler is so
variable.
2. Will the feedforward model and therefore the feedforward signal to the outlet steam
temperature control become the main control action?
3. How accurate does the feedforward model for this application need to be?
4. Once the feedforward control is implemented does the feedback control only become a trim
for non-measured variations in the process that affect the outlet steam temperature?
5. Once feedforward control is implemented how should the tuning parameters for the
feedback control be determined?
Please find a link to the P&ID part for this below.
https://docs.google.com/file/d/0B3A30WvckCixT0hCOUZ4RFIyV1k/edit?usp=docslist_api

Jacques:
June 29, 2014 at 12:57 pm
Andrew,
1. You should plot the steam temperature controller's output against steam flow rates. If you
get a nice line or curve, feedforward will likely work well. If you get a lot of scatter, don't bother
using feedforward. Assuming there is a good correlation, make the fastest possible step
change in steam flow with the temperature controller in manual to determine the dynamics
(dead time and lag) between changes in steam flow and temperature. You also need to do
this for your steam temperature controller. The feedforward should have a lead-lag comprising
both systems' dynamics. I describe how to do this in detail in my book, Process Control for
Practitioners.

2. It depends if the line you get in the previous step has a 0,0 origin (when extrapolated). But I
doubt that it will.
3. The improvement in control is directly proportional to the accuracy of the feedforward. In
other words, if your model is only half accurate, you'll get only half the potential improvement.
4. Yes.
5. If the feedforward becomes the main control action, you will likely need to back down the
temperature controller's gain. This is best determined by looking for signs of overcorrection
during testing with the feedforward in place.

Butterfly Valves and Control Performance


February 23, 2012

Because butterfly valves cost less than real control valves like globe valves or characterized ball
valves, they are sometimes used in place of control valves to save money. This decision is often costly
in the long term because of the poor control performance resulting from butterfly valves.
Late last year I optimized several control loops at a mid-sized manufacturer of specialty
chemicals. Similar to most plants I have worked at, I found a number of control loops that were
oscillating. Many of them oscillated because of valve stiction, incorrect controller settings, or process
interactions. One of the loops, a distillation column level control loop, oscillated as a result of using a
butterfly valve as the final control element.

Figure 1. Oscillating level control loop.

To perform well, a PID control loop requires (among other things) that the process gain remain constant.
In other words, the process variable must change linearly with changes in controller output. A small
degree of nonlinearity can be tolerated, especially if we apply robust tuning methods, but if the process
gain changes by more than a factor of 2, we can expect control problems. And this is why a butterfly
valve makes a poor choice for a control valve - it has a highly nonlinear, S-shaped flow curve, as shown
in Figure 2.

Figure 2. Typical butterfly valve flow characteristic.

Figure 3 shows how the gain of a typical butterfly valve changes from less than 0.2 to almost 3 over the
span of the controller output. The process gain varies by a factor of 15! This large variation in process
gain makes it impossible to have consistently good control at all valve positions.

Figure 3. Typical butterfly valve gain.

At the chemical company the butterfly valve was used to control the bottom level of a distillation column.
The distillation column was the last one in a train of three columns, of which each column had
a progressively smaller diameter. Moderate increases in feed rate to the first column easily caused high-level alarms when they propagated to the small final column. The level controller originally seemed to be
responding too slowly to handle these upsets, so the loop tuner increased the controller gain to achieve
fast response at high flow rates. However, at normal flow rates, where the process gain was 15 times
higher, the loop was unstable and oscillated continuously as shown in Figure 1.
The correct solution to this problem would have been to replace the butterfly valve with a control valve
that has a linear flow characteristic and then retune the control loop. However, this could only be done
during the plant's annual maintenance shutdown. In the meantime we installed a characterizer to
linearize the butterfly valve (Figure 4). The characterizer compensated for the butterfly valve's
nonlinearity and made the flow through the valve follow the controller output in a reasonably linear
fashion.

Figure 4. Level control loop with characterizer.

With the characterizer in place we retuned the controller. After this the oscillations stopped and the loop
performed much better than it did before. However, the control performance was still not as good as
what a linear control valve would have provided. The real solution to the problem remained replacing the
butterfly valve with a control valve, but this had to wait for the next maintenance shutdown.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 5. Control Valves, 6. Loop Performance, Problems, and Diagnostics, 7. Control Strategies, 8. Case Studies

3 Responses to Butterfly Valves and Control Performance

mohamed elsadig:
May 18, 2013 at 11:11 pm
Dear Jacques,
could you please explain whether the characterizer was implemented in software (a function block) or in hardware?
Thank you,
Mohamed

Jacques:
May 19, 2013 at 7:59 am
Mohamed, in this case we implemented the characterizer in the DCS using a function block.
You could also do the characterization in the valve positioner if the positioner supports it (most
digital valve positioners do). My preference is to do it in the control system because if the
positioner is replaced, the new positioner might be put in service without the characterizer.

H.B.R:
May 27, 2014 at 2:16 am
Dear Mr. Jacques,
I'm an I&C engineer working for an EPC contractor. At the site where I worked last year, there
was exactly the same problem as described in figure no. 4 of this article.
The process was distillate water level and it was controlled by a butterfly-type control valve.
The process value hunted like in figure no. 1, but with a bigger magnitude. I think the problem
was because of the actuator or positioner. When the PID output sent a demand signal to the
valve, the positioner feedback value followed the demand signal after 1~2 seconds (i.e., there
was a dead time in the actuator). I couldn't determine whether it was caused by a sticky actuator or a
positioner problem because the plant was under commercial operation. (Now I'm working at
the head office and that problem still remains at the site.)
Can you imagine the state of the process? It oscillated with a really big magnitude (bigger
than figure no. 1).
The only action I took during commissioning was adjusting the PID values, due to the
tight commissioning schedule. After that, the oscillation magnitude became smaller and the
alarms that occurred every minute disappeared, but a large oscillation still remains.
Thank you for your helpful information.

Caster Level Control Improvement


January 31, 2013

Recently, I helped a foundry with a level control problem in their casting process. A batch of metal is
melted in a furnace, after which the furnace is slowly tilted to pour the metal into a trough above the
caster. The level of molten metal in the caster trough must be kept constant so that the metal flows into
the mould at a constant rate. This is done by manipulating the tilt rate of the furnace. The foundry had
problems maintaining a constant level in the caster trough. An investigation of the system and
equipment revealed the problem.
System Description
The level of the molten metal in the casting trough is measured with a non-contact level sensor and sent
to a PID controller. The controller compares the level to its setpoint and manipulates the valve that

controls the furnace's tilt rate (Figure 1). If the level is below setpoint, the PID controller opens the valve
more and the furnace tilts faster. Likewise, if the level is above setpoint, the valve position is reduced.

Figure 1. Caster Trough Level Control (click to enlarge)

The Problem
The tuning parameters of any PID controller should be set according to the gain and dynamics of the
process it is controlling. A control loop can tolerate small changes in process characteristics, but large
changes will cause poor control, unless the control design somehow compensates for this. And herein
lay the problem - during the casting process, the process gain changed vastly.
At the beginning of the cast, when the molten metal in the furnace has a large surface area, a 1° change
in tilt angle will pour a large quantity of metal into the caster trough. At the end of the cast, when the
furnace is tilted significantly and the molten metal has a small surface area, a 1° change in tilt angle will
pour only a small quantity of metal (Figure 2). This causes the process gain to change by a factor of
almost 10 during the casting process.

Figure 2. Origin of Process Nonlinearity

It is impossible to have good feedback control from a simple control loop if the process gain changes
this much. The loop performance will range from being close to instability (when the process gain is high
at low tilt angles early in the cast) to being very sluggish (when the process gain is low at high tilt angles
late in the cast). This is why the foundry had so much trouble with this control loop.
The Solution
The solution was to either use gain scheduling on the controller or to implement a linearizer between the
controller output and the process. Both of these would essentially keep the loop gain constant by either
changing the controller gain based on tilt angle, or by compensating for the nonlinear process gain at
different tilt angles. To simplify tuning, we chose the linearizer. The linearizer would multiply the
controller output by a certain factor that would be changed automatically, based on the furnace's tilt
position (Figure 3).

Figure 3. Level Control Improvement through Linearization

We used trigonometry to calculate the appropriate multiplier for different tilt positions and implemented
this into a function generator block in the control system. After this the loop was linear and the control
performance vastly improved.
When tuning control loops, it is always important to understand the process and its characteristics, and
how these characteristics might change in relation to the process conditions. A process control
practitioner should always look for the true reason of poor control. In many cases this goes far beyond
controller tuning.

Find out more about process nonlinearity, gain scheduling, controller tuning, and much more in my
book Process Control for Practitioners.
Stay tuned!

Jacques Smuts
Principal Consultant
OptiControls

Control Valve Linearization


November 26, 2011

A control valve's flow characteristic is an X-Y curve that maps the percentage of flow you'll get for any
given valve opening (Figure 1). The design characteristic (also called inherent flow characteristic) of a
valve assumes a constant pressure differential across the valve. More relevant to us is the installed
characteristic, which is the way the valve operates in the real process. The installed characteristic of a
valve can be determined by plotting the measured flow rate at different valve openings. You can do tests
on the live process to get this data, or you can get it from the process historian (make sure you use
steady-state data).

Figure 1 - A Nonlinear Flow Characteristic

The installed flow characteristic of a control valve directly affects the process gain. It is essential that the
installed characteristic is linear (the above plot is a straight line) so that the process gain is constant,
regardless of the controller output. If the gradient of the curve varies by more than a factor of two,
control loop performance will be noticeably affected. If nothing is done to linearize the valve, the
controller will have to be detuned to accommodate the maximum process gain. This leads to sluggish
control loop response over much of the valve's operating range.
A nonlinear flow characteristic should be linearized to obtain good control performance throughout the
valve's operating range. This is done with a linearizer (also called a characterizer). The linearizer is a
control block, function generator, f(x) curve, or a lookup table, placed between the controller and the
valve (Figure 2). Although the linearization can be done in a digital positioner, the DCS/PLC is the best
location for it. This allows replacement of the positioner without having to reprogram the linearization
curve in the new positioner.

Figure 2 - Linearizing a Nonlinear Valve Characteristic

Linearization is done with an X-Y curve or function generator that is configured to represent the
reciprocal (inverse) of the control element's flow curve (Figure 3).

Figure 3 - How a Linearizer Works

To design the linearizer, you have to first determine the flow characteristic curve of the valve operating in
the actual process. For this you should take readings of the flow or process variable (PV) and controller
output (CO) under steady-state conditions at various controller output levels. You need a minimum of
three (PV, CO) data pairs for this, but four or five would be better for characterizing a nonlinear
relationship.

Make sure you span the entire operating range of the controller output, and try to obtain readings
spaced equally across the controller output span. You can do process tests to obtain these values, or
examine data from your process historian. Then convert the process variable data from engineering
units to a percentage of full scale of the measurement.
Sort the data pairs in ascending order, and enter them into a function generator. The PV readings in
percent become the X values (input side) and the CO readings become the Y values (output side).
Include a (0, 0) point if you don't already have one in your dataset and be sure to estimate a (100, Y)
point also if you don't have one. Also, if your valve opens as the CO decreases, your Y column will
obviously have to reflect this.
For example, you get the following (PV, CO) pairs from historical data: (120, 22); (280, 39); (530, 63).
The PV is ranged 0 to 1000 kg/hr. You plot the data and estimate that 1000 kg/hr will occur at about
85%. The characterizer will look like this:
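A Python sketch of that characterizer, built from the example data above (PV converted to percent of the 0 to 1000 kg/hr range, with the estimated (0, 0) and (100, 85) points added), could look like this:

import numpy as np

# X = flow (PV) in % of full scale, Y = controller output (CO) in %
pv_pct = [0.0, 12.0, 28.0, 53.0, 100.0]
co_pct = [0.0, 22.0, 39.0, 63.0, 85.0]

def characterizer(desired_flow_pct):
    """Map the PID output (interpreted as desired flow in %) to the valve
    signal needed to obtain roughly that flow (piecewise-linear interpolation)."""
    return np.interp(desired_flow_pct, pv_pct, co_pct)

print(characterizer(50.0))   # valve signal for about 50% of full-scale flow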

After implementing a linearizer in the DCS or PLC, you can test its accuracy by checking whether the
controller output and flow measurement are roughly at the same percentage of full scale. For example:
20% and 50% controller output should result in roughly 20% and 50% flow rate. You should retune the
controller after implementing the linearizer, because the linearizer has likely changed the process gain.
Although this discussion mentioned only control valves, the same applies to other final control elements,
like vanes, dampers, feeders, etc.

Stay tuned!

Jacques Smuts
Author of the book Process Control for Practitioners

Posted in 5. Control Valves, 7. Control Strategies

3 Responses to Control Valve Linearization

Jack:
April 29, 2012 at 1:58 pm
We currently have a non-linear characterization on one of our boiler air dampers. I didn't quite
understand the purpose of the characterization until I read this article. It was a good thing I did
too, because I almost removed the characterization. Thanks.

siby:
March 8, 2013 at 7:01 am
I have read about some situations where the control valves are deliberately chosen to have
non-linear behavior like an equal-percentage characteristic because the process it is controlling
is also non-linear. Will introducing a linearizer then adversely affect the loop performance?

Jacques:
March 8, 2013 at 7:19 am
Siby, if the nonlinear flow characteristic of the control valve cancels out the nonlinear
characteristic of the process, the combination of the two should be linear and no
characterization is required. For example, the steam flow control valve to a heat exchanger is
likely better being equal percentage than linear.

Drum Level Control


July 3, 2010

A very common control problem, and one used in many examples elsewhere, is that of controlling the
level in a boiler drum. Many industrial plants have boilers for generating process steam, and of course
boilers are central to thermal power generation.
The boiler drum is where water and steam are separated. Controlling its level is critical - if the level
becomes too low, the boiler can run dry, resulting in mechanical damage to the drum and boiler piping. If
the level becomes too high, water can be carried over into the steam pipework, possibly damaging
downstream equipment.
The design of the boiler drum level control strategy is normally described as single-element, two-element, or three-element control. This article explains the three designs.
Single-element Control (Feedback Control)
One or more boiler feedwater pumps push water through one or more feedwater control valves into the
boiler drum. The water level in the drum is measured with a pressure and temperature-compensated
level transmitter. The drum level controller compares the drum level measurement to the set point and
modulates the feedwater control valves to keep the water level in the drum as close to set point as
possible. Variable-speed boiler feed pumps are sometimes used to control the level instead of valves.
The simple feedback control design described above is called single-element control, because it uses
only a single feedback element for control - the drum level measurement.

Drum Level Controller Tuning


1. Integrating Process
From a controls point-of-view, the boiler drum is an integrating process. This means that any mismatch
between inflow (water) and outflow (steam) will cause a continuous change in the drum level.
Integrating loops are difficult to tune, and can easily become unstable if the controller's integral time is
set too short (i.e. high integral gain). The process-imposed requirement for a long integral time makes
the loop slow to recover from disturbances to the drum level.
2. Inverse Response
To further complicate matters, the boiler drum level is notorious for its inverse response. If the drum
level is low, and more feedwater is added to increase it, the drum level tends to decrease first before
increasing. This is because the cooler feedwater causes some of the steam in the evaporator to
condense, causing the volume of water/steam to decrease, and hence the drop in drum level.
Conventional feedback control has difficulty in coping with this inverse response. A control loop using
high controller gain and derivative action may work well in other level applications, but it will quickly go
unstable on a boiler drum level. Stability is best achieved by using a low controller gain, long integral
time, and no derivative. However, these settings make the controller's response very sluggish and not
suitable for controlling a process as critical as boiler drum level.

Major Disturbances
Drum level is affected by changes in feedwater and steam flow rate. But because of the very slow
response of the feedback control loop, changes in feed flow or steam flow can cause very large
deviations in boiler drum level. Single-element drum level control can work well only if the residence
time of the drum is very large to accommodate the large deviations, but this is seldom the case
especially in the power industry. For this reason, the control strategy is normally expanded to also
include feedwater and steam flow.
Two-element Control (Cascade Control)
Many boilers have two or three feed pumps that will be switched on or off depending on boiler load. If a
feed pump is started up or shut down, the total feedwater flow rate changes. This causes a deviation in
drum level, upon which the drum level controller will act and change the feedwater control valve position
to compensate. As explained above, the level controller's response is likely very slow, so switching feed
pumps on and off can result in large deviations in drum level.
A faster control action is needed for dealing with changes in feedwater flow rate. This faster action is
obtained by controlling the feedwater flow rate itself, in addition to the drum level.
To control both drum level and feedwater flow rate, cascade control is used. The drum level controller
becomes the primary controller and its output drives the set point of the feedwater flow controller, the
secondary control loop. This arrangement is also called two-element control, because both drum level
and feedwater flow rate are measured and used for control.

Two-Element Drum Level Control

Three-element Control (Cascade + Feedforward Control)


Similar to feed flow, changes in steam flow can also cause large deviations in drum level, and could
possibly trip the boiler. Changes in steam flow rate are measurable and this measurement can be used
to improve level control very successfully by using a feedforward control strategy.
For the feedforward control strategy, steam flow rate is measured and used as the set point of the
feedwater flow controller. In this way the feedwater flow rate is adjusted to match the steam flow.
Changes in steam flow rate will almost immediately be counteracted by similar changes in feedwater
flow rate. To ensure that deviations in drum level are also used for control, the output of the drum level
controller is added to the feedforward from steam flow.
The combination of drum level measurement, steam flow measurement, and feed flow measurement to
control boiler drum level is called three-element control.

Three-Element Drum Level Control
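One common way to combine these signals is sketched below: the steam flow measurement provides the feedforward and the drum level controller output acts as a trim on the feedwater flow set point. This is only a sketch under assumed percent scaling; the "50% output means no trim" convention and the signal ranges are illustrative and will differ between plants.

def feedwater_flow_setpoint(steam_flow_pct, level_co_pct):
    """Three-element sketch: feedforward from steam flow plus the drum level
    controller output as a trim. All signals in % of the feedwater flow range;
    a level controller output of 50% is assumed to mean 'no trim'."""
    trim = level_co_pct - 50.0
    return steam_flow_pct + trim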

Low-load Conditions
Although three-element drum level control is superior to single- or two-element control, it is normally not
used at low boiler loads. The reason is that steam flow measurement can be very inaccurate at low
rates of steam flow. Once the boiler load is high enough for steam flow to be measured accurately, the
feedforward must be activated bumplessly.
For help with tuning your drum level controller, or for process control or boiler training, please contact
me at OptiControls.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 7. Control Strategies

7 Responses to Drum Level Control

Brent:
July 1, 2012 at 11:10 pm
If you're trying to design a feedforward for an integrating process, such as a boiler steam
drum level, how do you set up the lead-lag? In your book the FF lead should be the time
constant of the process response and the FF lag should be the time constant of the
disturbance response. But there is no time constant in an integrating process. My guess
would be to use the integration rates instead of the time constant. Is this correct? Thanks.

Jacques:
July 10, 2012 at 4:51 pm
Brent,
The integration rates you refer to are equivalent to the process gain of an integrating process.
So if there were to be a difference between the rate at which the drum level changes after a
change in steam flow versus feedwater flow, you would compensate for that with the
feedforward's gain. Generally, if your steam and feed flows are measured accurately, the
integration rates will be the same, so the FF gain will be 1.0.
You ask a good question about tuning the lead-lag. Generally, on an integrating process
(excluding drum level), any lags in the process will show up as dead time because of the way
we model the process. So you will set your lead equal to the dead time after a change in
control action, and your lag equal to the dead time after a disturbance.
However, for drum level it is not so straightforward because of the drum level's inverse
response. From my experience, people normally don't bother with a lead-lag on drum level
control. Sam Dukelow suggested using a lag on the steam flow signal to compensate for the
inverse response. I have not tried it out, so I can't speak to its effectiveness. If you tune boiler
controls and don't have Sam Dukelow's book, I highly recommend getting it.

ajit laware:
August 11, 2012 at 11:50 pm
What are the major limitations of a PID controller for boiler-drum level?
Can we use a robust controller like H2, H-infinity, or a sliding-mode controller?
Is this research beneficial for industry? From what point of view?

Jacques:
August 12, 2012 at 8:15 am
Ajit,
1. The limitations of PID for drum-level control result from the drum's inverse response. High
controller gains (that can normally be used on level control of non-surge tanks) cannot be
used on drum level because the loop goes unstable very easily. The same goes for using
derivative control mode.
2. You could probably get slightly better response with a properly-designed model-based
controller, provided that the inverse response is modeled accurately. I have not seen this used
in practice. The standard design is to use a feedforward from steam flow because it gives a
response vastly superior to the capabilities of any feedback control.
3. I don't think improving feedback control for drum level will be widely adopted in industry
because: a) a feedforward will still be the primary control action, and b) industry is reluctant to use
advanced control technologies where the benefits are marginal (especially the power industry).

Benny:
September 14, 2012 at 1:17 pm
I worked at a power plant where the drum level used the classical three-element control system
with circa-1950s pneumatic controls. The controllers were completely worn out due to their
age (40 years in service). I put in a proposal to have them upgraded to digital controls. Two
units were retrofitted. Unit 2 went into service with very little fuss. Unit 1, however, made me
pull my hair out. Through luck I found out that the non-return valve (NRV) between the
economizer inlet and the feed pumps was defective (not closing). After it was repaired, the
loop worked flawlessly; it even kept the drum level close to set point after 3 coal feeders out of
5 tripped during a test. In a nutshell: don't only look at your transmitters, controllers, and final
control elements - keep an eye on anything in those pipes, such as NRVs.

DOST MUHAMMAD:
September 22, 2012 at 7:31 am
Why, at boiler start-up, is the drum level controlled by single-element control, and at
which stage or load should it be changed over to three-element control?

Jacques:
September 22, 2012 at 6:50 pm
Muhammad, Flow measurements for feedwater and steam get less accurate as the flow rates
decrease. Therefore, only single-element (drum level) control is used under low flow
conditions. You can switch to three-element control when the flow measurements become
more accurate, typically around 25% of maximum flow.

Improving pH Control
January 29, 2012

Conventional pH Control
The control of pH is at best very difficult with a conventional PID control loop (Figure 1). The challenge
results from variable product flow rates and the highly nonlinear pH titration curve.

Figure 1. Conventional pH control - not recommended.

The pH of a liquid stream (let's call it the product) is controlled by adding a flow of acid or base (called
the reactant). To achieve the required product pH, a certain (but often unknown) ratio of reactant is

needed. And here is the first key to pH control: we need to manipulate the ratio of reactant flow to
product flow.
Ratio pH Control
We should not simply manipulate the reactant flow independently of the product flow (as in Figure 1),
because every time the product flow rate changes, the pH will first go off spec and then the pH controller
will change the reactant flow to return the pH to its set point. With ratio control (as in Figure 2), if the
product flow rate changes, the reactant flow rate is changed immediately to maintain a constant ratio
between it and the product flow rate. The pH controller then manipulates this ratio to control the pH.

Figure 2. Ratio pH control - much better.

Advanced pH Control
As you probably know, pH control is a very nonlinear process: the gain of the process changes with
pH. The process gain is very high around the equivalence point, and much lower elsewhere (Figure 3).

Figure 3. pH titration curve.

Because the process gain changes so significantly, we should dynamically adjust the controller gain to
compensate. This is done by implementing gain scheduling to adjust the controller gain based on pH.
To design the gain scheduler we should determine the process gain at a few points along the titration
curve by changing the ratio at different levels of pH and determining the process gain. Process gain =
(change in pH in % of full scale) / (change in Ratio in % of full scale). We should also measure the
process dynamics (dead time and time constant) at each point (although these will likely be quite
constant throughout).
Then we calculate the controller gain settings that the controller should use at various pH levels, and
implement these in a controller gain scheduler (Figure 4). Obviously we should also calculate the
integral setting and, if used, the derivative setting, but these will remain constant and can be set directly
in the controller.

Figure 4. Advanced pH control.

The gain scheduler can be implemented with a standard f(x) curve or characterizer block provided in
modern control systems. The gain scheduler's input should come from the measured pH and its output
should set the controller's gain accordingly. And voila! We have good pH control.
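For illustration, here is a minimal Python sketch of such a gain scheduler: the process gains identified along the titration curve are interpolated at the measured pH (emulating the f(x)/characterizer block), and the controller gain is set to keep the loop gain constant. All numbers and the target loop gain below are assumed values, not results from any particular process.

import numpy as np

# Process gains estimated from step tests at a few pH levels
# (change in pH, % of full scale / change in ratio, % of full scale) - illustrative only.
ph_points     = np.array([2.0, 4.0, 6.0, 7.0, 8.0, 10.0, 12.0])
process_gains = np.array([0.3, 0.6, 2.5, 8.0, 2.5, 0.6, 0.3])

TARGET_LOOP_GAIN = 0.5   # assumed design choice: controller gain * process gain

def scheduled_controller_gain(measured_ph):
    """Interpolate the process gain at the measured pH and return the controller
    gain that keeps the loop gain approximately constant."""
    kp = np.interp(measured_ph, ph_points, process_gains)
    return TARGET_LOOP_GAIN / kp

print(scheduled_controller_gain(7.0))   # small controller gain near the equivalence point
print(scheduled_controller_gain(3.0))   # larger controller gain on the flat part of the curve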

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Ratio Control
August 31, 2011

While I was recently helping a chemical company optimize several of their critical control loops, I noticed
they had a ratio controller in manual control mode. I asked about the loop and they told me it had never
worked in automatic control.
I occasionally come across loops that have never worked in automatic control. Sometimes it's a tuning
problem, sometimes it's an issue with the measurement or control valve; one time the control direction
was wrong (go figure). However, when it is a ratio control loop that's not working, most often the
problem lies with the design of the control strategy.

Ratio Control Explained


Process design and operations often call for keeping a certain ratio between two or more flow rates. One of the
flows in a ratio-control scenario, sometimes called the master flow or wild flow, is set according to an
external objective like production rate. The ratio controller manipulates the other flow to maintain the
desired ratio between the two flows. The flow controlled by the ratio controller is called the controlled
flow. For example, when treating drinking water with chlorine, the water is the wild flow, and the chlorine
is the controlled flow.
There are two fundamentally different designs for ratio control. One of them is the correct design, the
other one does not work in practice.

The Intuitive but Incorrect Design


In this design, the ratio is calculated by dividing the one flow by the other. This calculated ratio is then
used as the process variable for a ratio controller. This design creates a highly nonlinear control loop of
which the process gain is inversely proportional to the flow rate in the denominator. Ratio controllers like

this are frequently dead (virtually no control) or unstable. This design should be avoided at all times, and
if you have one of these, correct the design!

The Correct Design


The wild flow should be multiplied by the desired ratio to calculate a set point for the controlled flow. A
standard flow controller then controls the flow according to this set point. If required, you can divide the
controlled flow by the wild flow to display the actual ratio to the operator, but don't use it to control the
ratio.
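As a small illustration of the correct design, here is a Python sketch: the wild flow multiplied by the desired ratio becomes the set point of an ordinary flow controller, and any division of flows is used only for operator display. The function names and the chlorine-dosing numbers are illustrative assumptions.

def controlled_flow_setpoint(wild_flow, desired_ratio):
    """Correct design: wild flow times the desired ratio becomes the set point
    of a standard flow controller on the controlled stream."""
    return wild_flow * desired_ratio

def actual_ratio_for_display(controlled_flow, wild_flow):
    """Optional operator display only; never use this division as the PV of a controller."""
    return controlled_flow / wild_flow if wild_flow > 1e-6 else 0.0

# Hypothetical chlorine dosing example (units are illustrative):
water_flow = 1200.0    # m3/h, the wild flow
ratio_sp   = 0.0025    # kg chlorine per m3 of water
chlorine_flow_sp = controlled_flow_setpoint(water_flow, ratio_sp)   # -> 3.0 kg/h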
The figure below shows the two ratio-control designs. Ratio control should not be based on a division of
flow rates as shown on the left, but rather on calculating a flow set point, as shown on the right.

Two designs for ratio control. Left: The incorrect design. Right: The correct design.

For example, the correct design for a ratio controller of fuel and air is shown below. Fuel is the wild flow
and air is the controlled flow.

Controlling the ratio of combustion air to fuel flow.

I have seen several ratio controllers running in manual control mode because of tuning problems. After
further investigation it often turns out that the incorrect design is in use. Now that you know the
difference, you know what to look for and how to correct it.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 7. Control Strategies, 8. Case Studies

4 Responses to Ratio Control

Teo:
September 19, 2011 at 9:40 pm
Hi, your article is great. I am able to understand it even though I don't have a chemical engineering
background (I am a mechatronics engineering student).

lndas:

October 18, 2011 at 11:13 am


Thank you for the article. I now have a fair idea about ratio control.

Matt:
November 10, 2013 at 1:45 am
Hello, thanks for this website.
Question: what if you have a second control valve on the wild flow line? Would it have a
dedicated controller, or could it be driven by a function of the existing controller?

Jacques:
November 10, 2013 at 2:42 pm
Matt, it depends on the control objective. But you could potentially have two flow control loops
of which you change both setpoints based on something else, such as furnace temperature or
boiler pressure. The ratio between the two setpoints can be constant or it could be adjusted
by a third controller such as O2 in the flue gas.

Steam Temperature Control


September 8, 2010

Steam temperature is one of the most challenging control loops in a power plant boiler because it is
highly nonlinear and has a long dead time and time lag. Adding to the challenge, steam temperature is
affected by boiler load, rate of change of boiler load, air flow rate, the combination of burners in service,
and the amount of soot on the boiler tubes.
After separation from the boiler water in the drum, the steam is superheated to improve the thermal
efficiency of the boiler-turbine unit. Modern boilers raise the steam temperature to around 1000°F (538°C),
which approaches the creep (slow deformation) point of the steel making up the superheater tubing.
Steam temperatures above this level, even for brief periods of time, can shorten the usable life of the
boiler. Keeping steam temperature constant is also important for minimizing thermal stresses on the
boiler and turbine.
Steam temperature is normally controlled by spraying water into the steam between the first and
second-stage superheater to cool it down. Water injection is done in a device called an attemperator or
desuperheater. The spray water comes from either an intermediate stage of the boiler feedwater pump
(for reheater spray) or from the pump discharge (for superheater spray). Other methods of steam
temperature control include flue gas recirculation, flue gas bypass, and tilting the angle at which the
burners fire into the furnace. This discussion will focus on steam temperature control through
attemperation. The designs discussed here will apply to the reheater and superheater, but only the
superheater will be mentioned for simplicity.
BASIC FEEDBACK CONTROL
The simplest method for controlling steam temperature is by measuring the steam temperature at the
point it exits the boiler, and changing the spray water valve position to correct deviations from the steam
temperature set point (Figure 1). This control loop should be tuned for the fastest possible response
without overshoot, but even then the loop will respond relatively slowly due to the long dead time and
time lag of the superheater.

Figure 1. Simple Steam Temperature Control

CASCADED STEAM TEMPERATURE CONTROL


Because of the slow response of the main steam temperature control loop, improved disturbance
rejection can be achieved by implementing a secondary (inner) control loop at the desuperheater. This
loop measures the desuperheater outlet temperature and manipulates the control valve position to
match the desuperheater outlet temperature to its set point coming from the main steam temperature
controller (Figure 2). This arrangement is called cascade control.

Figure 2. Cascaded Steam Temperature Controls

The spray water comes from upstream of the feedwater control valves, and changes in feedwater
control valve position will cause changes in spray water pressure, and therefore disturb the spray water
flow rate. The desuperheater outlet temperature control loop will provide a gradual recovery when this
happens. If the spray water flow rate to the attemperator is measured, a flow control loop can be
implemented as a tertiary inner loop to provide very fast disturbance rejection. However, in many cases
spray water flow rate is not measured at the individual attemperators and this flow loop cannot be
implemented.
GAIN SCHEDULING
The process dead time of the superheater increases with a decrease in boiler load because of the
slower rate of steam flow at lower loads. This will have a negative impact on the stability of the main
steam temperature control loop unless gain scheduling is implemented. Step tests need to be done at
low, medium, and high boiler loads, and optimal controller settings calculated at each load level. A gain
scheduler should be implemented to adjust the controller settings according to unit load. Because of the
changing dead time and lag of the superheater, the integral and derivative times must be scheduled in
addition to the controller gain.
The gain of the desuperheater outlet temperature loop will be affected greatly by steam flow rate.
Changes in steam flow rate will affect the amount of cooling obtained from a given spray water flow rate.
Less cooling will occur at high steam flow rates. In addition, at high loads the pressure differential
between the feedwater pump discharge and steam pressure will be lower, reducing the spray flow rate
for a given spray valve position (assuming the absence of a flow control loop on the desuperheater
spray flow). To compensate for this nonlinear behavior, controller gain scheduling should be
implemented on the desuperheater outlet temperature loop too. Figure 3 shows the basic design of the
steam temperature controller gain scheduler (the cascaded controller is not shown for clarity). Similar to
tuning the main steam temperature control loop, step tests must be done at low, medium, and high
boiler loads to design the gain scheduler.

Figure 3. Steam Temperature Controller Gain Scheduling
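For illustration only, here is a minimal sketch of such a load-based scheduler. Because the superheater dead time and lag change with load, the gain, integral, and derivative settings are all interpolated between the tested load points. The breakpoints and settings below are assumed values, not results from any particular boiler.

import numpy as np

# Controller settings identified from step tests at three boiler loads (illustrative values).
load_pts = np.array([30.0, 60.0, 100.0])     # unit load, % of maximum
kc_pts   = np.array([0.8, 1.5, 2.5])         # controller gain
ti_pts   = np.array([300.0, 180.0, 120.0])   # integral time, seconds
td_pts   = np.array([60.0, 40.0, 25.0])      # derivative time, seconds

def scheduled_pid_settings(unit_load):
    """Interpolate gain, integral, and derivative settings between the tested loads."""
    return (np.interp(unit_load, load_pts, kc_pts),
            np.interp(unit_load, load_pts, ti_pts),
            np.interp(unit_load, load_pts, td_pts))

kc, ti, td = scheduled_pid_settings(75.0)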

FEEDFORWARD CONTROL
During boiler load ramps in turbine-following mode, the firing rate is changed first, followed by a change
in steam flow rate a while later. With the increase in steam flow rate lagging behind fuel flow rate, the
additional heat in the furnace can lead to large deviations in steam temperature. To compensate for this,
a feedforward control signal from the boiler master to the steam temperature controller can be
implemented.
The feedforward can use the rate of change in fuel flow or one of several other derived measurements
to bias the steam temperature controller's output. In essence, when boiler load is increasing, the spray
water flow rate will be increased to counter the excess heat being transferred to the steam, and vice
versa. The feedforward can be calibrated by measuring the extent of steam temperature deviation
during load ramps.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 7. Control Strategies

14 Responses to Steam Temperature Control

Karthi:
May 8, 2011 at 7:46 pm
Excellent effort. The lucid description makes it a great read. You could touch upon the integral windup
problem that is frequent in STC. I would like to see you write about boiler-turbine coordinated
control.
Regards
Karthi

Jacques:
May 8, 2011 at 8:30 pm
Karthi,
You bring up a good point. When the desuperheater outlet approaches saturation
temperature, the inner loop should be blocked from adding more spray. The outer loop's
controller should use external reset feedback to prevent integral windup. If this is not possible,
its integral term should be blocked under any one of these conditions:
- When the inner loop's controller output is at 0% or 100% (this normally happens
automatically)
- When the inner loop's output is blocked because of proximity to saturation, as described in
Zeke's note below.
I have placed boiler-turbine coordinated control on my to-do list for the blog.

Thanks for your inputs.


Stay tuned!
Jacques

Allan Zadiraka:
August 3, 2011 at 9:08 pm
Jacques
In actual practice, you cannot permit the desuperheater outlet temperature to reach saturation
temperature, since you have no idea of the quality of the fluid other than that it could be all
saturated liquid, all saturated vapor, or some mixture of the two states. Unfortunately, turbines
and superheater tubes do not like water. The spray flow must be limited so the outlet temperature
stays above the saturation temperature for the pressure, typically by 20 degrees F. This delta is needed to
account for thermocouple accuracy and drift as well as the thermocouple/thermowell time
response. In the few cases where it is necessary to spray to saturation, a simple temperature
based limit cannot be used.
zeke

Imran Ahmed:
August 10, 2011 at 11:31 pm
Hi! This is very informative. I am facing a problem at a 210 MW steam turbine with a 640
t/hr Babcock boiler.
Recently an Emerson Ovation DCS was installed as a retrofit job. The main steam temperature
cannot be increased beyond 480°C, and there is also an air restriction: the air heaters are clear
and the FD fan dampers are open almost 100%, but an air deficiency is still there.
Can you give any particular reason on the control side for the low main steam temperature?
regards,
Imran

Jacques:
August 11, 2011 at 9:51 am
Imran,
From the information you gave me it is not possible to tell exactly what the problem is.
Are your measurements and controller outputs ranged exactly the same as they were before
the retrofit?
Jacques

SAJEESH:
September 18, 2011 at 11:25 am
Hi,
I didn't get the basics of cascade control theory. I have to control the main steam temperature
around 480 degrees. The output of the PID is limited to 0 to 100, corresponding to 4 to 20 mA for the control
valve. So if I give this value (0 to 100) to the inner loop as the set point for the desuperheater, how does the
cascade controller work? Please provide me with more details.

Jacques:
September 18, 2011 at 4:19 pm

Sajeesh,
The temperature controller's output has to be rescaled from its standard 0-100% to match the
range of the spray flow controller's set point (or process variable).
Some controllers (e.g. Invensys Fox I/A) allow you to rescale the output directly, while other
controllers (e.g. Honeywell Experion) do the scaling for you automatically. Yet other systems
(e.g. Emerson Ovation) require you to place a scaling block between the two controllers.
Jacques
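For illustration, a minimal sketch of such a scaling block in Python; the desuperheater outlet temperature range used in the example is an assumed value.

def rescale_output(co_percent, sp_low, sp_high):
    """Map the outer (temperature) controller's 0-100% output onto the
    engineering range of the inner loop's set point."""
    return sp_low + (co_percent / 100.0) * (sp_high - sp_low)

# Example: 40% output mapped onto an assumed 300-450 degC set point range.
inner_sp = rescale_output(40.0, 300.0, 450.0)   # -> 360.0 degC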

Ravi Mishra:
January 24, 2013 at 3:42 am
Dear Sir,
During turbine-follow mode operation it seems that the main steam temperature has
large deviations during ramp-up and ramp-down, even with a feedforward loop
implemented (from steam flow/BLI), because the firing rate is changed first, followed by the
change in steam flow rate.
So how can we generate the feedforward signal (logic) from the fuel flow or boiler master
demand to compensate for this deviation? Can you give the logic which can be implemented to
reduce this problem?

Jacques:
January 24, 2013 at 11:21 pm
Ravi, there are several designs for this feedforward of which some seem to work better than
others depending on the particular situation, boiler design, fuel type, etc. Some use fuel flow,
or its rate of change, some use air flow, or its rate of change. Others use a combination of
steam and fuel flow that alters spray flow based on the relative difference between fuel and
steam flows. I recommend that you look at Sam Dukelow's book, The Control of Boilers. It is
an excellent source of technical information on boiler controls.
- Jacques

Siby:
March 20, 2013 at 12:23 am
This is slightly off topic, but still a relevant question for control engineers at a time when
advanced process control schemes are becoming more prevalent. Is the use of a model
predictive controller to provide set points to the spray control valves for steam temperature
control a cost-effective approach?

Jacques:
March 20, 2013 at 8:37 pm
I did a comprehensive study for the Electric Power Research Institute (EPRI) on the adoption
of Advanced Process Control / Model-Predictive Control (APC/MPC) in power plants.
Compared to the refining and chemical industries, the power industry lags far behind in using
APC/MPC. On the flip side, the power industry is the forerunner in utilizing complex DCS-based control strategies. APC/MPC will do a fine job of controlling steam temperature,
especially if you control the burners individually, instead of just one common fuel demand.
Excess air, spraywater, burner tilts/recirc air/bypass dampers should all be used
simultaneously as control elements. ABB, Neuco, Invensys, and probably others, have
reported successes with APC on boiler plants. The power industry lacks the skills to
implement and maintain APC, and the cost benefits are just not there in many cases (except
perhaps for environmental controls).
- Jacques

Siby:
March 22, 2013 at 3:31 am
I read the article and found it to give an objective assessment on APC in power plants. Nicely
highlights the challenges involved in making APCs more acceptable.
Siby

H.B.R:
November 11, 2013 at 11:53 pm
Hi, I read your article with great interest.
I would like to ask you whether it's possible to control steam temperature using a characteristic curve.
In the plant where I worked, the main and reheat steam temperature control loops are cascade loops
without a feedforward demand. At first, the main and reheat steam temperatures swung and
this affected MW and steam pressure. To avoid that, we applied a characteristic curve (e.g. the f(x)
function in Ovation) that opens the TCV more than the PID-manipulated position to compensate for
the dead time (time delay). Actually, it was my boss's idea, and I'm curious whether it is proper to
use a characteristic curve for steam temperature control.

Jacques:
November 12, 2013 at 6:51 am
H.B.R. Characterizers are used to compensate for some type of nonlinear process behavior,
or to obtain a nonlinear control action where one is needed. A characterizer can be used very
effectively in feedforward control where the relationship between the disturbance and the
required compensating control action is nonlinear. It sounds like this is what your boss did,
even though the design might have been different from normal. If there is a strong relationship
between (e.g.) fuel input and spray valve position required to maintain reheat steam
temperature, using a feedforward with a characterizer would be appropriate. However, you
will likely also require some degree of feedback control to compensate for other variables
such as different burners in use, boiler sooting, etc.

8. Case Studies
o A pH Control Success Story
o An Oscillating Level Control Loop
o Butterfly Valves and Control Performance
o Caster Level Control Improvement
o Flow Control Conundrum
o How to Fill a Container
o Inverse Response
o Level Versus Flow Control
o Pressure and Flow Control Loop Interaction
o Process Oscillations from Afar
o Ratio Control
o Tank Level Tuning Complications

A pH Control Success Story

March 30, 2013

I recently helped a major chemical company improve the controls of one of their processes that was
plagued by a persistent oscillation. The process consisted of a large process loop and the reaction was
highly dependent on the pH in the loop. The pH oscillated, driving the oscillation in the entire process. If
the pH control could be stabilized, the entire process would stop oscillating.
The pH was controlled with the addition of ammonia. The pH controller directly manipulated the
ammonia flow control valve (Figure 1). The engineers and operators told me that the ammonia pressure
fluctuated and that this affected the ammonia flow rate and consequently affected the pH. They
suspected that the pH control loop and the ammonia-production boiler may even be oscillating against
each other.

Figure 1. pH control design, as found.

To reduce the effect of ammonia pressure on the flow rate, we implemented a flow controller to keep
constant the ammonia flow rate, regardless of the upstream pressure. This flow controller would get its
setpoint from the pH controller classical cascade control. Then, because we wanted to have very tight
pH control, we needed to compensate for the change in process gain induced by the nonlinear pH
titration curve. We obtained a titration curve from the lab, converted it to a process gain curve (i.e. the
change in pH / change in ammonia flow), and normalized the curve around the pH setpoint of 3.5
(Figure 2).

Figure 2. Normalized pH gain curve.

We configured the normalized pH gain curve in a characterizer function block that used pH as its input,
and produced the estimated process gain as its output. Then we implemented a parameter for adjusting
the gain, divided this gain by the output of the characterizer, and used the result to schedule the gain of
the pH controller (Figure 3). Since the flow rate in Loop 1 remained relatively constant, we did not have
to use ratio control.

Figure 3. Advanced pH control with a cascaded flow control loop and gain scheduling.

With the flow controller in place, we did small step-tests around the pH control point of 3.5 to establish
the process dynamic characteristics. This was a major challenge, since the pH varied wildly with the
controller in manual. (I'll write an article about step-testing volatile/turbulent processes sometime,
because I often have to deal with this problem). Once we had good estimates for the process gain, dead
time, and time constant, we tuned the controller for a fast response.
The results we obtained were very good. Despite the substantial variance in the ammonia pressure, the
pH control loop remained stable and controlled the pH very close to its setpoint.

Figure 4. Results of advanced pH Control.

After this, we tuned the ammonia pressure controller and several other loops. This further stabilized the
process and reduced the pH variance even more. But it all started with our improvements on the pH
loop.
Stay tuned!
Jacques F. Smuts
Principal Consultant of OptiControls, and author of Process Control for Practitioners

An Oscillating Level Control Loop


August 3, 2011

When I do onsite control loop optimization services I often see level controllers oscillating. Most often
they oscillate because of one or more of the following reasons:

The control valve has a dead band. (Yes, level loops with dead band oscillate continuously if
you are using a PI or PID controller.)

The control valve has stiction.

The integral time is set too short for the amount of controller gain being used.

However, these are not the only problems and I have often been amazed at the actual cause of
oscillations.
So to keep me from guessing, I systematically analyze a loop for problems before I tune the controller. I
always try to follow the same basic sequence of tests, and then delve deeper into any problems I notice.
The sequence of tests:
1. See how the loop performs in automatic control under normal operating conditions.
2. Do a set point change (this is very helpful for various reasons that I'll write about in the future).
3. Place the controller in manual.
4. Do various valve performance tests (these can be quite challenging on a level loop).
5. Assuming no insurmountable problems were found, do step-tests for tuning (if you don't have
enough data already).
6. Tune the controller and repeat steps 2 and 1 to ensure the loop meets its performance
objectives.

Some (non-readers of this blog) may try to address all control problems with tuning. But the simple
steps listed above have served me very well over the years, and I often smile to think that someone
could be wasting hours on fruitless tuning when a loop really has other problems.

Case in Point
A few weeks ago I was optimizing the control performance of loops on an oil platform in the Gulf of Mexico.
Quite early in the project I got to a level control problem on one of their separator vessels. Step 1 of my
test sequence revealed the oil level control loop was oscillating. The period of the oscillations was
slightly shorter than one minute.

Level loop oscillating.

Going on to Step 2, we made a set point change. I noticed the loop actually performed very well on the
change in set point (ignoring the oscillation). From that I concluded the problem was not the controller's
tuning.
I also noticed that the process variable took about two minutes to cross over its set point for the first
time and about six minutes to settle out at set point. This meant the response time of the loop was far
slower than the period of its oscillations. It would be impossible for stiction or dead band to cause the
loop to oscillate with a one-minute period if it takes the loop so much longer to reach set point. Although
I would later test for stiction and dead band, I basically ruled them out as causes of the oscillation.

Loop performed well on a set point change.

Step 3 calls for placing the controller in manual control mode. This provides a good test to see if the
oscillations are caused by something in the control loop. We placed the loop in manual, and the
oscillations continued. At this point I concluded that the oscillations were not caused by controller tuning,
stiction, or dead band.

Oscillations continued with controller in manual.

So what could it be? A quick inspection of the level control valve indicated that the valve was rock-solid
in holding its position with the controller in manual. The oscillations were not coming from the valve and
therefore they had to be coming from the process. We looked at time trends of the flow rate into the
separator vessel and the gas pressure inside the vessel, but these were not oscillating (at least not at
one-minute periods).
Then we found the cause. The vessel is a three-phase separator: gas, oil, and water. The oil floats on a
layer of water in the bottom of the separator. It was the oil-water interface level that was oscillating,
moving the oil level up and down with it. After some more investigation, we found the water level control
loop was operating virtually in on-off control mode. Only then could we focus on solving the real
problem.
We are all sometimes tempted to tweak controller settings without looking any further, but a systematic
approach to analyzing control loops and solving control problems really pays off.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 6. Loop Performance, Problems, and Diagnostics, 8. Case Studies

4 Responses to An Oscillating Level Control Loop

Mary:
September 26, 2011 at 11:08 am
How did you solve the problem?

Jacques:
September 26, 2011 at 8:25 pm
We did step tests and properly tuned the water level controller. That solved the problem.

Nhan:
September 25, 2012 at 1:56 am
Which tuning rule did you apply for the oil level control loop, level averaging or lambda?
Thanks

Jacques:
September 25, 2012 at 3:07 pm
Nhan, the tuning rule of choice should always depend on the application:
- For fast response, the Ziegler-Nichols rules for integrating processes work well, provided
you divide the controller gain by two, and multiply the integral time by two.
- For slow response, I recommend level-averaging (I still need to write an article about it).
In the case of the oil level, I proposed using level averaging to make maximum use of the
surge capacity of the separator, but the operations personnel wanted the oil to stay as close
to setpoint as possible (fast control). So we ended up using Z/N for tuning the oil level loop.
I think Lambda tuning for levels is good as an academic exercise, but I don't see the need for it when
tuning level controllers, and I have never used it for tuning levels. However, the Lambda tuning
rules certainly have a place with self-regulating processes.
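As a rough sketch of the arithmetic only (the exact rule statement varies between references), assuming the Ziegler-Nichols reaction-curve PI rule for an integrating process of Kc = 0.9 / (ki * td) and Ti = 3.33 * td, the modification mentioned above (halve the gain, double the integral time) could look like this; the numbers in the example are made up.

def zn_integrating_pi(ki, dead_time):
    """Assumed reaction-curve PI rule for an integrating process:
    ki is the integrator gain (%/s of PV per % of CO), dead_time is in seconds."""
    return 0.9 / (ki * dead_time), 3.33 * dead_time

def detuned_level_pi(ki, dead_time):
    """Apply the modification discussed above: halve the gain, double the integral time."""
    kc, ti = zn_integrating_pi(ki, dead_time)
    return kc / 2.0, ti * 2.0

kc, ti = detuned_level_pi(ki=0.002, dead_time=10.0)   # illustrative values only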

Butterfly Valves and Control Performance


February 23, 2012

Because butterfly valves cost less than real control valves like globe valves or characterized ball
valves, they are sometimes used in place of control valves to save money. This decision is often costly
in the long term because of the poor control performance resulting from butterfly valves.
Late last year I optimized several control loops at a mid-sized manufacturer of specialty
chemicals. Similar to most plants I have worked at, I found a number of control loops that were
oscillating. Many of them oscillated because of valve stiction, incorrect controller settings, or process
interactions. One of the loops, a distillation column level control loop, oscillated as a result of using a
butterfly valve as the final control element.

Figure 1. Oscillating level control loop.

To perform well, a PID control loop requires (among other things) that the process gain remain constant.
In other words, the process variable must change linearly with changes in controller output. A small
degree of nonlinearity can be tolerated, especially if we apply robust tuning methods, but if the process
gain changes by more than a factor of 2, we can expect control problems. And this is why a butterfly
valve makes a poor choice for a control valve: it has a highly nonlinear, S-shaped flow curve, as shown
in Figure 2.

Figure 2. Typical butterfly valve flow characteristic.

Figure 3 shows how the gain of a typical butterfly valve changes from less than 0.2 to almost 3 over the
span of the controller output. The process gain varies by a factor of 15! This large variation in process
gain makes it impossible to have consistently good control at all valve positions.

Figure 3. Typical butterfly valve gain.

At the chemical company the butterfly valve was used to control the bottom level of a distillation column.
The distillation column was the last one in a train of three columns, each with
a progressively smaller diameter. Moderate increases in feed rate to the first column easily caused high-level alarms when they propagated to the small final column. The level controller originally seemed to be
responding too slowly to handle these upsets, so the loop tuner increased the controller gain to achieve
fast response at high flow rates. However, at normal flow rates, where the process gain was 15 times
higher, the loop was unstable and oscillated continuously as shown in Figure 1.
The correct solution to this problem would have been to replace the butterfly valve with a control valve
that has a linear flow characteristic and then retune the control loop. However, this could only be done
during the plant's annual maintenance shutdown. In the meantime we installed a characterizer to
linearize the butterfly valve (Figure 4). The characterizer compensated for the butterfly valve's
nonlinearity and made the flow through the valve follow the controller output in a reasonably linear
fashion.

Figure 4. Level control loop with characterizer.
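For illustration, here is a minimal Python sketch of such a characterizer: the installed S-shaped flow curve is inverted by interpolation so that the controller output is treated as "desired percent flow" and converted to the valve position that delivers it. The curve values below are assumed, not the actual valve data from this project.

import numpy as np

# Assumed installed flow characteristic of the butterfly valve (from a datasheet or field test).
valve_position = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])   # % open
relative_flow  = np.array([0,  2,  5, 10, 20, 38, 60, 78, 90, 97, 100])   # % of max flow

def characterize(co_percent):
    """f(x) block between controller output and valve: interpret the controller
    output as desired % flow and return the valve position that delivers it,
    so flow follows the controller output roughly linearly."""
    return np.interp(co_percent, relative_flow, valve_position)

print(characterize(50.0))   # valve position needed for about 50% flow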

With the characterizer in place we retuned the controller. After this the oscillations stopped and the loop
performed much better than it did before. However, the control performance was still not as good as
what a linear control valve would have provided. The real solution to the problem remained replacing the
butterfly valve with a control valve, but this had to wait for the next maintenance shutdown.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 5. Control Valves, 6. Loop Performance, Problems, and Diagnostics, 7. Control Strategies,8.
Case Studies

3 Responses to Butterfly Valves and Control Performance

mohamed elsadig:
May 18, 2013 at 11:11 pm
Dear Jacques,
would you please explain whether the implementation of the characterizer is in software (a function
block) or in hardware?
thank you
mohamed

Jacques:
May 19, 2013 at 7:59 am
Mohamed, in this case we implemented the characterizer in the DCS using a function block.
You could also do the characterization in the valve positioner if the positioner supports it (most
digital valve positioners do). My preference is to do it in the control system because if the
positioner is replaced, the new positioner might be put in service without the characterizer.

H.B.R:
May 27, 2014 at 2:16 am
Dear, Mr. Jacques.
I'm an I&C engineer working for an EPC contractor. At the site where I worked last year, there
was exactly the same problem as described in Figure 4 of this article.
The process was distillate water level, and it was controlled by a butterfly-type control valve.
The process value hunted like Figure 1, but with a bigger magnitude. I think the problem
was with the actuator or positioner. When the PID output sent a demand signal to the
valve, the positioner feedback value followed the demand signal only after 1-2 seconds (i.e. there
was a dead time in the actuator). I couldn't determine whether it was caused by a sticky actuator or
a positioner problem because the plant was in commercial operation. (Now I'm working in
the head office and that problem still remains at the site.)
Can you imagine the state of the process? It oscillated with a really big magnitude (bigger
than Figure 1).
The only action I could take during commissioning was adjusting the PID values, due to the
tight commissioning schedule. After that, the oscillation magnitude became smaller and the
symptom of an alarm every minute disappeared, but a big oscillation still exists.
Thank you for your helpful information.

Caster Level Control Improvement


January 31, 2013

Recently, I helped a foundry with a level control problem in their casting process. A batch of metal is
melted in a furnace, after which the furnace is slowly tilted to pour the metal into a trough above the
caster. The level of molten metal in the caster trough must be kept constant so that the metal flows into
the mould at a constant rate. This is done by manipulating the tilt rate of the furnace. The foundry had
problems maintaining a constant level in the caster trough. An investigation of the system and
equipment revealed the problem.
System Description
The level of the molten metal in the casting trough is measured with a non-contact level sensor and sent
to a PID controller. The controller compares the level to its setpoint and manipulates the valve that
controls the furnace's tilt rate (Figure 1). If the level is below setpoint, the PID controller opens the valve
more and the furnace tilts faster. Likewise, if the level is above setpoint, the valve position is reduced.

Figure 1. Caster Trough Level Control (click to enlarge)

The Problem
The tuning parameters of any PID controller should be set according to the gain and dynamics of the
process it is controlling. A control loop can tolerate small changes in process characteristics, but large
changes will cause poor control, unless the control design somehow compensates for this. And herein
lay the problem: during the casting process the process gain changed vastly.
At the beginning of the cast, when the molten metal in the furnace has a large surface area, a 1° change
in tilt angle will pour a large quantity of metal into the caster trough. At the end of the cast, when the
furnace is tilted significantly and the molten metal has a small surface area, a 1° change in tilt angle will
pour only a small quantity of metal (Figure 2). This causes the process gain to change by a factor of
almost 10 during the casting process.

Figure 2. Origin of Process Nonlinearity

It is impossible to have good feedback control from a simple control loop if the process gain changes
this much. The loop performance will range from being close to instability (when the process gain is high
at low tilt angles early in the cast) to being very sluggish (when the process gain is low at high tilt angles
late in the cast). This is why the foundry had so much trouble with this control loop.
The Solution
The solution was to either use gain scheduling on the controller or to implement a linearizer between the
controller output and the process. Both of these would essentially keep the loop gain constant by either
changing the controller gain based on tilt angle, or by compensating for the nonlinear process gain at
different tilt angles. To simplify tuning, we chose the linearizer. The linearizer would multiply the
controller output by a certain factor that would be changed automatically, based on the furnace's tilt
position (Figure 3).

Figure 3. Level Control Improvement through Linearization

We used trigonometry to calculate the appropriate multiplier for different tilt positions and implemented
it in a function-generator block in the control system. After this the loop was linear and the control
performance vastly improved.
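For illustration only, here is a minimal Python sketch of such a function-generator block: a multiplier, derived offline from the furnace geometry, is interpolated at the current tilt angle and applied to the controller output. The tilt angles and multipliers below are assumed values, not the foundry's actual numbers.

import numpy as np

# Multiplier versus furnace tilt angle, as it might come out of the trigonometric
# analysis of the furnace geometry (illustrative values only).
tilt_deg   = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0])
multiplier = np.array([0.10, 0.15, 0.25, 0.45, 0.75, 1.0])

def linearized_output(pid_output, furnace_tilt):
    """Function-generator block between the PID and the tilt-rate valve:
    scale the controller output by a tilt-dependent factor so the effective
    loop gain stays roughly constant over the cast."""
    return pid_output * np.interp(furnace_tilt, tilt_deg, multiplier)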
When tuning control loops, it is always important to understand the process and its characteristics, and
how these characteristics might change in relation to the process conditions. A process control
practitioner should always look for the true reason of poor control. In many cases this goes far beyond
controller tuning.

Find out more about process nonlinearity, gain scheduling, controller tuning, and much more in my
book Process Control for Practitioners.
Stay tuned!

Flow Control Conundrum


October 15, 2011

I recently helped a control engineer review the control strategies used in his plant. The company was
experiencing some control problems and wanted a second opinion. While most of the controls were
designed correctly, we found a few areas requiring design changes. One of these areas had to do with
balancing flow rates through banks of filters in their water filtration plant.
System Description
Raw (river) water enters a clarifier to separate the heavy solids. The water flows over a weir and is
pumped through two filter banks in series. Each bank has three filters. The flow rates through the
individual filters have to be balanced for filter efficiency.

Filtration Plant P&ID - click to display full size

Control Design as Found


The clarifier had a level controller whose output became the set point of the six flow controllers.
Actually, the flow set point of each bank of flow controllers was divided by the number of filters in
service, but I left that detail out of the diagram to keep it simple.
Operational Problems
The site found that the second set of flow controllers would slowly increase their outputs until they were
all at 100% and the valves were wide open. Occasionally, when certain filters were out of service, the
first set of flow controllers would slowly increase their outputs until they were all at 100% and the valves
wide open, while the second set of valves controlled properly.
Equivalent Control Design and Problem
The multiple filters and flow paths obscured the design problem. If we simplify the hydraulic flow paths
and control design we end up with two flow controllers in series on the same pipe receiving the same
set point.

Equivalent (simplified) Control Design

And here is the problem: we effectively have only one process variable (the common flow rate), but we
have two controllers trying to control it. If one of the flow transmitters measures a slightly lower flow rate
than the other, its controller will open its valve to get more flow. Although this might work temporarily, the
flow rate in the other control loop will also have increased and that controller will close its control valve
to compensate, bringing the flow back down to the original level. Eventually the control loop reading low
will saturate with its controller output at 100% while the remaining control loop will do the flow control.
You cannot control the same process variable with two control loops at the same time!
Alternative Design
We had to have a common set point for the second set of flow controllers, but it had to be independent
of the set point of the first set. Yet the two sets of controllers must in some way be linked to work in
unison. Although there are other ways to solve this problem, here is the way we did it. We used the
average position of the valves in the first bank as a set point to a position controller that controls the
average position of the valves in the second bank.

Alternative Design (only simplified version shown)

The position controller generates a flow set point to the second set of flow controllers that is
independent of the flow set point of the first set of controllers. Basically, the valve position controller
does not care what the actual flow rate is, as long as the valves in the second bank are on average at
the same position as those of the first bank. In this way calibration errors have a negligible effect on the
system as a whole.
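For illustration, a minimal Python sketch of the valve-position controller described above: a slow integral-only controller nudges the flow set point of the second bank until its average valve position matches that of the first bank. The class, gain, and numbers are assumptions, and the sign convention depends on the actual installation (here, lowering the bank-2 flow set point closes the bank-2 valves).

def average(valve_positions):
    return sum(valve_positions) / len(valve_positions)

class PositionController:
    """Slow integral-only controller acting on the average valve positions."""
    def __init__(self, ki, flow_sp, dt=1.0):
        self.ki, self.flow_sp, self.dt = ki, flow_sp, dt

    def update(self, bank1_positions, bank2_positions):
        error = average(bank1_positions) - average(bank2_positions)   # % of valve travel
        self.flow_sp += self.ki * error * self.dt                     # flow units per % per second
        return self.flow_sp

# Illustrative numbers only:
pc = PositionController(ki=0.05, flow_sp=120.0)
new_flow_sp = pc.update([42.0, 45.0, 44.0], [50.0, 49.0, 52.0])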
After changing the design, we tuned the flow control loops, then the position controller, then the level
controller. After that, the system worked perfectly.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

How to Fill a Container


May 18, 2010

Recently, I helped a company with an interesting, but very commonly occurring problem. They fill
containers one-by-one and stop filling at an exact weight of material. However, the final weight always
overshoots its set point. A common analog to their setup is that of filling a tank or container with liquid to
an exact level.
The filling sequence begins by placing an empty container on a digital scale. The scale tares the weight
of the container so the fill can begin from zero weight. Then a PLC turns on a vibratory feeder to fill the
container. During the fill cycle the weight is continuously sent back to the PLC. The PLC shuts off the
vibratory feeder when the set point is reached. The filled container is then removed, and the next empty
one is positioned on the scale. Then the fill sequence is repeated. The set point is kept constant
between fills.

To improve the accuracy and repeatability of the fill, the company replaced the PLC program with a
continuous controller, i.e. a PID controller. This was a very good decision indeed. The newly
implemented PID controller worked very well, except that the weight always overshot the set point. And
that's when they contacted me.
I reviewed their setup and fill sequence with them and asked for the PID controller's settings. They were
using the P, I, and D modes of the controller. And there was the problem. If one uses the integral mode
in a controller filling a container, the process will always overshoot its set point.
To work properly and not have overshoot, the controller's output must go to zero when the process
reaches set point. But when using integral, it tries to correct the error throughout the duration of the fill
and accumulates a positive value. When the set point is reached, this accumulated value keeps the
controller output at an elevated level, even though the error is zero. Then the process overshoots the set
point, the error goes negative, the integral term begins to reverse the accumulation, and eventually the
controller output ends up back at zero.
The solution is to use no integral action in a controller on systems like filling a vessel or container. Also,
in this setup derivative is of no use, so that should be set to zero to simplify matters.
However, in the customer's particular setup, simply turning off the integral term would create a new
problem. The bumpless transfer feature of the controller will ignore the initial error when the controller is
placed in auto at the beginning of the fill sequence. The controller output will begin at zero (this is
desirable), but with no integral action, nothing would drive the controller's output.
The solution was to set the set point to zero before the fill begins. After the tare, the controller is placed
in automatic mode. Then the set point is changed to the desired weight. The controller output responds
to the set point change and the fill begins. As the container fills up, the error gets less and the controller
output eventually settles back to zero in a nice exponential fashion as the set point is reached. Voila, no
overshoot! Once the controller output is back at zero, it can be turned to manual control mode to be
ready for the next fill.
With proportional control only, the controller is also very easily tuned by adjusting the proportional gain
(the only active setting) to change the controller's response.
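For illustration, here is a small Python simulation of the proportional-only fill described above: with the set point already stepped to the target, the output is proportional to the remaining error, the feeder adds material in proportion to the output, and the weight approaches the set point exponentially with no overshoot. The gains and target weight are assumed values.

def simulate_fill(target_weight=10.0, kc=5.0, feeder_gain=0.01, dt=0.1, t_end=120.0):
    """P-only fill: output = Kc * (SP - weight); feeder_gain (kg/s per % output)
    and the other numbers are illustrative assumptions."""
    weight, t, sp = 0.0, 0.0, target_weight   # set point already stepped to the target
    while t < t_end:
        output = max(0.0, min(100.0, kc * (sp - weight)))   # % output, clamped to 0-100
        weight += feeder_gain * output * dt                  # the weight integrates the feed rate
        t += dt
    return weight

print(simulate_fill())   # approaches 10.0 kg from below, without overshoot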

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 8. Case Studies

2 Responses to How to Fill a Container

Avihu Hiram:
July 19, 2012 at 3:45 pm
I was searching your site for some material on signal filtering applications, with no success.
Do you have something about practical filter design?

Jacques:
July 22, 2012 at 7:52 am
Avihu, I describe the practical application and design of filters in my book Process Control for
Practitioners, available from Amazon.com http://www.amazon.com/gp/product/0983843813

Level Versus Flow Control


August 17, 2012

Earlier this year I did a controls optimization project on a process unit in a large chemical company.
They had a complex and interactive process, but their controls were mostly simple control loops with no
cascade, feedforward, gain scheduling, etc. I am a firm believer in keeping things simple, but no simpler
than they should be, which brings me to the story.
The unit had a huge tank to collect effluent from various parts of the process. The mixture was then
pumped back to the front end of the process for reprocessing. The tank had a discharge pump and a
valve for level control. Because of the chemical composition of the various effluents, small flakes of
solids formed in the liquid. The flakes would often plug up the control valve. As a result, the discharge
flow would get blocked and the level in the tank would slowly increase. The increasing level caused the
valve to slowly open, but the solids kept on blocking the valve as fast as it would open. This would carry
on until the high-level alarm on the tank triggered. Then the operator would stroke the valve open and
closed a few times to clear out the solids.
Before long, the plant discovered that the control valve would not get blocked if it moved often enough. In
an effort to make the system more reliable, the plant installed a flow control loop and gave the operator
a signal selector to switch between level control and flow control (Figure 1). Because the control valve
was much more active when driven by the flow controller, it seldom got blocked in flow-control mode.
Consequently, most operators then ran the system using flow-control mode.

Figure 1. Control strategy allowed operator to select between flow control and level control. (Click to enlarge)

You might have realized it already, but in flow-control mode there is no level control. So the tank is either
slowly filling up or draining down. This required the operator to check the level every few hours and
adjust the flow controller's set point based on the tank level at the time. If the level was too high, the
operator would increase the flow set point, and if the level got too low, the operator would decrease it
(Figure 2).

Figure 2. Control performance as found. Blue = tank level; Magenta = discharge flow rate; Green = level set
point.

Although they operated like this for years, the plant did put the problem on my list of loops to look at
because it remained a burden to the operators. The problem was that both level and flow control had
their advantages but also their disadvantages.
The solution was quite simple; it drew upon the advantages of both level and flow control; and it
eliminated the disadvantages of both modes. You guessed it: we implemented cascade control. We
simply let the output of the level controller drive the set point of the flow controller (Figure 3). In this way
the level controller kept the level in check, and adjusted the flow set point to do so.

Figure 3. Cascade control had all the benefits, but no disadvantages.

The flow controller was active enough to respond to deviations from set point and clear out blockages
before the level was affected. We also tuned both controllers for optimal performance. Compared to
what they had before the changes, the new control performance was quite remarkable (Figure 4).

Figure 4. Control performance after implementing cascade control and tuning the two loops. Blue = tank level;
Magenta = discharge flow rate.

Learn more about advanced control strategies and controller tuning from the book Process Control for
Practitioners.
Stay Tuned!
Jacques Smuts
Founder and Principal Consultant
OptiControls Inc.
Posted in 8. Case Studies

2 Responses to Level Versus Flow Control

Víctor D. Parra:
August 18, 2012 at 8:03 am
Nice case study. Would you complete it explaining the advantage of using average level
control in these cases?

Jacques:
August 19, 2012 at 8:54 am
Victor, you are right: we could have applied averaging level control because this tank was
originally designed to be a surge tank. However, the process engineer wanted the level to
remain close to its set point because the volume of liquid in this large tank affected the plant's
conversion rate (efficiency) calculations. So we used standard level-control tuning rules for
this, but detuned it enough so that the outflow would not overreact to level disturbances.

Pressure and Flow Control Loop Interaction


December 6, 2013

Weird control strategies are sometimes used in lieu of proper tuning to address process interactions. A
customer contacted me with a question about such a control strategy. She explained that a blower was
being used for maintaining (negative) air pressure in a common clean-air header, and to deliver air flow
to process equipment. The blower was fitted with a variable-frequency drive, which was used to control
the pressure inside the clean-air duct, and a control valve was used to bypass air around the process
equipment to control air flow (Figure 1).

Figure 1. Process & Control Diagram

The weird part of the design was that the control strategy had an interlock that allowed only one of the
two controllers to run in automatic control mode at any time. My customer was told by the company that
installed the system that the interlock was necessary because interaction between the pressure and
flow control loops would cause system instability. She asked me to come to her plant to determine
if there was any possible way to control duct pressure and air flow rate at the same time, considering
that the two variables interact with each other.
To explore the problem on site, we did step changes on the two controller outputs, one at a time, while
recording pressure and flow. Then we reviewed the data to decide on the best path forward. It turned out
that changes in the pressure controller's output strongly affected both pressure and flow, but changes in
the flow controller's output mostly affected the flow, with a lesser, but still significant, effect on pressure.
The dynamic response of the two processes (dead time and time constant) was virtually identical. This
meant that if both control loops were tuned aggressively, there would be a significant likelihood of
cyclical interaction occurring between the two loops. Needless to say, both control loops were tuned
very aggressively. If the process dynamics were significantly different from each other, cyclical
interaction would not be such a concern.
We decided to suppress any possible cyclical interaction by tuning the two loops to have different
response times (one fast and the other slow), which also causes one of the loops to have a
significantly damped response (which is very good for maintaining stability).
Since most of the disturbances were caused by other users on the common clean-air duct, we decided
to tune the pressure controller for a fast response. For this we used the Cohen-Coon tuning rules with a
stability margin of 2. Once the pressure controller was tuned, we put it in automatic control mode and
redid the step tests on the flow loop, to include the effect of the pressure controller running/interacting in
automatic control. We tuned the flow loop for a slow and damped response using the Lambda tuning
rules.
When both control loops were properly tuned, we removed the interlock and put the flow loop in
automatic control. As we expected, the system remained perfectly stable. We stopped and restarted one
of the other clean-air users and the system behaved beautifully with not a hint of cycling or instability.

The operators were delighted because they would no longer have to manually control either flow or
pressure. In addition, the variability in air flow rate and duct pressure was significantly reduced (Figure
2).

Figure 2. Improvement in flow rate variability

Stay tuned!
Jacques Smuts, Founder and Principal Consultant, OptiControls
Author of Process Control for Practitioners

Process Oscillations from Afar


April 6, 2014

Oscillations in key process variables are highly undesirable, but their origins are often difficult to track
down and solve. The success of a hydrocarbon incinerator control-optimization project I executed at a
chemical plant last year was threatened by an inexplicable and seemingly unstoppable oscillation in the
incinerator differential temperature (and other control loops). At first I thought the oscillation was caused
by either poor tuning or cyclical interactions between two tightly coupled incinerator control loops, but
even after putting all the incinerator's control loops in manual the oscillation persisted.
The oscillation was obviously coming from an external process and the most likely source was the
concentration of hydrocarbons in the vent gas being fed to the incinerator. The vent gas came from a
stripper column that stripped most of the hydrocarbons from reactor vent gas (Figure 1). Changes in the
remaining hydrocarbon content would affect the amount of heat released in the incinerator.

Figure 1. Simplified process diagram. (Click to enlarge.)

To determine if the problem originated in the stripper, I trended all the available temperature, pressure,
and flow-rate signals from the stripper. I found that the temperature of wash oil used as a stripping
medium oscillated at the same period as the loops on the incinerator. An oscillation in wash-oil
temperature would have a direct effect on the amount of hydrocarbons stripped from the vent gas, which
could explain why the incinerator loops oscillated.

But why did the wash-oil temperature oscillate? The wash oil was cooled by passing it through a heat
exchanger that vaporized liquid ammonia to cool the wash oil. The temperature of the wash oil exiting
the wash-oil cooler and flowing to the stripper column was controlled by manipulating a control valve on
the ammonia vapor discharge end of the cooler. The wash-oil temperature, the controller output, and the wash-oil cooler's shell pressure were all oscillating at the same period as the incinerator differential
temperature (Figure 2). I asked the operator to put the temperature control loop in manual mode, after
which all the oscillations completely stopped.

Figure 2. Key variables oscillating in unison. Incinerator DT uses right-side Y axis.

By doing a few simple valve-performance tests I discovered that the ammonia vapor control valve was
sticking, causing a stick-slip cycle in the temperature control loop, in turn causing an oscillation in the
hydrocarbon content of the vent gas being fed to the incinerator, finally causing an oscillation in outlet
temperature (and therefore in the differential temperature across the incinerator, the critical variable).
The wash-oil temperature measurement was located far away from the heat exchanger and this long
distance caused the oscillation to have a very long period. This fact was later key to solving the problem.
The temperature control valve's stiction was on the order of 2% to 3% of the valve's travel range. The
operator called in the control valve technician, who then went out to attend to the valve. They stroked
the valve open and closed a few times, and the technician came back reporting that the valve worked
perfectly. I explained to him (as nicely as I could) that the controller was not asking for 25% changes at a
time, but for 0.5% to 1% changes, and the valve needed to respond to these small changes for us to have
good temperature control. He shrugged his shoulders, mumbled something about wanting a Prius to run
like a Ferrari, and walked out of the control room. We were not getting the valve fixed.
However, it was important to stabilize the temperature loop for the project to be a success. I tried
slowing down the tuning on the wash-oil temperature loop and even added some derivative control (to
kick the valve), but the loop kept on oscillating with the controller in auto. I did eventually solve the
problem, but I will write about that next month. For now, the main point of this anecdote is that
oscillations are sometimes caused by a source far away from the process one wants to optimize (Figure
3). The second point is that oscillations are very often not a tuning problem.

Figure 3. Process oscillations originating from afar.

Read next time how we worked around the problem caused by the sticky control valve.
Stay tuned!
Jacques Smuts, Founder and Principal Consultant, OptiControls
Author of Process Control for Practitioners

Posted in 8. Case Studies

2 Responses to Process Oscillations from Afar

Akis:
June 6, 2014 at 4:15 am
Do you prefer to leave the wash-oil TC in Manual until the problem is fixed? Or in Auto, despite
it being the cause of the oscillation?

Jacques:
June 8, 2014 at 10:36 am
Akis: In this case we overcame the problem with the sticky control valve during the project, so
we did not have to leave the controller in manual. However, if we could not overcome the
problem, manual control would have been an attractive alternative.

Ratio Control
August 31, 2011

While I was recently helping a chemical company optimize several of their critical control loops, I noticed
they had a ratio controller in manual control mode. I asked about the loop and they told me it had never
worked in automatic control.
I occasionally come across loops that have never worked in automatic control. Sometimes it's a tuning
problem, sometimes it's an issue with the measurement or control valve, one time the control direction
was wrong (go figure). However, when it is a ratio control loop that's not working, most often the
problem lies with the design of the control strategy.

Ratio Control Explained


Process design and operations often call for keeping a certain ratio between two or more flow rates. One of the
flows in a ratio-control scenario, sometimes called the master flow or wild flow, is set according to an
external objective like production rate. The ratio controller manipulates the other flow to maintain the
desired ratio between the two flows. The flow controlled by the ratio controller is called the controlled
flow. For example, when treating drinking water with chlorine, the water is the wild flow, and the chlorine
is the controlled flow.
There are two fundamentally different designs for ratio control. One of them is the correct design, the
other one does not work in practice.

The Intuitive but Incorrect Design


In this design, the ratio is calculated by dividing one flow by the other. This calculated ratio is then
used as the process variable for a ratio controller. This design creates a highly nonlinear control loop of
which the process gain is inversely proportional to the flow rate in the denominator. Ratio controllers like
this are frequently dead (virtually no control) or unstable. This design should be avoided at all times, and
if you have one of these, correct the design!

The Correct Design


The wild flow should be multiplied by the desired ratio to calculate a set point for the controlled flow. A
standard flow controller then controls the flow according to this set point. If required, you can divide the
controlled flow by the wild flow to display the actual ratio to the operator, but don't use it to control the
ratio.
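As a minimal sketch (using illustrative variable names, not any particular control system's function blocks), the correct design reduces to a multiplication ahead of an ordinary flow controller; the 0.002 chlorine-to-water ratio below is hypothetical.

# Correct ratio-control design: multiply the wild flow by the desired ratio
# to get the setpoint of the controlled-flow loop.

def controlled_flow_setpoint(wild_flow, desired_ratio):
    """Setpoint for the controlled flow = wild flow x desired ratio."""
    return wild_flow * desired_ratio

def displayed_ratio(controlled_flow, wild_flow):
    """Actual ratio, for operator display only -- not for control."""
    return controlled_flow / wild_flow if wild_flow else float("nan")

# Example: chlorinating drinking water at a hypothetical ratio of 0.002.
water_flow = 1250.0                                   # wild flow
chlorine_sp = controlled_flow_setpoint(water_flow, 0.002)
print(chlorine_sp)   # passed as setpoint to a standard chlorine flow controller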
The figure below shows the two ratio-control designs. Ratio control should not be based on a division of
flow rates as shown on the left, but rather on calculating a flow set point, as shown on the right.

Two designs for ratio control. Left: The incorrect design. Right: The correct design.

For example, the correct design for a ratio controller of fuel and air is shown below. Fuel is the wild flow
and air is the controlled flow.

Controlling the ratio of combustion air to fuel flow.

I have seen several ratio controllers running in manual control mode because of tuning problems. After
further investigation it often turns out that the incorrect design is in use. Now that you know the
difference, you know what to look for and how to correct it.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 7. Control Strategies, 8. Case Studies

4 Responses to Ratio Control

Teo:
September 19, 2011 at 9:40 pm
Hi, your article is great! I am able to understand it even though I don't have a chemical engineering
background (I am a mechatronics engineering student).

lndas:
October 18, 2011 at 11:13 am
Thank you for the article. I could now get a fair idea about ratio control.

Matt:
November 10, 2013 at 1:45 am

Hello, thanks for this website.


Question: what if you have a second control valve on the wild flow line? Would it have a
dedicated controller, or might there be a function out of the existing controller?

Jacques:
November 10, 2013 at 2:42 pm
Matt, it depends on the control objective. But you could potentially have two flow control loops
of which you change both setpoints based on something else, such as furnace temperature or
boiler pressure. The ratio between the two setpoints can be constant or it could be adjusted
by a third controller such as O2 in the flue gas.

Tank Level Tuning Complications


November 4, 2013

Level control loops are strange creatures. This strangeness can make them difficult to tune. On
average, level control loops are tuned the worst of all process types. Although I have seen poorly tuned
loops of all types, poorly tuned level controllers typically have tuning settings that are the furthest from
optimal. Most level processes are very robust in nature, allowing them to function surprisingly well with
suboptimal tuning.
But it does not have to be this way. If controller tuning is based on the dynamic response of a process,
most level control loops are actually easy to tune and provide very robust control. However, as you
probably know, most control loops are tuned intuitively using trial and error. More often than not, this
approach results in poor control loop performance.
Case Study
A few weeks ago, I helped an engineer at a power plant with the tuning of a demineralized (demin)
water storage tank. It was a large tank about 40 feet (12 m) high and 20 feet (6 m) in diameter. Water
was pumped from the demin water production plant into the tank, and this flow rate was manipulated
with a control valve (Figure 1). Under normal operating conditions the unit consumed demin water at
an almost constant rate (most of which was discharged through the continuous boiler blowdown).

Figure 1. Demineralized water storage tank level control.

To do the tuning correctly, the engineer executed a few step tests (Figure 2) and we analyzed the data.
We calculated the process integration rate (or process gain) to be 0.0045 / minute. This means if the
level is at steady state and the controller output is changed manually by X percent, it will take 1/0.0045
minutes (3.7 hours) for the level to change by the same percentage. The dead time was measured to be
roughly 2.5 minutes.

Figure 2. The two step tests used for tuning.

Once we had this information on the dynamic properties of the process, we used the modified Ziegler-Nichols tuning rules for Integrating Processes and calculated new tuning settings for this control loop.
We used a stability margin of 2.5 and obtained the following tuning settings:
Controller Gain (Kc) = 32
Integral Time (Ti) = 20 minutes.
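The calculation can be reproduced in a few lines of Python. The slope values come from the author's reply in the comments below; the exact form of the modified Ziegler-Nichols rule is not stated in the article, so the stability-margin scaling used here (gain divided by the margin, integral time multiplied by it) is an assumption, but it reproduces the quoted Kc of 32 and Ti of about 20 minutes.

# Sketch of the level-loop tuning calculation using the article's numbers.

def integration_rate(slope_before, slope_after, d_co):
    """ri = (S2 - S1) / dCO, with slopes in %/min and dCO in %."""
    return (slope_after - slope_before) / d_co

def zn_integrating_pi(ri, dead_time, stability_margin=1.0):
    """Ziegler-Nichols PI rule for an integrating process, de-tuned by a
    stability margin (assumed form; see the note above)."""
    kc = 0.9 / (ri * dead_time) / stability_margin
    ti = 3.33 * dead_time * stability_margin
    return kc, ti

ri = integration_rate(slope_before=-0.031, slope_after=0.085, d_co=25.0)
kc, ti = zn_integrating_pi(ri, dead_time=2.5, stability_margin=2.5)
print(f"ri = {ri:.4f} /min, Kc = {kc:.0f}, Ti = {ti:.0f} min")
# ri = 0.0046 /min, Kc = 31, Ti = 21 min -- close to the article's 32 and 20.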
The high controller gain was a concern. Although the level was quite smooth during our step tests, a
historical trend of level revealed some jittering was present at times. And since a 1% jitter in level would
cause the controller output to jitter by 32% (Kc x delta PV), we decided to use a lower controller
gain since tight control was not a requirement. We felt that Kc = 10 would be a good compromise
between control performance and jitter tolerance.
Tuning Complications
Many level loops have small integration rates (or process gains). Integration rate (ri) is inversely
proportional to the vessel's residence time. Typically, the larger the tank, the smaller the integration rate.
The process with the smallest integration rate that I personally worked with was a city water reservoir,
which had a residence time of 48 hours (ri = 0.000347 / minute). For good control, a very low integration
rate theoretically requires a very high controller gain, sometimes in excess of 100. Practically we cannot
use controller gains of this magnitude because of the severe control action that would result from noise
and setpoint changes. (Note that one can also overcome severe control action by using a noise filter
and either a control algorithm with proportional and derivative acting on the process variable rather than on the error, or a setpoint filter.)
This mandatory reduction of the controller gain brings me to the reason why most level loops have
grossly suboptimal tuning settings. For integrating control loops (such as tank level), when you reduce
the controller gain you have to increase the integral time, otherwise the loop can become very
oscillatory.
Unenlightened tuners do not know of this requirement and end up using disproportionately short integral
times on level loops, resulting in very oscillatory behavior. When they try to stabilize the loop by further
reducing the controller gain, the situation deteriorates even more.
Example
For example, let's look at how Billy, our unenlightened (and fictitious) tuner, might have tuned the tank
level controller. Assuming he did step tests, he then used the original Ziegler-Nichols tuning rules (I did
mention he is unenlightened) for calculating the controller settings. He obtained the following controller
settings: Controller Gain (Kc) = 80 and Integral Time (Ti) = 8.3 minutes. He realized that the controller
gain of 80 was too high, and reduced it to 10. But he left the integral time at 8.3 minutes, as calculated.
Then he tested the new tuning settings and noticed overshoot and oscillations in level. Too much gain,
right? So he set the Kc value to 5 and retested the performance. The loop still oscillated with the
adjusted tuning settings, but he realized that this tuning effort was taking too much of his time, so he left
the tuning settings as they were and moved on to other work. Billy's tuning results are shown in Figure
3.

Figure 3. The result of using decreased controller gains on a level loop, while leaving the integral time at the
originally calculated value.

How it's Done


Now back to our own tuning efforts on the demin water storage tank. When the engineer and I reduced
the controller gain from 32 to 10, we simultaneously increased the integral time from 20 to 64 minutes,
which we calculated using the equation below.
Equation for calculating a new integral time when reducing the controller gain in a level loop:
Ti(new) = Ti(old) x Kc(old) / Kc(new)
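In code the adjustment is a one-liner; the numbers below are the ones from this case study.

# Keep an integrating (level) loop stable when reducing the controller gain:
# scale the integral time up by the same factor.

def rescale_integral_time(ti_old, kc_old, kc_new):
    """Ti(new) = Ti(old) x Kc(old) / Kc(new)."""
    return ti_old * kc_old / kc_new

print(rescale_integral_time(ti_old=20.0, kc_old=32.0, kc_new=10.0))  # 64.0 minutes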
Figure 4 compares the level loop's response to a 5% change in outflow using the initial and refined
controller settings. The control loop is significantly more stable compared to the alternatives shown in
Figure 3.

Figure 4. Stable level control loop response obtained from increasing integral time while decreasing controller
gain.

As I said at the beginning, level controller tuning does not have to be difficult. Do step-tests to
understand the process dynamics, use proven tuning rules to calculate controller settings, and
remember to adjust the integral time inversely to any subsequent change you make in controller gain.
Stay tuned!
Jacques Smuts
Founder and Principal Consultant, OptiControls
Author of Process Control for Practitioners

Posted in 4. Controller Tuning, 8. Case Studies

2 Responses to Tank Level Tuning Complications

Nhan:
November 5, 2013 at 1:01 am
Dear Jacques,
Referring to the response curve, I calculate ri = 0.045/min. How did you get 0.0045 as
stated? Please advise, thanks.

Jacques:
November 5, 2013 at 6:50 am
Nhan, let's consider the first step change, dCO = 25% in size.
Before the step has any effect, the level decreased by 1% over 32 minutes. S1 = -0.031%/min
After the step change, the level increased by 1.1% over 13 minutes. S2 = 0.085%/min
ri = (S2 - S1) / dCO = (0.085 + 0.031) / 25
= 0.0046 (which I rounded down to 0.0045 for convenience).

9. Tips and Work-Process

Best Practices for Control Loop Optimization
Diagnosing and Solving Control Problems
Process Control for Practitioners
Testing Control Loop Performance
Tools of the Tuner
Tuning Tips - How to Improve Your Results
When to Use which Tuning Rule
Why Tuning Rules Don't Always Work

Best Practices for Control Loop Optimization


April 15, 2011

Control Loop Optimization means improving the performance of control loops to get the best possible
performance from them. The improvement task is often attempted in an ad-hoc or trial-and-error
fashion, but this is mostly ineffective and seldom results in truly optimal loop
performance. Effective control loop optimization is done in a systematic way, by following the best
practices.
1.

Know Your Process. This item seems almost too obvious to be on the list, but it is often
tempting to address a control problem through tuning without considering the broader process.
Process knowledge provides guidance on the control objective, tuning rules to use, diagnostic
tests to do, and the process conditions under which to do the tuning. Things to know about the
process include: the process type (integrating or self-regulating), ratio of process lag to dead time,
if the process gain or dynamic characteristics might change under varying operating conditions,
type of final control element being used and its flow characteristics, disturbances to the process
and if they are measurable, possible negative side-effects from process-variable overshoot or a
rapidly changing controller output.

2.

Determine the Control Objective. Consider the following: Should the loop perform fast or
slow? Is overshoot tolerable? Should the controller output move as little as possible? Does the
controller's set point often change? Does the loop have to compensate for process
disturbances? The control objective will dictate the type of tuning method to use. The control
objectives could be fast setpoint tracking or fast disturbance rejection (of which each could have
sub-objectives such as minimum absolute error or minimum integral of error), zero process-variable overshoot, a specific process response to setpoint changes, minimum controller output
movement, and no overshoot in the manipulated variable. Surge tank level loops, for example,
should be tuned to minimize controller output movement while keeping the level between
predefined limits.

3.

Review the Control Strategy. Review the design of the control strategy with the aid of process
and instrumentation diagrams. Does the design support the control objective established above?
Are cascade, feedforward, ratio, and other control strategies required and applied correctly? Are
there interactive control loops? If so, how is that being handled? The control strategy should
support the control objective, given the broader process with its disturbances, nonlinearities, and
other nuances. For example, a simple feedback control loop will do an awful job if ratio control is
actually needed. Cascade control should be used only if the inner loop is much faster than the
outer loop. Feedforward control should be used to compensate for process disturbances, except
when the disturbances directly affect the flow rate through the final control element requiring
cascade control. When done correctly, control strategies can significantly contribute to control loop
stability and responsiveness. Unfortunately, the opposite is also true.

4.

Do a Plant Walk-Down. Inspect the size and layout of the process equipment and the
condition and location of the instrumentation and final control element (i.e., valve, damper, or
variable speed pump). Is everything in good condition and located in the right place? Considering
the size of the equipment, you should be able to get some sense of how fast or slow the process
will respond to controller output changes. This knowledge will help when doing step-testing.
5.

Examine the Measurement Device. Ensure the process measurement is good for the
application. Is the transmitter ranged appropriately? Is this the best sensing technology for the
process conditions? Is the device installed correctly?

6.

Evaluate the Use of Filtering. Check if transmitter dampening or process variable filtering is
being used. Transmitters should use an anti-aliasing filter, but no more. Filtering, if required,
should be done in the control system to simplify its adjustment and facilitate replacing the
transmitter without having to worry about filtering. Inspect a time trend of the process variable and
determine if filtering is required, and how much. If a process variable filter is used, its time
constant should be reviewed to ensure that it is set appropriately and significantly shorter than the
dominant process time constant.

7.

Test the Final Control Element. An improperly working final control element hurts control loop
performance and can negate proper controller tuning methods. Typical problems include
deadband, stiction, a nonlinear flow curve, and positioner problems. These problems may appear
very similar to tuning problems, and an unknowing tuner may spend many hours of futile tuning if
the problem lies with the control valve. A few simple process tests should be done to detect and
diagnose final control element problems before any tuning is attempted. These problems will have
to be resolved for optimal control performance. Also, final control element problems can
significantly skew results from process tests and cause the calculation of completely incorrect
tuning parameters.

8.

Review the Controller Configuration. Modern, digital controllers offer a range of options to
optimize their performance for various situations. Setpoints can be ramped or filtered internally to
obtain a smooth control response even when the operator makes an abrupt change. Setpoint
changes can also be hidden from the proportional and derivative control modes. External reset
feedback prevents integral windup under adverse conditions, and rate-of-change limits can protect
sensitive equipment downstream. Check the controller algorithm and configurable controller
options before tuning the controller.

9.

Use an Appropriate Tuning Method. Contrary to popular belief, controller tuning is much
more science than art. Loop tuning can be done quickly and accurately based on the control
objective, process characteristics, and appropriate tuning rules. Process characteristics can be
determined by making a step-change in controller output and taking measurements from the
resulting process response. Although trial-and-error tuning is popular, it should be used only as a
last resort, for example with processes that are so volatile that it is impossible to get usable step-test data. As an alternative to manually calculating tuning parameters based on step-test results,
loop tuning software offers many helpful features such as identification of process characteristics,
producing tuning settings for different tuning objectives, providing simulations of anticipated loop
response, analyzing control loop robustness, and more. However, tuning software is only a tool,
and someone incapable of manually tuning controllers using step-test data and tuning rules will
likely also find it difficult using tuning software.

10.

Tune from Multiple Step Tests. Simulations may be 100% repeatable, but real processes are
not. Process disturbances, interacting control loops, nonlinearities, and operating conditions can
all affect measured process characteristics. Tuning from only one step test can result in poor
tuning settings if the process response at that instance was not normal for whatever reason. It is
essential to do multiple step tests to obtain average measurements of process characteristics,
and an appreciation of how much they change under normal conditions.
11.

Cater for Nonlinearities and Changing Process Characteristics. The installed flow
characteristic of a final control element is often not linear. In addition, the characteristics of many
processes change under different process conditions (production rates, equipment in service,
catalyst concentration, pH, etc.). Control valves and dampers might have to be linearized using a
characterizer, and changing process characteristics might require the scheduling of controller
parameters (called gain scheduling or adaptive tuning).

12.

Validate and Test the New Values. Compare the newly calculated controller settings with the
ones in the controller, and ensure that any large differences in numbers are expected and
justifiable. Implement and test the new controller settings. Ensure the controller is tuned to work in
harmony with the dynamics of the process it is controlling, and meeting the overall control
objective of the loop. First let the loop settle out and evaluate its performance under steady
conditions. Does it oscillate? Does the controller output move too much? If the loop should
respond to setpoint changes, make a setpoint change and look for overshoot, oscillations, or too
much controller output movement. If the loop should respond to disturbances, briefly put the
controller in manual, change the output by a few percent, and immediately put the controller back
in auto. This simulates a disturbance. Again, check for unnecessary overshoot, oscillations,
or excessive controller output movement. Monitor the controller's performance periodically for a few
days after tuning to verify improved performance under different process conditions.

13.

Keep Records. Make note of the previous controller settings, the new settings, and the date
and time of change. You should keep a computerized or paper-based log of all changes to a
control loop. Leave the previous controller settings with the operator in case he/she wants to
revert to them and cannot find you to do it. If the new settings don't work, you have probably
missed something in one or more of the practices above.

If the desired control objectives cannot be met by following these practices, including repairing faulty
equipment and making changes in control strategy, model-predictive control can be investigated as a
possible solution. The final and perhaps most expensive alternative would be to modify the process
equipment, but this is rarely needed. In the majority of cases the solution is available in the practices
listed above.

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Diagnosing and Solving Control Problems


April 30, 2011

While many control loops are easy to tune and present almost no control problems, a few control loops
can be very problematic and never seem to control right. Control practitioners can spend many hours or
even days trying to improve the performance of these challenging control loops, but the results often
remain unsatisfactory. This article presents strategies for diagnosing control problems and improving the
performance of challenging control loops.

Symptoms of Poor Loop Performance


Although poor control performance comes in many forms, it can be grouped into three categories:

Oscillations and instability: the loop tends to cycle around its set point.

Large deviations from set point: the loop struggles to remain at set point and the process
variable is frequently pushed away from set point.

Sluggish performance: the loop takes too long to get to its set point after a disturbance or set
point change.

A Control Loop with Several Problems

I've seen many cases where attempts to address poor performance were limited to controller tuning,
because the person attending to the problem did not know of all the other causes of poor performance.
To properly address and improve control loop performance, it is necessary to establish what the real
cause of the poor performance is, and then to take the appropriate corrective action.

Fault Diagnosis
To guide your diagnosis efforts, a fault diagnosis tree is provided below. The first level of diagnosis is the
three symptoms of poor control listed above. Depending on which of these symptoms your control loop
displays, you can find the possible causes below each symptom. These are described in more depth
throughout this article.

Diagnosing Control Problems

1. Oscillations
Oscillations can originate from within the control loop or be caused by external factors. To find out which
is the case, place the controller in manual and see if the oscillation stops. If it does, the oscillation is
generated from within the loop.

Oscillations Stopping when Controller is Placed in Manual

Internal Oscillations
Oscillations generated internally can be caused by faulty equipment or by tuning. Check first for faulty
equipment, because you can spend a long time trying to tune a loop if the real cause of poor
performance is the control valve.

The most common control valve problems causing oscillations are:

Control valve stiction. Do a stiction test with the controller in manual to determine if this is the
case.

Positioner overshoot. Do step tests of various sizes and be on the lookout for signs of
overshoot in the process variable.

Neither of these control valve problems can be fixed by tuning the process controller. The valve
needs maintenance or the positioner needs tuning.

Tuning
A loop that is tuned too aggressively (overly fast response) can quickly develop oscillations. Do step
tests on the process and determine the dominant process characteristics (gain, dead time, lag). Do
more than one step test (try to do four at least) in different directions. Then use tuning rules to calculate
new controller settings. If you are using rules designed to produce quarter-amplitude damping, use only
half of the recommended controller gain. If you have tuning software, then use it to analyze the step-test
data and calculate new controller settings.

Nonlinear Valve Characteristic


Many control valves control flow differently, depending on how far they are open. The valve is said to
have a nonlinear installed characteristic. If tuning is done at one end, the settings might not work at
the other end, and could cause oscillations or sluggish behavior. If this is the case, a function generator
(X-Y curve) can be placed in the path of the controller output to cancel out the control valve nonlinearity.
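A minimal sketch of such a function generator is shown below. The X-Y points are illustrative only; in practice they come from the valve's installed flow characteristic.

# X-Y function generator (characterizer) placed on the controller output to
# linearize a control valve, implemented as piecewise-linear interpolation.

import bisect

# X = controller output (%), Y = signal sent to the (hypothetical) quick-opening
# valve so that the installed flow response becomes roughly linear.
X = [0.0, 10.0, 25.0, 50.0, 75.0, 100.0]
Y = [0.0, 30.0, 50.0, 70.0, 87.0, 100.0]

def characterize(co):
    """Interpolate the X-Y curve at controller output co (%)."""
    co = min(max(co, X[0]), X[-1])          # clamp to the curve's range
    i = max(1, bisect.bisect_left(X, co))   # index of the segment's right end
    x0, x1, y0, y1 = X[i - 1], X[i], Y[i - 1], Y[i]
    return y0 + (y1 - y0) * (co - x0) / (x1 - x0)

print(characterize(40.0))   # value sent to the valve instead of 40%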

Nonlinear Process
Some processes react differently based on operating point, production rate, or the product being made.
If these differences are large the loop can begin oscillating or become sluggish. Then different tuning
settings are required for the various operating conditions. This is called gain scheduling.

External Oscillations
Externally sourced oscillations can be caused by interactions between loops with the same dynamics or
simply by another loop in the process oscillating and causing several other loops to oscillate with it.

Coupled Interaction
Interactions between loops with the same dynamics can cause the two loops to fight each other. A
simple example of this is if two valves control the flow and pressure in the same pipe. Because the
dynamics of liquid pressure control loops and flow control loops are similar, the two controllers might be
tuned very similarly, causing hunting between the two loops. To solve this, the most important loop
needs to be tuned for fast response, and the loop of secondary importance needs to be tuned
significantly slower (three times or longer settling time).

Dynamically Coupled Control Loops

Process Interaction
One loop in the process could be oscillating, causing several other loops in the same process to
oscillate with it. Use a process and instrumentation diagram (P&ID) to locate possible offenders. Then

use historical process trends of these other loops to find the oscillating loop. Several software vendors
like ExperTune, PAS, and Matrikon/Honeywell have products to help with locating the offending loop in a
plant-wide oscillation scenario.

2. Sluggishness
The next category of poor control loop performance is sluggishness. Sluggish control loop response can
be caused by equipment problems or by poor tuning.

Control Valve Dead Band


Dead band (also called hysteresis) can cause a loop to exhibit sluggish behavior. Every time the
process variable undergoes a disturbance in a different direction from the previous disturbance, the
controller output has to traverse the dead band before the valve begins moving. Dead band can be
detected very reliably through simple process tests. It is a mechanical problem and cannot be
addressed with tuning.
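One common way to quantify dead band from step tests (not necessarily the exact procedure the author uses) is to estimate the process gain from steps in one direction, then reverse direction and see how much of the reversal was absorbed before the process responded. A hedged sketch, with hypothetical readings:

# Estimate control valve dead band from manual-mode step tests.

def estimate_deadband(d_co_same, d_pv_same, d_co_reverse, d_pv_reverse):
    """All inputs in percent; returns estimated dead band in % of output."""
    kp = d_pv_same / d_co_same                 # gain from a same-direction step
    expected_pv = kp * d_co_reverse            # PV change if there were no dead band
    lost_pv = expected_pv - abs(d_pv_reverse)  # part of the reversal absorbed
    return max(0.0, lost_pv / kp)

# Hypothetical readings: 2% steps; the reversal produced only a 2.25% PV change.
print(estimate_deadband(d_co_same=2.0, d_pv_same=3.0,
                        d_co_reverse=2.0, d_pv_reverse=2.25))   # ~0.5% dead band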

Other Equipment Problems


A control loop may also appear to have sluggish response if the controller output becomes saturated at
its upper or lower limit. Similarly, if the process variable runs into limits, the control action effectively
ends. Also, if the controller output has a rate-of-change limit, it may cause sluggish response, regardless
of how well the controller is tuned.

Tuning
Comments made earlier about tuning apply here too. Furthermore, realize that loops have internal
speed limits depending mostly on the length of the dead time in the process. It will take a well-tuned
loop three to four times the dead time to get back to its set point after a disturbance or set point change.
If disturbances cause large deviations from set point, and tuning is unable to correct it fast enough, see
the next section.

Upper Limit to Loop Speed: any faster tuning will cause larger oscillations

3. Disturbances
The third category of poor loop performance is that of disturbances pushing the process variable away
from its set point. Disturbances are frequently the nemesis of good loop performance. As described
above, feedback control is limited in how fast it can eliminate the effects of a disturbance and bring the
process back to set point. Two classes of disturbances exist, depending on how they enter the loop.

Control-Flow Disturbances
Control-flow disturbances affect the loop by changing the flow rate through the final control element. For
example, if steam is used to heat the process flowing through a heat exchanger, and the pressure of the
steam decreases, the steam flow rate will be affected and this will disturb the outlet temperature.

Cascade control can be used very effectively to virtually eliminate the effects of a control-flow
disturbance. The outer loop controls the main process variable (temperature in this case) by changing
the set point for flow to an inner loop. The inner loop measures and controls the actual flow rate and
immediately corrects any deviations from set point.

Cascade Control for Handling Control-Flow Disturbances

Process Disturbances
In contrast to control-flow disturbances, all other disturbances to the process that affect the process
variable are simply called process disturbances. If a process disturbance is measurable, and its effect
on the process variable is known, feedforward control can be used to vastly reduce its impact.

Feedforward Control for Handling Process Disturbances

Stay tuned!
Jacques Smuts Author of the book Process Control for Practitioners

Posted in 6. Loop Performance, Problems, and Diagnostics, 9. Tips and Work-Process

6 Responses to Diagnosing and Solving Control Problems

Jam:
January 28, 2013 at 3:52 pm
Hello,
I have a flow control valve that operates OK in automatic. This valve only really operates
between 25-40%. However, when the valve tries to maintain 25%, it keeps oscillating around
the setpoint. What could be causing this? Here are my theories:
1. The valve may have a non-linear characteristic and a characterizer may need to be placed on the output of
the PID (I have yet to confirm this)
2. Could the valve be oversized?
Are these correct, and are there other factors that could be contributing to this?
Thanks

Jacques:
January 29, 2013 at 6:32 pm
Jam,
1. It could be that the valve has a quick-opening (nonlinear) type of flow characteristic. Is it a
butterfly valve? If so, see this article:
It could also be that the valve is sticky around 25% - see this article:
2. It seems that your valve is a bit oversized; control valves should ideally operate around
75% to 85% open at design flow rates.
- Jacques

William Love:
February 19, 2014 at 8:35 pm
In an article or post that I can't find right now, I recall the author said you can prevent
excessive valve wear by not letting the valve move unless the PID output changed by more
than some amount. This idea was greeted with derision in a group discussion, so I'm trying to
figure out whether the idea has any merit.

Jacques:
February 19, 2014 at 9:16 pm
As with many things in process control it depends. If you have a noisy measurement signal,
and a high-gain controller, your controller output will likely move around much more than you
like and wear out the valve prematurely. You have a few options, depending on your
controller's features.
1. You can filter the process variable (my preferred option), but realize that this adds
additional dynamics to the loop that require slower tuning, which slows down the loop
response. If your process is already a slow-responding process, the additional dynamics may
go virtually unnoticed, making it a very good solution. However, if you have very low-frequency noise, you need a very long filter time-constant, which can make this an infeasible
solution. (Your filter time constant should be substantially shorter than your process dead time
and lag to achieve fast loop response).
2. You can also set a deadband around the process variable or controller output, which brings
me to your question. If you apply a deadband, the controller will move the control valve only when
the limit of the deadband has been reached. The wider the dead band, the less often your
controller will move the valve (and make smaller movements). If you set dead band larger
than the noise, the controller does not respond until there has been a significant change in the
process. This effectively adds pseudo-deadtime to the loop, making the control loop behave
very sluggishly. So, for fast loop performance, just as the lag in a measurement filter should
be set much shorter than your process dynamics, you should ensure that the additional
deadtime added by a deadband is also much shorter than your process dynamics.
3. You can also look at using a different measurement technology, if the problem is
measurement noise.
Also, make sure you are not using derivative control on a noisy measurement signal, and if
you have to use it, consider lengthening the controller scan time to reduce the gradients the
derivative term sees.

William Love:
February 20, 2014 at 1:13 pm
Just to clarify, the method I'm describing involves ignoring a change in the PID output
(CV) unless it is more than some amount. If the CV = 45.0%, I would hold that number in a
register and keep sending that to the valve. I'd not change that signal going to the valve until,
say, CV reaches 45.5% (my deadband is 0.5%). So if I saw the PID output reach 45.6%, I'd update the
hold register and start sending that to the valve. Then, until CV reaches 46.1%, I would keep sending
the hold register value 45.6% to the valve. Is this what you thought I was saying?

One person opined this is equivalent to putting a deadband on the error between PV and SP
(which in Rockwell is implemented with a parameter called CV Zero Crossing Deadband.)
But I'm not so sure.
Does my proposal to keep the valve at the hold value until the CV changes by more than a
deadband have a history in the field? I think I got the idea from a post by Greg McMillan and
thought it sounded good. To your knowledge has this been done?

Jacques:
February 20, 2014 at 6:33 pm
William, thanks for clarifying what you mean.
First, the 0.5% deadband seems far too small for reducing excessive valve wear.
Second, the method you propose artificially introduces the equivalent of stiction, which is very
bad for stability.
I don't know if it has been done, but I would advise against it.

Process Control for Practitioners


September 9, 2011

OptiControls is proud to announce the release of its flagship book, Process Control for Practitioners:
How to Tune PID Controllers and Optimize Control Loops. Written by Dr. Jacques F. Smuts, the author of
this blog and principal consultant of OptiControls, this new book is the ultimate practical guide to control
loop optimization.
Control loop optimization is not rocket science, but it is not trivial either. To be effective in optimizing the
performance of industrial process control systems, an engineer or technician has to know the process and
its limitations, understand process dynamics, controllers and tuning, use the right techniques and tools,
and follow the optimization process systematically. He also has to know how to troubleshoot control
problems to find and fix their causes.
This book conveys the knowledge and techniques required to effectively troubleshoot and improve the
performance of automatic control systems. It clearly and concisely covers all the topics and know-how
required for being an outstanding controls practitioner. Explanations go into enough depth to make the
material understandable, but discussions are kept short so the book can serve as a reference guide.
The book shows you how to tune PID controllers more effectively in less time, and ensure long-term
loop stability. It is your complete reference for improving control-loop performance, solving process
control problems, and designing control strategies. You will refer to this guide again and again.
If you like the content of this blog, you will love the book Process Control for Practitioners!
The book's 315 pages and 176 figures will help you discover how easy it is to:

Understand PID controllers, their control actions, settings, and options.

Identify process dynamics and their effects on loop performance and controller tuning.

Get the best possible performance from a control loop.

Tune controllers differently to achieve specific control objectives.

Identify the root cause (or causes) of poor control performance.

Use techniques such as linearization and gain scheduling to ensure consistent loop response
and long-term stability.

Design and optimize control strategies such as cascade, feedforward, and ratio control to
improve control performance and reduce variability.

Monitor loop performance and pinpoint control problems.

This concise manual on control-loop optimization will show you the fastest, surest, and most practical
ways to tune controllers and solve control problems.

Process Control for Practitioners is available at Amazon.com.

Stay tuned!
Jacques Smuts

Process Control for Practitioners - Front Cover

Process Control for Practitioners - Back Cover


Posted in 9. Tips and Work-Process

Testing Control Loop Performance


February 28, 2013

After tuning a control loop, it is customary to change the setpoint to test the loop's performance. This will
certainly show you how the loop responds to setpoint changes, but will it tell you anything about how the
loop might respond to process disturbances? In some cases yes, but in other cases no. Let me
explain.

Flow and liquid-pressure loops respond similarly to setpoint changes and disturbances. If you see
overshoot after a setpoint change, it means that you will likely get overshoot after a disturbance. The
same can be said for no overshoot (Figure 1), sluggish response, or oscillations. This similarity applies
to loops with processes that have dead times and time constants of almost the same length, such as
flow and liquid pressure.

Figure 1. A flow loop approaches its setpoint after a disturbance in the same way it does after a setpoint
change.

But many loops respond differently to setpoint changes versus process disturbances. The difference
becomes more evident when controlling processes with long time constants relative to their dead times,
such as temperature and gas pressure control loops. The difference is most obvious in level control
loops (Figure 2).

Figure 2. A level loop overshooting its setpoint after a setpoint change, but not after recovering from a
disturbance.

It is often difficult or impossible to create a process disturbance to test tuning settings. So how can we
assess the loop's disturbance response? Here is a simple sequence you can execute to simulate a
process disturbance and see how a loop will respond to it.
1.

Put the controller in manual mode.

2.

Change the controller output by a few percent (5% is normally a good starting point).

3.

Immediately put the controller back in automatic control mode.

After Step 3, the process variable and controller output will continue in a straight line for a few moments
until the change has propagated through the process dead time. The process variable will then begin to
deviate from setpoint, and the controller will react to it exactly as if it was a process disturbance (Figure
3).

Figure 3. Simulating a process disturbance by quickly changing the controller output in manual and
immediately putting the controller back in auto.
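For readers who want to experiment, here is a minimal Python simulation of the three-step sequence above. The first-order-plus-dead-time process model and the PI settings are illustrative only, and the controller is a bare-bones textbook PI, not any vendor's algorithm.

# Simulate the "manual bump" disturbance test on a toy loop.
from collections import deque

KP, TAU, DEAD_TIME, DT = 2.0, 30.0, 10.0, 1.0   # process gain, lag (s), dead time (s), step (s)
KC, TI = 0.6, 30.0                               # illustrative PI settings

pv, sp, co, integral = 50.0, 50.0, 25.0, 25.0    # start at steady state
pipeline = deque([co] * int(DEAD_TIME / DT))     # dead-time buffer for the output

for t in range(300):
    if t == 50:
        co += 5.0        # one-scan "manual" bump of the output
        integral += 5.0  # bumpless transfer back to "auto"
    else:
        error = sp - pv
        integral += KC * error * DT / TI
        co = KC * error + integral               # ideal (non-interactive) PI
    co = min(max(co, 0.0), 100.0)

    delayed_co = pipeline.popleft()              # output seen by the process
    pipeline.append(co)
    pv += (KP * delayed_co - pv) * DT / TAU      # first-order lag toward gain x CO

    if t % 25 == 0:
        print(f"t={t:3d}s  CO={co:6.2f}%  PV={pv:6.2f}%")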

There are many loops that rarely undergo setpoint changes but frequently have to compensate for the
effect of disturbances. This method of testing is much more representative of such a loop's real
operating conditions.
By the way, you can buy a license for the loop simulator in Figure 3 from OptiControls.

Stay tuned!
Jacques Smuts
Principal Consultant at OptiControls, and author of Process Control for Practitioners

Tools of the Tuner


July 8, 2013

A control loop tuner should be proficient in using a variety of tools to be effective in any tuning situation.
Customers often ask me how I tune loops, and my answer is that I use several tools depending on the
situation. Here is an overview of the tools I frequently use when analyzing and optimizing control loops.

Process Historian
Indispensable for much more than tuning, the process historian is one of my most-used tools. I use it to
check valve linearity, analyze process interactions, compare loop performance before and after tuning,
design feedforward controllers and characterizers, and to analyze the step-response of a process for
tuning the controller.
Some plants where I work have no OPC connection for retrieving real-time process data, or don't allow
installing data collection software to collect real-time process data. Then their process historian is my
only way to access plant data. I often analyze the step response for tuning purposes using the
historian's user interface, but if it is easy enough to export data to Excel, I will go that route and analyze
the data using tuning software on my laptop.
For fast-responding loops, I ask the system administrator to speed up the sampling rate, because the
default sampling interval on most historians is 30 to 60 seconds, which is too slow for analyzing fast
loops. A one-second sampling interval is required for flow and liquid pressure loops, five seconds for
most other loops, while 30 to 60 seconds serve only the slowest loops.
Some control loops I work on have processes that take hours to respond. In more than half of these
cases I can go back in history and find sufficiently large operator-induced step changes that I can use
for analysis and tuning. That saves me from having to do step tests and wait hours for the process to
respond. I always try to get at least three of these step changes, but I prefer to have more if the process
models change from one step-test to the next. This saves me a lot of time on slow-responding
processes because the complete response is already in the historian. This also minimizes the need for
disturbing the process with additional step tests.

Process Historian

Excel
When I analyze step-test data directly on the historian, I use a pre-built Excel spreadsheet to simplify
the data analysis and controller tuning calculations. I take down a few readings from the historian and
enter them into the spreadsheet, and it calculates the process characteristics, and recommends tuning
settings. It supports self-regulating and integrating process types, and has Cohen-Coon, ZieglerNichols, Lambda/IMC, Dead-Time, Surge-Tank, and Level-Averaging tuning rules. It also allows me
to speed up or slow down the loop response by calculating different tuning settings, based on my tuning
objective. Every thing I need for my tuning calculations!

Excel Tuning Calculator

Loop Simulation Software


Loop Explorer is a simulation and tuning software tool that I developed to give me insights into how a
loop would respond to setpoint changes and disturbances. This is essential for obtaining optimal tuning
settings for the loop's control objective. The simulator is especially handy when I use the spreadsheet to
analyze the step response, since the spreadsheet does not have its own simulator. I also use the Loop
Explorer software in my training classes to demonstrate many concepts related to process
characteristics, PID controllers, and controller tuning.

Loop Explorer Software

Tuning Software
Of course I also use commercial tuning software. I recommend that every plant that does tuning in-house invest in good tuning software and have it accessible in every control room. If I work at a plant
that already has high-end tuning software installed, I use their software. Otherwise I use the tuning
software I have on my laptop. High-end tuning software applications analyze process response and
automatically identify process characteristics. They provide access to different tuning methods, and
render simulations of loop response with the new tuning settings. They also have databases of controller
types, so one doesn't have to deal with manually converting tuning constants to suit a specific controller.
One very important point: Tuning software is just a tool and is no substitute for understanding process
dynamics, PID controllers, and the tuning process. If you can't tune control loops by manually
determining process characteristics from step-response data, and applying an appropriate tuning rule to
calculate tuning constants, you will likely not be successful with software either.

Operator Time Trends


When I do step testing, I mostly sit right next to the operator. Then we use his/her real-time trends for
the control loop to monitor the response. When sitting next to the operator I can point to certain
anomalies, and explain why I do certain tests. It is also a great time to get to know the operator, learn
about the process he controls, and become familiar with the culture of the company.

P&ID and Operator Graphics

Before analyzing and tuning a control loop, I ask the operator to explain the process to me. He/she will
often use their operator graphics to show me the streams into and out of the process, and the location of
valves, pumps, heat exchangers etc. Process engineers will often give me a set of P&IDs that I refer to.
On several occasions I discovered that other interacting or subordinate loops had to be tuned first, or
placed in manual, before I could attend to the loop of concern. I also find flow measurements, not being
used for control, that I can trend for supplemental information on the control valve's performance, or if
there might be a need for implementing cascade control.

Operator Graphic

Pen, Paper, and Calculator


And don't forget the traditional pen, paper, and calculator. I find it handy and convenient to quickly draw
a diagram on paper, take notes, or to quickly run through calculations. I would often transfer my written
notes to electronic format for inclusion in my report after the day's tuning, or while waiting for step-test
results on a slow loop.

Hand Calculations

Process Walk-Down

Whenever possible, I go out to the plant with an experienced operator or engineer to take a look at the
process, equipment, and physical location and condition of the control valves and instrumentation. One
time I was dealing with a vastly oversized nitrogen injection control valve that was used to control
pressure on a distillation column. The loop was completely unstable, regardless of any tuning settings
we tried. We tried making 0.1% steps in controller output with the controller in manual mode. Stepping
the controller output upwards from 1.5% to 2.4% the column pressure showed no response (no physical
change in valve opening), but at 2.5% the pressure sharply decreased. When the operator and I went
out to the valve and radioed back to the control room to repeat the test, we noticed that the valve
position bumped by about 5% instead of the 0.1% change in controller output. We would never have
known this if we had not gone to the valve. After the faulty positioner was replaced, we could stabilize the
loop. (However, control was still poor because the valve was grossly oversized.)

Process Walkdown

Literature
I have several really good books on process control, instrumentation, control valves, processes, PID
controllers, and tuning. Some of them are academically inclined, making them virtually useless for
tuning controllers in real plants. But others are much more practical in nature. The latter are
obviously more suitable for practitioners. I track the sales of eight of these practical books on
amazon.com and the top seller, Process Control for Practitioners, has sold more copies over the last
two years than the next three books together.

Summary
Even though I am a big proponent of tuning software, it is not the only tool available for analyzing and
tuning control loops. It is important to consider the situation, and use the most appropriate tool or
technique for analyzing and optimizing control loops, even if it comes down to doing manual
calculations on a piece of paper.

Stay tuned,
Jacques Smuts
Principal Consultant at OptiControls and author of the book Process Control for Practitioners

Posted in 1. General, 9. Tips and Work-Process

Tuning Tips - How to Improve Your Results


February 9, 2011

I have read several posts on LinkedIn where the writers state that tuning rules don't work. Well, I politely
argue that it is not the rules that dont work. You have to know how to apply the rules properly and what
to expect from them. It's not rocket science, but if you miss a piece, your calculated tuning settings
might not work. So I provide this checklist with tips to give you a better understanding of what's involved
with tuning a controller. Hopefully it will improve your results.
Valve Performance
Is the control valve working properly? (See blog on control valve problems.) Dead band can severely
affect your step-test results. Stiction and positioner overshoot can cause oscillations regardless of how
well the controller is tuned.
Execute specific tests that check for dead band, stiction, and positioner overshoot in the control valve.
Step-test Procedure
Begin with a steady process variable and make a step-change large enough that the process variable's
response is clearly visible above the noise/disturbance level. A good rule-of-thumb is that the process
variable must move five times as much as the peak-to-peak noise/disturbance level. You have to make
measurements, and if the signal-to-noise ratio is too low (step too small) major errors could be made. If
you can't get the process variable steady enough you may have problems elsewhere that should be
addressed first. There is a procedure for step-testing at this blog.
Also, do multiple step-tests so that you can compare the calculations from different step tests with each
other. If you do just one step-test, you won't know if a disturbance affected the process during your test.
For most loops I do four step tests. I have had to do more than a dozen step tests on some volatile
processes to get good average values for process response. Take the time, and get good results.
Step-Test Measurements
Make sure you know how to measure process gain, dead time and time constant from a process
response curve. Do this for each step test. An Excel spreadsheet could help if you don't have tuning
software. Compare the numbers from each step test, remove outliers, take the average of the remaining
values.

Process Dynamic Response to a Step Test

Time Base
All the popular tuning rules assume that you are making time measurements in the same units as those
used by your controller. Are your controller's integral and derivative time settings in minutes or seconds?
Convert your measured values to match your integral and derivative time units, if necessary.
Scaling
All tuning rules assume the process variable and controller output measurements are normalized. That
means that changes in process variable have to be divided by the span of the measurement device, and
changes in controller output have to be divided by the span of the controller output. The latter is
normally 0% to 100%, but on some DCSs the controller output range can be different for outer loops
in cascade control to match the range of the process variable.
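The normalization is easy to get wrong, so here is a small sketch; the temperature example is hypothetical.

# Normalized (dimensionless) process gain from a step test.

def normalized_process_gain(d_pv, pv_span, d_co, co_span=100.0):
    """(dPV / PV span) divided by (dCO / CO span)."""
    return (d_pv / pv_span) * (co_span / d_co)

# Example: a 12 degC rise on a 0-200 degC transmitter after a 5% output step.
print(normalized_process_gain(d_pv=12.0, pv_span=200.0, d_co=5.0))   # 1.2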
Tuning Rule, Robustness, and Control Objective
The Ziegler-Nichols tuning rules were designed to provide quarter-amplitude decay, which is undesirable for
most processes. The control loop is also not very robust: it can easily go unstable if the process gain or
dead time increases. These two problems can easily be solved by dividing the calculated controller gain
by two. Note that the Ziegler-Nichols tuning rules result in sluggish loops if the process dead time is
longer than the time constant. See my comments on Z-N tuning rules for more detail.
The Cohen-Coon rules were also designed to provide quarter-amplitude decay, and have the same
robustness problem as the Ziegler-Nichols rules. These can easily be solved by dividing the calculated
controller gain by two.
Minimum IAE tuning rules give something close to quarter-amplitude decay, and have the same robustness
problem as the Ziegler-Nichols and Cohen-Coon rules. The solution is to divide the calculated controller
gain by two.
The Chien-Hrones-Reswick (CHR) tuning rules come in two sets, one for 20% overshoot (not
recommended for most processes because of overshoot and low robustness) and a 0% overshoot rule
(which is more robust and okay to use). Note that the CHR tuning rules result in sluggish loop response
if the process dead time is longer than the time constant.
The Lambda and Internal Model Control (IMC) tuning rules give very stable response (robust control
loops), and no overshoot if applied correctly. But loops with long time constants respond sluggishly to
disturbances. See blog on Lambda tuning for more detail.
High-end tuning software should allow you to select your tuning objective, and calculate tuning settings
accordingly. High-end tuning software should also warn you when calculated settings will result in a
control loop with low robustness.
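As an illustration of the halve-the-gain advice, here is a sketch using the widely published open-loop Ziegler-Nichols PI formulas; the process numbers are illustrative (normalized gain, times in minutes).

# Ziegler-Nichols open-loop PI settings, with the gain divided by two for
# better robustness, as recommended above.

def zn_pi_detuned(kp, dead_time, tau):
    kc = 0.9 * tau / (kp * dead_time) / 2.0   # ZN gain, then halved
    ti = 3.33 * dead_time                     # ZN integral time
    return kc, ti

kc, ti = zn_pi_detuned(kp=1.2, dead_time=0.5, tau=4.0)
print(f"Kc = {kc:.2f}, Ti = {ti:.2f} min")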
Controller Algorithm
Some tuning rules (like Ziegler-Nichols) have been developed for interactive PID algorithms, while
others (like minimum IAE) have been developed for noninteractive algorithms. There are conversions
available to go from PID settings on one type to the other. Note that if you dont use derivative (most
people dont), there is no difference between interactive and noninteractive algorithms.
See this article on controller algorithms.
A few DCSs and PLCs have parallel controller algorithms, and you have to convert your calculated
integral and derivative settings for use on a parallel algorithm.
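The conversions themselves are simple. Here is a sketch of the standard relationships, a series-to-ideal conversion using the factor F = 1 + Td/Ti and an ideal-to-parallel conversion to independent gains (check your vendor's documentation for the exact form your controller uses):

def series_to_ideal(kc, ti, td):
    """Convert interactive (series) PID settings to noninteractive (ideal) settings."""
    f = 1.0 + td / ti
    return kc * f, ti * f, td / f

def ideal_to_parallel(kc, ti, td):
    """Convert ideal PID settings to parallel-form gains Kp, Ki, Kd."""
    return kc, kc / ti, kc * td

# With no derivative (td = 0) the series and ideal forms are identical:
print(series_to_ideal(2.0, 30.0, 0.0))    # (2.0, 30.0, 0.0)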
Integral's Unit of Measure
All popular tuning rules assume your controller's integral setting is in units of time (as in minutes or
seconds), and not the inverse (as in repeats per minute or repeats per second). Invert the calculated
integral time if necessary.
Controller Gain's Unit of Measure
All popular tuning rules assume your controller uses controller gain, and not proportional band. Does your
controller use proportional band or gain? Convert your calculated controller gain to proportional band if
necessary.
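Both of these unit conversions (this one and the integral-time one above) are one-liners, using the usual definitions (proportional band = 100 / gain, reset rate = 1 / integral time):

def gain_to_proportional_band(kc):
    """Proportional band (%) from controller gain."""
    return 100.0 / kc

def integral_time_to_repeats(ti_minutes):
    """Reset rate in repeats per minute from integral time in minutes."""
    return 1.0 / ti_minutes

print(gain_to_proportional_band(2.0))     # 50.0 %
print(integral_time_to_repeats(0.5))      # 2.0 repeats per minute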
Process Linearity
Tuning rules assume your process gain, dead time, and time constant never change. If you have a
process where these numbers change significantly (increase by more than 50% or decrease by more
than 33%), this could severely affect loop response and stability. You should consider linearizing the final
control element or the measurement, or implementing controller gain scheduling. (You can schedule
integral and derivative time too; it is still called gain scheduling.)
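Here is a minimal gain-scheduling sketch; the breakpoint table is made up, and while it schedules on the PV, some applications schedule on controller output or another variable instead:

import numpy as np

# Hypothetical schedule: controller gain by operating region of the PV (% of span)
pv_breakpoints = [0.0, 30.0, 60.0, 100.0]
kc_values      = [4.0,  2.5,  1.2,   0.8]   # lower gain where the process gain is higher

def scheduled_gain(pv):
    """Interpolate the controller gain from the schedule for the current PV."""
    return float(np.interp(pv, pv_breakpoints, kc_values))

print(scheduled_gain(45.0))   # gain between the 30% and 60% breakpoints (1.85)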
Tools
Your effectiveness and results will not only depend on what you know about controllers, tuning, and the
process you are controlling, but also on what tools you are using.
Tuning software automatically identifies the process model, calculates controller settings that match the
controller algorithm and its units of measure, and provides simulations of expected loop response after
tuning. This can be well worth the money you pay for a software tool.
Manual tuning (not using software) can be greatly simplified by using an Excel spreadsheet to help
calculate process model parameters and controller settings. If you are not using software, take the time
and compile a spreadsheet to help you.
Let me know if you have questions or need more information.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

When to Use Which Tuning Rule


December 29, 2012

There are more than 400 tuning rules for PI and PID controllers [1]. How can one possibly choose the
best or most appropriate tuning rule from all of these? To simplify matters, the main differences between
the tuning rules can be grouped into four categories:
1. Type of process
2. Tuning objective
3. Process information required
4. Type of controller
Most of the tuning rules apply to first-order plus dead time (self-regulating) and integrator plus dead time
(integrating) process types. These two process types adequately cover the vast majority of control loops
in process plants. Other tuning rules apply to higher-order, oscillating, or unstable processes. Most of
the documented tuning rules apply only to processes with dominant time constants. This limits their
practical application. The Cohen-Coon tuning rules are an exception.
Tuning objectives include quarter-amplitude damping, minimization of some error integral, a specific
percentage overshoot, a critically damped response, robust tuning, and a specified closed-loop time constant. It is
rare to find a tuning rule with an adjustable tuning factor that allows you to change the speed of
response. The IMC / Lambda tuning rules are one exception.
The process information required for the tuning rules based on first-order plus dead time and integrator
plus dead time process types can be obtained by doing process step tests. A few tuning rules are based
on the ultimate cycling or relay tuning methods. Many of the academic tuning rules are based on high-order
process models, but they never tell you how to obtain the process model; they just base the tuning
on some fictitious model chosen by the author, which largely makes them useless for practical
application.
Most tuning-rule authors developed tuning rules for both PI and PID controllers, but with no guidance
on when to use which one. Some PID tuning rules apply to the interactive algorithm, while most apply to
the noninteractive algorithm. It is reasonably easy to convert from one type to the other.
To reduce all these complexities to something we can work with on most control loops, we can consider
two process types (self-regulating and integrating) and two tuning objectives (fast, or slow and very
robust). Ideally we also need an easy tuning factor to adjust the speed of response.
When to Use Which Tuning Rule
You could probably use any of the 400 tuning rules, as long as it applies to your situation. I have
successfully tuned most (but not all) control loops using just a few tuning rules. Here is what I
recommend for most loops:
For self-regulating processes, use the Cohen-Coon PI tuning rule (a sketch follows this list), with the
following exceptions:
- Use a stability margin of two or more to improve robustness and adjust the speed of response.
- If the dead time is more than four times the time constant (td > 4 tau), use the tuning rule for
dead-time-dominant processes.
- If you find it difficult to accurately measure the dead time, use the Lambda tuning rule.
- If you want the loop to have a specific speed of response, use the Lambda tuning rule.
- If you want the loop to absorb disturbances rather than pass them on to the next process, use
the Lambda tuning rule with the closed-loop time constant set to three times the open-loop time
constant.
- Use the derivative control mode (PID tuning rule) only when you need every last bit of speed,
and then only when the process lends itself well to the use of derivative.
For integrating processes, use the Ziegler-Nichols tuning rule, except for surge tanks and level
averaging, where you should use the two tuning rules named after these control objectives.
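To illustrate the first recommendation, here is a sketch using the classic Cohen-Coon PI formulas with the calculated gain divided by the stability margin (the example process values are arbitrary):

def cohen_coon_pi(gain, dead_time, time_constant, stability_margin=2.0):
    """Classic Cohen-Coon PI settings, with the controller gain divided by a
    stability margin (2 or more) as recommended above."""
    r = dead_time / time_constant
    kc = (time_constant / (gain * dead_time)) * (0.9 + r / 12.0) / stability_margin
    ti = dead_time * (30.0 + 3.0 * r) / (9.0 + 20.0 * r)
    return kc, ti

# Hypothetical self-regulating process: gain 1.5, dead time 10 s, time constant 60 s
print(cohen_coon_pi(1.5, 10.0, 60.0))     # ~(1.83, 24.7)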

[Figure: Self-Regulating Process]
[Figure: Integrating Process]
If you use a PID tuning rule and an interactive controller algorithm, or a controller with the parallel
algorithm, remember to convert the calculated tuning parameters to ones suitable for your controller
algorithm. Also remember to measure your process characteristics in the same time units your
controller's integral uses. And remember to convert integral time to integral gain if that is what your
controller uses. Finally, when tuning any control loop, watch out for control valve problems.

You can find much more information in my book Process Control for Practitioners.

Stay tuned!
Jacques Smuts
Principal Consultant OptiControls

Reference
1. O'Dwyer, A., A Summary of PI and PID Controller Tuning Rules for Processes with Time Delay.
Part 1: PI Controller Tuning Rules, Proceedings of PID '00: IFAC Workshop on Digital Control,
Terrassa, Spain, April 4-7, 2000, pp. 175-180.

Why Tuning Rules Don't Always Work


January 7, 2011

There are several reasons why PID controller tuning rules don't always work as advertised. I have
talked to several process control practitioners who tried them once or twice, had no success,
and gave up on using them as a result. Here is a list of items to consider when using tuning rules.

ZIEGLER-NICHOLS TUNING RULES


The Ziegler-Nichols tuning rules are the oldest and most popular. Ziegler and Nichols developed two
methods: the Process Reaction-Curve or Open-Loop method (done with the controller in manual)
and the Ultimate-Cycling or Closed-Loop method (controller in automatic).
The Ziegler-Nichols open-loop tuning rules have several drawbacks:
Issue 1: It tunes the loop for quarter-amplitude-damping response, which overshoots and oscillates
quite a bit.
Issue 2: It leaves the loop with very little robustness, which can lead to loop instability.
Issue 3: The rules give you very poor response if the process is dead-time dominant.
Issue 4: The rules are very sensitive to an accurate measurement of dead time, which is difficult on lag-dominant processes with short dead times.
The Ziegler-Nichols closed-loop tuning method does a little better with issues 3 and 4 above, but issues
1 and 2 remain a problem. In addition, this method is very sensitive to control valve problems like dead
band or stiction. More info here: http://blog.opticontrols.com/archives/39
Issues 1 and 2 (and 4 to some degree) can be alleviated by using only half or less of the calculated
controller gain.

COHEN-COON AND OTHERS


The Cohen-Coon and many other open loop tuning rules do much better with issue 3, but issues 1, 2
and 4 are still problems. Again, these can be resolved by detuning the controller gain.

LAMBDA TUNING RULES


Lambda tuning rules give you a stable response with no overshoot, and leave the loop with plenty of
robustness to accommodate measurement errors. So it seems like Lambda tuning rules overcome all
four of the issues listed above. However, these rules result in a very slow response to disturbances on
lag-dominant processes. More info here: http://blog.opticontrols.com/archives/260

CONTROLLER ALGORITHMS
If your controller algorithm and units of measure are not matched with the tuning rule you use, the
results can be undesirable, or even dangerous.
Of the PID tuning rules mentioned above, the Ziegler-Nichols rules were developed for controllers with
an interactive (series) controller algorithm, while Cohen-Coon and Lambda rules were developed for
noninteractive (a.k.a. standard or ideal) controllers. The PI tuning rules (no derivative) will work on both
interactive and non-interactive algorithms, while PID may require parameter conversion. If you detune
the controller as explained above, the difference between interactive and noninteractive algorithms
becomes much less important.
However, if you have a controller with a parallel algorithm, you definitely have to convert the calculated
settings to work with it. More info here: http://blog.opticontrols.com/archives/124

Note also that most tuning rules calculate controller gain, and not proportional band, and most calculate
integral time (as in minutes or seconds), and not integral gain (as in repeats per minute or repeats per
second). Make sure you convert the tuning settings to work on your controller.
Finally, the rules assume you are making the measurements of dead time and time constant in the same
time base used by your controller's integral and derivative settings, i.e. minutes versus seconds. If you
measure in seconds, but the controller uses minutes, you have to convert your measurements to
minutes before calculating tuning settings.

CONTROLLER SCAN INTERVAL


Tuning rules assume controllers are analog in nature, meaning that they continuously sample the process
variable and calculate an output. However, modern controllers are digital, and execute their
task intermittently at a rate called the scan interval, execution period, or something similar. Controllers
typically execute at 1-second intervals, but this could be much faster or slower, depending on the
application.
Intermittently scanning and executing at a 1-second interval is normally not a problem for slow loops like
temperature, gas pressure, level, and composition. However for fast loops like flow, liquid pressure, and
motor speed, a 1-second scan interval can add a substantial proportion of dead time to the loop. If this
extra dead time is ignored in the tuning rule, controller settings will be too aggressive, possibly leading
to oscillations and instability on fast loops.
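A common rule of thumb is to treat roughly half to one scan interval as extra dead time when applying a tuning rule; a trivial sketch with arbitrary numbers:

def effective_dead_time(measured_dead_time, scan_interval):
    """Approximate the extra dead time a digital controller's sampling adds:
    roughly half the scan interval (a common approximation, not an exact value;
    use a full scan interval to be more conservative)."""
    return measured_dead_time + 0.5 * scan_interval

# A fast flow loop: 2 s of process dead time with a 1 s scan interval
print(effective_dead_time(2.0, 1.0))      # 2.5 s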

OTHER PROBLEMS
Another reason why tuning rules fail to deliver on our expectations could be that the control valve or
damper is defective. This includes problems like dead band and stiction. These problems can
invalidate the results you get from a step test, and affect the performance of even a well-tuned
loop. More information here: http://blog.opticontrols.com/archives/77

WHICH RULE TO USE


You should select the tuning rule according to the desired control objective for the loop, while
considering the constraints above:
- If you need a very stable loop that absorbs disturbances rather than passing them on, use the Lambda
tuning rules.
- If you need fast recovery from disturbances, use the Cohen-Coon tuning rules, but use only half of the
rule-calculated value for controller gain to overcome issues 1 and 2. However, if the process dead time
is very short (issue 4), or the PV is noisy and you can't measure the dead time accurately, use the
Lambda tuning rules.

TUNING SOFTWARE
High-end tuning software packages can diagnose control valve problems such as dead band and
stiction. They also let you specify the control objective and your controller type. They can then produce
appropriate tuning settings after analyzing the process response from a step-test.

Stay tuned!
Jacques Smuts, author of the book Process Control for Practitioners

Posted in 9. Tips and Work-Process

2 Responses to Why Tuning Rules Don't Always Work

Arkadiy Turevskiy:
April 27, 2011 at 2:24 pm

Thanks for the nice overview.


When designing controllers (any controllers in general, including PID controllers specifically), it
is a good idea to create a good plant model and to simulate your design in software before
trying it on the actual process.
Here is a page we put together with a comprehensive set of resources for tuning, simulating,
and implementing PID controllers in MATLAB and Simulink:
http://www.mathworks.com/discovery/pid-control.html

Jacques:
April 27, 2011 at 2:54 pm
Arkadiy, I totally agree with you. Without a process model, you're tuning in the dark, and
simulations help you see the effect of your changes before actually implementing them in the
process. While aerospace and other high-tech industries often use more complex models and
controllers, for most industrial processes, a simple first order + dead time model and PI or PID
control are often sufficient.
