
Copyright 2005 by ISA.

Presented at ISA EXPO 2005, 25-27 October 2005


McCormick Place Lakeside Center, Chicago, Illinois, www.isa.org
Optimizing Fieldbus Link Schedules Makes a Difference!



William R. Hodson, P.E.
Engineering Fellow
Honeywell Process Solutions
1100 Virginia Drive
Fort Washington, PA 19034 USA


KEYWORDS

Fieldbus, Link Schedule, Optimization, Open Communications Standards, Performance,
Macrocycle


ABSTRACT

FOUNDATION Fieldbus (FF) protocol requires that schedules be generated for the synchronization of
the link and all its interoperating components, in particular function block execution timing and
coordinated timings of the publications of function block outputs. However, the FF protocol
specifications do not require that the schedules be optimized in any way. This paper examines
common weaknesses in link schedule building and proposes methods to optimize them. The author
goes on to define metrics to evaluate the link scheduling improvements numerically. Ultimately,
improvements lead to decreased latency, better control, increased utilization of a link, and faster
responses to demanded operations such as associated display call-ups.


INTRODUCTION

The Fieldbus Foundation celebrated its tenth anniversary last year. In those ten years, much has been
accomplished. The technology has stabilized and acceptance is growing, although at a slower rate than
anticipated, primarily due to a slow economy and a shortage of greenfield projects. Fieldbus is not
usually the best candidate for piecemeal replacements of conventional equipment.

Initial projects often started as pilot plant trials or small expansions, just to try it out. Projects have
been growing and have reached the 13,000 FF devices-per-project level now. With such size, the
pressure to improve performance is increasing. One way to improve performance is to optimize the
Fieldbus link schedules. This can bring better control, more devices per link, and better performance
for related displays and applications.


BACKGROUND

Before describing benefits and opportunities for link schedule optimization, one must understand the
basics of the Fieldbus link schedule. Let's start with a relatively simple configuration and its schedule.
See Figure 1.

There is a transmitter with an Analog Input (AI) function block and a PID control function block and a
valve with an Analog Output (AO) function block on the same link. A configuration tool schedules the
AI block's execution to occur first in the cycle, called the macrocycle. (Timing attributes needed for
the schedule are obtained from the device vendor's device description files.) Then the PID block is
scheduled immediately next, accepting the AI block's output internally, with no need to publish it.
When the PID is done executing, it is scheduled to publish its control output, OUT. The valve will
subscribe to it and the AO block will be scheduled to execute shortly after the publication is received.
The AO block must publish its back-calculation output, BKCAL_OUT so that its upstream block (the
PID) can confirm its operating status and use it for initialization if needed on the next cycle.

Time synchronization is tightly controlled on the link
since all devices use a common knowledge of time.
Little margin is needed in the schedule to allow for
time variances between devices. This example, of
course, is intentionally simple. If this is all there was
on the link, there would be nothing to improve. There
are only a couple basic rules to keep in mind for the
schedule: (1) Only one publication can be scheduled
on a link at a time and (2) only one function block may
execute at a time within a given device.


PERFORMANCE OPTIMIZATION OPPORTUNITIES

Let's see what happens when one adds a second loop
to the link. Figure 2 shows the common schedule
which simply adds another staircase to the schedule.
For each loop, there is no opportunity to reduce
latency (wasted time between executions and
publications) since there is no wasted time between AI
and AO block executions in each loop. However, this
case shows that the schedule is needlessly consuming
macrocycle time because the additional devices are
permitted to have their function blocks execute in
parallel with the other devices. They simply cannot
publish together. This presents the first optimization
opportunity:


FIG. 1: An elementary link schedule showing a schedule for a transmitter with an AI and PID block and a valve with an AO block.

FIG. 2: Adding a second loop may double the scheduled portion of the macrocycle.
1. Maximize the usability of the macrocycle by reducing its scheduled fraction, taking
advantage of parallel execution of blocks whenever possible.

Imagine if there were 4 to 6 loops on the link. The macrocycle
duration may have to be significantly extended, reducing the
controllability of the loops or else fewer loops could be used on a
given link. Since link interfaces cost money, the more devices-
per-link, the lower the interface cost-per-device. This desire will be moderated by scale-of-loss
concerns and other bandwidth-limiting factors. Figure 3 shows that the scheduled portion of the
macrocycle can be reduced by almost half in this case. Leaving a significant portion of the macrocycle free of
publications permits the system to perform other network
functions like parameter access more rapidly, resulting in faster
display call-ups.

Figure 3 also reveals another benefit over Figure 2. Since the
publications are bunched together instead of spread out, more bandwidth will be available on the link
for other, longer messages. Fieldbus stacks look ahead at the schedule and will not initiate an
unscheduled message if it does not have time to complete before
an impending scheduled publication. In other words, it is better
to bunch publications consecutively than to spread them out
like a picket fence: the holes in the fence are what carry the
unscheduled traffic, and the bigger the hole, the better. So the second optimization
opportunity is:

2. Maximize the availability of the macrocycle for
unscheduled communications by scheduling publications
consecutively or close together whenever possible.
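A toy calculation (with assumed numbers) illustrates the point: four 15 ms publications in a 500 ms macrocycle leave a far larger contiguous free interval when scheduled consecutively than when spread out picket-fence style.

```python
def largest_gap(pub_starts, pub_len, macrocycle):
    """Largest publication-free interval in one macrocycle, given
    publication start times (ms), a common publication length (ms),
    and the macrocycle duration (ms)."""
    busy = sorted((s, s + pub_len) for s in pub_starts)
    gaps, cursor = [], 0
    for start, end in busy:
        gaps.append(start - cursor)
        cursor = max(cursor, end)
    gaps.append(macrocycle - cursor)
    return max(gaps)

# Four 15 ms publications in a 500 ms macrocycle:
spread = largest_gap([0, 125, 250, 375], 15, 500)   # picket fence
bunched = largest_gap([0, 15, 30, 45], 15, 500)     # consecutive
```

With the same total publication time, the bunched schedule offers one 440 ms hole versus the spread schedule's best hole of 110 ms, so much longer unscheduled messages can get through.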

Figure 4 illustrates another opportunity. Here there are three
transmitters measuring process values and the valve employing an
input selector (ISEL) block to average or select the median. An
ISEL, PID, and AO block reside in the valve.

But Figure 5 shows that by taking advantage of parallel execution
of the three transmitters (simply skewed enough to permit their
three publications to be adjacent to one another) the total latency
from the execution of the first AI to the final AO is reduced.
Reducing latency can improve control. Control stability issues are
reduced. Actuator movement, hence, wear-and-tear, can be
reduced. Productivity can be increased if the reduced latency leads
to tighter control which leads to the ability to operate closer to
equipment design limits.

FIG. 3: The schedule is reduced via parallel execution and grouping of publications.

FIG. 4: Three transmitters sample in series and provide values for an input selector, PID, and AO.

FIG. 5: Parallel execution can reduce the latency for a portion of the control loop.

FIG. 6: This triple cascade includes an arithmetic calculation and an integrator; dotted lines indicate function blocks within a single device, and the loops run at 2 s, 1 s, and 0.5 s periods.

In addition to parallel processing, interference from the scheduling of other devices can add latency to the
control loop. This is well addressed by prioritizing the
elements in the schedule such that the most important ones
are scheduled with the fewest possible conflicts. Hence, the
third optimization opportunity is:

3. Minimize control latency via parallel execution and
prioritized scheduling whenever possible.

Now let's look at a more complex example. Figure 6 shows
a tertiary loop consisting of AI3, Arithmetic, PID3, and AO
blocks. It executes at a one-half second period. AI4 feeds the
calculation in the Arithmetic block, but at two seconds. (It is
associated with a slower moving variable such as
temperature.) A secondary sequence of AI2 and PID2
calculate the setpoint for PID3 at a one second period. The primary sequence of AI1 and PID1
calculate the setpoint for PID2 at a two second period. Clearly the most important elements to
schedule with minimum latency are the AI3-Arithmetic-PID3-AO blocks. The innermost loop of a
cascade has the tightest timing requirements. The integrator block feeds on the output of the
Arithmetic block, but since its output is not used in the control loop, its timing is much less important
than any blocks used for control.

The non-optimized or natural schedule is shown in Figure 7. It basically lays the elements into the
schedule as they are encountered without regard to optimization. Since there are multiple execution
periods for the blocks, the two-second macrocycle repeats the one-half-second blocks four times, the
one-second blocks twice, and the two-second blocks once. This schedule consumes 499 ms of
the 500 ms available in the first 500 ms sub-schedule, so it would not even be acceptable, as it allows no
time for unscheduled client-server communications to occur. (A guideline of 50% free time is
sometimes used.) The figure also shows, using vertical bands, the times that unscheduled traffic
cannot use the bus. Also, there is considerable unnecessary latency in the schedule between the input
processing and the control action.

Figure 8 is an optimized schedule, taking into account priorities and other factors. First, notice that
the initial 500 ms sub-cycle is now only scheduled for a 283 ms duration, instead of 499 ms, leaving
more unscheduled time. Next, notice that the vertical bands showing publications are better grouped,
also providing for better unscheduled communications bandwidth. Back-calculations are grouped with
other publications where possible. Latency from input to output is significantly reduced by priority
scheduling of the elements into the schedule.

The fourth optimization opportunity is:

4. Provide the best scheduling for the highest priority elements.

FIG. 7: The natural schedule for the triple cascade consumes most of the macrocycle and leaves little room for unscheduled communications.

FIG. 8: The optimized schedule for the triple cascade provides much more time for unscheduled communications and minimizes latency from inputs to output.

Another opportunity for optimization comes with the back-calculation publications. These
publications notify the upstream control block of current values, limits, cascade relationship
indications, and data qualities. Since they are available as soon as the downstream block executes, but
are not needed until the upstream block executes on the next cycle, there is usually a great deal of slack
regarding where they are to be scheduled. Since a goal of the schedule optimization is to provide the
largest breaks between publications for the unscheduled traffic, placing each additional publication in
the schedule adjacent to a previously scheduled publication is less disruptive than breaking an
unscheduled time interval into two intervals. But if it is necessary to break an unscheduled time
interval into two intervals, break the smallest eligible one and break it as unequally as possible. This is
generally preferable to breaking it equally, because long messages can block the communications
stack waiting for a long enough interval in the schedule before being transmitted.

So the fifth optimization opportunity is:

5. Scheduling of back-calculation publications can take advantage of a wider time window of
eligibility and should minimize disruption to the availability of large quiescent
communications intervals in the schedule.

If the upstream block that receives the back-calculation publication executes at a slower period, the
downstream publication need not be published after each of its more frequent executions. It need only
be published on the final execution that precedes the scheduled execution of the upstream receiving
block.
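A minimal sketch of that placement rule, assuming the free gaps are known in milliseconds (the function name and encoding are illustrative, not drawn from any FF tool):

```python
def place_backcalc(free_gaps_ms, pub_len_ms):
    """Place a back-calculation publication per the rule above:
    choose the smallest free gap that can hold it, and place it at
    one edge so the gap is broken as unequally as possible (one
    large remainder rather than two medium ones).
    Returns (chosen_gap, remaining_gap), or None if nothing fits."""
    eligible = [g for g in free_gaps_ms if g >= pub_len_ms]
    if not eligible:
        return None
    gap = min(eligible)             # break the smallest eligible gap
    return (gap, gap - pub_len_ms)  # edge placement: one remainder
```

Choosing the smallest eligible gap preserves the large quiescent intervals for long client-server messages, which is the goal stated in opportunity 5.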


PRIORITIZATION OF SCHEDULE ELEMENTS

When building the schedule, it is important to give the best positions to the most important
publication and execution elements. Since minimization of latency is an important goal, all schedule
elements involved with control are more important than those involved only with monitoring.
This means that the scheduling mechanism needs to understand how each function block is used.

Next, where there are multiple execution periods on the same link, higher priority is given to the faster
blocks. It is assumed that since the control engineer specified them to execute more frequently, they
are associated with an application function that demands faster action.

Where there are multiple loops executing at the same execution period, those that are associated with
the innermost loop of cascades are assigned the highest priority because those dynamics are always
faster. Then the next level of cascade setpoint calculation sequence is given a lower priority. This
reduced prioritization continues to the outermost level of a cascade.

If all else is equal (control vs. monitoring, execution period, level of cascade), then priority is given to
the sequences that are going to contribute the most to the schedule, since they need the least
interference. The smaller contributions can deal better with the conflicts presented by the preceding
contributions.
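The four tie-breakers above can be captured as a single sort key. The attribute names below are assumptions made for illustration; they are not fields of any FF configuration tool.

```python
def priority_key(element):
    """Sort key for schedule elements: control beats monitoring,
    faster period beats slower, innermost cascade level beats outer,
    and the largest contributor to the schedule is placed first.
    element: dict with 'is_control', 'period_ms', 'cascade_depth'
    (0 = innermost), and 'contribution_ms'."""
    return (not element["is_control"],     # control loops first
            element["period_ms"],          # faster periods first
            element["cascade_depth"],      # innermost cascade first
            -element["contribution_ms"])   # biggest contributors first

loops = [
    {"name": "monitor", "is_control": False, "period_ms": 500,
     "cascade_depth": 0, "contribution_ms": 60},
    {"name": "inner",   "is_control": True,  "period_ms": 500,
     "cascade_depth": 0, "contribution_ms": 120},
    {"name": "outer",   "is_control": True,  "period_ms": 2000,
     "cascade_depth": 2, "contribution_ms": 90},
]
ordered = sorted(loops, key=priority_key)
```

Sorting by this key schedules the fast innermost control loop first, the outer cascade level next, and the monitoring-only element last, matching the prioritization described above.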



MANUAL OPTIMIZATION

Herein is presented a set of principles that, if followed, lead to an optimized link schedule for Fieldbus
devices. Is it sufficient for a properly trained control engineer simply to optimize each link manually?
You are welcome to try! The first examples were trivial, just to point out the basic principles. The last
example was more challenging and rather time-consuming to schedule
manually, even though it involved only four devices. More complex schedules incorporating a dozen
or more devices can be quite challenging to manually optimize.

A plant with 21,000 FF devices that averages seven devices per link would have 3,000 links to
optimize. If an engineer could optimize them in only 20 minutes each, it would take a thousand hours.
If it would take an hour to optimize each link, a total of three thousand hours (one and a half
engineering staff-years) would be required. The benefits of an automatic optimization algorithm for
FOUNDATION Fieldbus link schedules then become clearer.


METRICS DEFINITIONS

These optimization techniques appear beneficial, but how does one know just how much gain one has
achieved? In order to understand the gain in each characteristic of concern, one needs to determine a
metric for comparison. Since there is no industry standard for optimization gain, this paper will
propose some.

Macrocycle Utilization Gain (MUG): One of the goals was to reduce the scheduled portion of the
macrocycle using parallel execution. If one takes the ratio of the duration of the scheduled portion of
the optimized schedule to the duration of the scheduled portion of the natural, non-optimized schedule
and subtracts that from unity, the difference is the fraction of the original schedule that has been
recovered. Multiplying by 100% converts that to a percentage gain. So Macrocycle Utilization Gain
can be computed as:

MUG = [1 − (optimized scheduled duration) / (natural scheduled duration)] × 100%

A 25% reduction in the scheduled portion of the macrocycle would be considered good. A 40%
reduction would be great! A 100% reduction, of course, would not be possible.

Latency Improvement Factor (LIF): Another goal was to reduce scheduled latencies, unneeded
delays occurring between the input processing and the output processing of a control sequence. If one
takes the ratio of the optimized control sequence duration to the natural control sequence duration and
subtracts that from unity, the difference is the fraction of the original sequence duration that has been
recovered. Multiplying by 100% converts that to a percentage gain. So Latency Improvement Factor
can be computed as:

LIF = [1 − (optimized sequence duration) / (natural sequence duration)] × 100%

A 20% reduction in the control sequence duration would be good. A 100% reduction, of
course, would not be possible.

An alternate method of computation would be to determine the non-essential portions of latency and
then calculate what percentage of that number can be eliminated by optimization. That would result in
higher numbers and directly indicate the effectiveness of the algorithm to eliminate the non-essential
portions. However, the above suggested measure is more indicative of the gain to the control loop
itself, retaining the essential execution and publication times as part of the calculation.

Publication Gap Availability Improvement (PGAI): The third goal was to locate publications in
the schedule to increase the gaps between publications so that larger gaps are available for unscheduled
communications to support other needs such as display service and downloads. This is not a simple
metric to determine, because it is not linear. For example, the sum of gap times before and after the
optimization is usually identical, because the publications have only been moved by optimization, not
eliminated (except for certain back-calculation publications in multi-period schedules). Gaps of less
than 20 milliseconds are unlikely to be used. Gaps of more than 50 milliseconds are quite likely to be
used. In between 20 and 50 milliseconds, the probability of usage is dependent on the length of the
client-server message at the top of the queue. A reasonable approximation to the gain can be attained
by taking the ratio of the sum of the squares of the natural qualifying publication gaps to the sum of
the squares of the optimized qualifying publication gaps. ("Qualifying" means publication gaps
of greater than 20 ms duration, because smaller gaps are unusable.) Subtract that from unity and
multiply by 100% to estimate the Publication Gap Availability Improvement:

PGAI = [1 − Σ(natural qualifying gaps)² / Σ(optimized qualifying gaps)²] × 100%
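A sketch of the metric as defined, with the 20 ms qualification threshold; the gap lists are assumed inputs in milliseconds, and the function name is illustrative.

```python
def pgai(natural_gaps_ms, optimized_gaps_ms, threshold_ms=20):
    """Publication Gap Availability Improvement, in percent:
    compares the sums of squares of qualifying (> threshold) gaps
    before and after optimization."""
    def score(gaps):
        return sum(g * g for g in gaps if g > threshold_ms)
    return (1 - score(natural_gaps_ms) / score(optimized_gaps_ms)) * 100
```

Squaring the gaps rewards consolidation: four 30 ms gaps and one 120 ms gap hold the same total free time, but the single large gap scores four times higher, reflecting its much greater usability for long messages.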


METRICS OF THE EXAMPLES

One can evaluate the gains on the examples presented earlier. Returning to Figure 3's optimization of
Figure 2's schedule, wherein function blocks were executed in parallel where possible, the Macrocycle
Utilization Gain is 46% and the Publication Gap Availability Improvement is 32%. The Latency
Improvement Factor is 0% because there was no unnecessary latency in the sequence to eliminate.

In the triple transmitter example shown in Figures 4 and 5, the Macrocycle Utilization Gain is 22%, the
Latency Improvement Factor is 22%, and the Publication Gap Availability Improvement is 12%.

More impressive gains are often associated with more complex schedules caused by more devices.
The triple cascade example of Figures 6, 7, and 8 netted a Macrocycle Utilization Gain of 43%, a
Latency Improvement Factor of 21%, and a Publication Gap Availability Improvement of 45% when
optimized.


SUMMARY

As FOUNDATION Fieldbus installations grow in popularity, users desire to place more devices on a link
in order to share the cost of the interface. The link schedule is often a limiting factor to the number of
devices on a link because each device publication must be scheduled in synchronization with function
block executions. By optimizing the link schedule, the user is able to use the additional bandwidth to
add more devices or to improve the performance of the link for other communications uses such as
display call-ups. Latency reduction in control loops serves to provide tighter control and,
consequently, less wear-and-tear on final control elements.




* * *
