

APPLICATION OF COGNITIVE ERGONOMICS TO CONTROL
ROOM DESIGN OF ADVANCED TECHNOLOGIES

Tom Kontogiannis
Ergonomics Laboratory, Dept. of Production Engineering,
Technical University of Crete, Chania GR-73100, Greece

Erik Hollnagel
OECD Halden Reactor Project, P. O. Box 173,
N-1751 Halden, Norway

Abstract

The evaluation of control room design is usually a combination of checking the
compliance with established human factors guidelines and assessing the allocation of
functions. Although this addresses many generally recognised problems, the
decompositional nature of the evaluation neglects the dependencies between various
design aspects and the interaction between context and cognition. In particular, the issue of
the appropriate level of automation cannot be determined by a classical human factors
evaluation. In order to perform an evaluation that corresponds to the principles of
cognitive ergonomics, it is necessary to account for how specific design solutions affect
the dynamic interaction between humans and machines and the way in which cognition
depends on the context of the task. This can be achieved by systematically describing the
dependencies between various design aspects of information presentation and by assisting
this effort with a qualitative modelling layer. This paper describes a framework which
assesses the cognitive demands resulting from the application of information technology
and, hence, serves as a basis for design decisions concerning specific interface issues, task
allocation, and level of automation.

KEYWORDS: Control room design, information technology, joint cognitive systems,
computer assistance, cognitive ergonomics.

1. INTRODUCTION

The early nineties have witnessed the introduction of advanced technologies in the control
rooms of complex systems. In the nuclear industry, for instance, third-generation
technologies made it possible for operators to supervise the process through compact
workstations with enhanced graphic and sound capabilities (e.g., “soft” controls, multi-
function buttons, and auditory feedback) and through computerised procedures and
intelligent aids. In a similar way, the emergence of “glass-cockpits” and flight management
systems has marked modern aircraft flightdecks. However, a number of studies on human
interaction with advanced technologies (e.g., Billings, 1991; Mossier et al., 1992; Woods
et al., 1994; Hopkin, 1995) have indicated that the postulated benefits of increased
reliability, maintainability, and quality of service have not yet completely materialised.

One of the main reasons for the deficiencies in the coupling of humans and new
technologies is that the operating environment associated with advanced systems is very
different from that of conventional control rooms. System designers have not anticipated
the changes in cognitive demands introduced by advanced technologies. As O’Hara & Hall
(1992) remarked “with the increased use of computer-based interfaces, cognitive issues are
emerging as more significant than the physical and ergonomic considerations that
dominated the design of conventional control rooms”. Developments in modern
technology are likely to affect the role of operators and their teams, the requirements for
knowledge and skills, and the way operators interact with the system. In addition, changes
in broader job factors, such as, operator licensing, training, and team work are likely to
take place.

These issues are now receiving wider recognition by many regulatory authorities
and research is in progress to review existing guidelines for control room design and the
introduction of advanced technology in general (e.g., Carter, 1992; O’Hara et al., 1995). In
view of these changes in cognitive and collaborative demands, cognitive ergonomics has
to play a crucial role in matching technological options and user demands. It is necessary
therefore to develop design guidelines that would address the interaction with new
technology in the context of new cognitive and collaborative demands.

An irony of automation, according to Norman (1990), is that our current problem
with automation is not that it is too powerful, but that it is not powerful enough. In a
similar context, Woods et al. (1990) argued that automation, at its current level of
development, lacks the conceptual power to anticipate operator responses, to provide
satisfactory accounts of its goals, to share hypotheses about the state of the world, and to
re-direct its line of reasoning in cases of misperception of the nature of the problem. To
make automation more “intelligent” and “co-operative” it is necessary to examine the
nature of cognition in complex worlds and its coupling with the wider context of work
(e.g., changes in staff qualifications, team roles, training and computerised support tools).

This paper describes a framework that can be used to assess the cognitive demands
resulting from the application of information technology and, hence, serves as a basis for
design decisions concerning specific interface issues, task allocation, and level of
automation. A qualitative modelling layer is proposed based on a number of theoretical
postulates that concern the interactions and dependencies between various design features
and job factors as well as the coupling between cognition and context.

2. APPROACHES TO DESIGN AND EVALUATION

The design of Man-Machine Interaction suffers from the lack of a methodology or
discipline corresponding to what can be found, for instance, in the field of software
engineering, although not for the lack of trying (Dowell & Long, 1989). Design can be
defined as the art (or skill) of matching constraints with possibilities, as shown in Figure 1.
Without task constraints, the design can be anything that the system or the technological
options allow. If there are too many task constraints, then possibly there are no options that
match them. If either constraints or possibilities are ill-defined, design becomes an
unwieldy process.

The designer's problem is illustrated by the situation where two (or more) options
are available to solve a specific problem, e.g., displaying information about the state of a
sub-system. Design criteria for option selection can be based on user requirements which,
however, can be defined at different levels of the man-machine interaction. At one level,
design criteria can include ease of interface use, detection of abnormal events, consistency
of displays and controls, and so on. At the level of operator tasks, requirements may
concern the content and format of information so that only relevant information is
displayed, reminders are provided for incomplete tasks, and relationships between process
variables are made explicit to operators. Yet at another level, requirements can be made in
the context of training and team work, such as, compatibility with operator strategies and
information needs, minimisation of training needs, and transparency of operator actions to
other team members. The definition of user requirements and the provision of matching
rules to technological options is an intricate design problem.
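To make the matching idea concrete, the following minimal Python sketch treats each design option as a record of the user requirements it satisfies at the three levels discussed above, and selects the options that meet all stated constraints. The option names and requirement labels are invented for illustration; the paper prescribes no such data structure.

    # Each design option lists the user requirements it satisfies, grouped by
    # level of the man-machine interaction (interface, task, training/teamwork).
    OPTIONS = {
        "mimic display": {
            "interface": {"ease of use", "consistency"},
            "task": {"explicit variable relationships"},
            "teamwork": {"transparency to other team members"},
        },
        "tabular display": {
            "interface": {"ease of use"},
            "task": {"only relevant information", "reminders for incomplete tasks"},
            "teamwork": {"minimal training needs"},
        },
    }

    def matching_options(constraints):
        """Return every option that satisfies all stated requirements."""
        return [name for name, satisfied in OPTIONS.items()
                if all(req in satisfied.get(level, set())
                       for level, reqs in constraints.items()
                       for req in reqs)]

    # Constraints defined at two levels of the interaction:
    print(matching_options({"interface": {"ease of use"},
                            "task": {"explicit variable relationships"}}))
    # -> ['mimic display']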

The specification of user requirements should be based on a “structured” analysis
of operator tasks, cognitive processes and team work (Figure 1). In classical ergonomics
the analysis has focused on overt aspects of behaviour using methods of task analysis, such
as, operational sequence diagrams, timelines, and link analysis (Meister, 1985). The
problem with these types of analysis is that they cannot capture the cognitive demands
made by the interaction of people with advanced technology. Methods of cognitive task
analysis based on cognitive ergonomics (cf. Section 2.2) need to supplement the already
existing types of analysis.

Figure 1. The main elements in a design method

The development of design guidelines should serve to improve the quality and consistency
of designs by providing the designer with a clearly delineated set of alternatives together
with well-defined design criteria. One reason that makes this ideal hard to attain
relates to problems of usability. Accumulated knowledge of human factors has produced a
large number of detailed design guidelines, but to be useful it must be easy to identify the
guidelines that are relevant for a given problem without having to search through them all.
This, paradoxically, mirrors the issues of functionality and usability that the guidelines
should help to resolve. Functionality refers to the number of functions a specific system is
able to perform. Usability refers to the functions that the user either is aware of or is able
to use efficiently. At present the functionality of many systems, in particular software
systems, is much higher than their usability. Good design guidelines and usability
engineering should help to reduce this discrepancy. Unfortunately, the design guidelines
themselves suffer from the same problem: high functionality and low usability. The
principles and criteria that are the basis for design should also serve as the basis for
evaluation¹ (Figure 1). Unfortunately, evaluations suffer from the same weaknesses as
design, that is, it is hard to find well-defined principles that can be used for evaluation.

¹ It goes without saying that the evaluation must be a concurrent process during the design, but
for the sake of the discussion it is assumed that it takes place as a separate activity.

The definitions of user requirements and matching rules have given rise to different
approaches to control room design. In the traditional, technology-driven approach, the
design was approached from a strictly engineering point of view while human factors
considerations were left for the final stage to overcome certain design problems. In other
words, the interaction between people and technology was seen as an additional aspect of
system design. This approach has been criticised for producing designs that were
expensive to modify, reducing user acceptability, and generating excessive training
problems (Algera et al., 1989; Bainbridge, 1991a). The following section discusses two
other approaches based on classical and cognitive ergonomics, respectively, which appear
to be more promising.

2.1 Classical Ergonomics

Incorporating human factors at the starting point of control room design and matching
design to user capabilities and limitations have been well-established principles of the
classical ergonomics approach. To understand the concepts and methods used in this
approach, typical of the period until the end of the 1970s, we need to examine the
fundamental principles of classical ergonomics that can be reviewed as follows:

 Human capacity for learning and adaptation is insufficient to meet the
technological demands. This principle was a simple statement of fact. Technology
had made it possible to produce machines of all kinds that were faster and more
complex, hence increased requirements to control. Humans, on the other hand,
remained the same.

 Human capacity for processing information is decisive for the capacity of the
joint system. The emergence of ergonomics coincided with the development of the
modern digital computer, and the analogy between the computer and the human
mind appeared to offer some fundamental explanations of human functional

characteristics by referring to hypothetical information processing structures. This
therefore became the leading metaphor for several decades.

 The human operator can be described as an optimal control system or as an
information processing system. Machines and processes had long been described
in terms of flows of information and control, and the introduction of information
processing psychology made it possible to describe the operator in the same way.
This held the promise of making classical ergonomics an exact science that could
gain the respect of engineers.

Generally speaking, the solution advocated by classical ergonomics was to match carefully
the interface to known human capacities and limitations. This was done by observing the
following principles:

 Physical display characteristics should be matched to human sensory channels keeping
signals well above threshold and minimising “noise”.
 Display coding should be adjusted to natural or normally acquired perceptual structures
and to human memory capacity, particularly short-term memory.
 Controls should be made to suit the size, shape, strength and precision of limb
movements of the normal user population.
 The pattern of control manipulations should be selected in conformity with normal or
specially acquired motor skills, such as, eye-hand manipulation, typing and speech.
 Immediately displayed responses to operator's control actions (control “feel”
verification) should be made to agree with normal human kinaesthetic and other local
feedback modes.

The classical ergonomic approach to system design is epitomised by the so-called Fitts’
List (Fitts, 1951) which was developed to allocate functions in an air-traffic control
system. The principle is that a list or table is constructed of the strong and weak features of
humans and machines, which then is used as a basis for assigning functions and
responsibilities to the various system components. The problem with this approach is that
training and design decisions are made to increase the competence of human and machine
elements in isolation. The two elements are seen as working independently or in parallel
rather than jointly. In this sense, this approach fails to address how humans can build upon

information processing work and decisions made by machines. As Langlotz & Shortliffe
(1983) remarked “excellent decision making performance (on the part of machines) does
not guarantee user acceptance”. Operators need to cross-check the reliability of machine
strategies and be able to build upon them rather than repeat the same process independently.
Classical ergonomics has not addressed these issues, partly because technology had not
achieved an adequate level of sophistication until the early nineties.
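The decompositional character of the Fitts' List approach can be made explicit with a small sketch. The capability scores below are invented (the original list was qualitative), but the allocation rule shows the essential point of the critique: each function is assigned by comparing human and machine in isolation, with no representation of joint work.

    # Invented suitability scores on a 0-1 scale; the original Fitts' List
    # (Fitts, 1951) was a qualitative comparison, not a numerical one.
    FITTS_LIST = {
        "detect weak or ambiguous signals":  {"human": 0.9, "machine": 0.4},
        "improvise in novel situations":     {"human": 0.9, "machine": 0.2},
        "repetitive routine computation":    {"human": 0.3, "machine": 1.0},
        "fast, precise control responses":   {"human": 0.4, "machine": 0.9},
    }

    def allocate(function):
        """Assign a function to whichever agent scores higher in isolation -
        precisely the step that ignores how the two might work jointly."""
        scores = FITTS_LIST[function]
        return max(scores, key=scores.get)

    for function in FITTS_LIST:
        print(f"{function}: {allocate(function)}")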

Classical ergonomics made considerable advances in understanding the nature of
man-machine interaction and the principles have in many ways become the psychological
“common sense” knowledge of the engineering domain. The continued change in the
nature of work and the increasing recognition of the importance of cognition has, however,
created a need to adjust the principles of classical ergonomics that remained confined by
the information processing metaphor.

2.2 Cognitive Ergonomics

In the 1980s, ergonomics came under the influence of cognitive psychology and Cognitive
Systems Engineering to produce what is commonly referred to as cognitive ergonomics.
Acknowledging the significance of human-computer interaction in almost all situations of
interest, a definition might be that cognitive ergonomics is system design with the
characteristics of the joint cognitive system (the operator and the computer) as a
frame of reference (cf. Hollnagel & Woods, 1983). The joint cognitive systems paradigm
is illustrated in Figure 2.

Figure 2. The man-machine system in cognitive ergonomics

The view of cognitive ergonomics, as shown in Figure 2, points to a number of critical
issues in man-machine system design that have not been treated by classical ergonomics.

 With sophisticated automation and computerised support, operators may not always
know about the current goals, or not even be in control of the process. The goal may
nominally be given to the operator, but in many cases the operator can do nothing
but accept it. Determining the extent to which goals should be shared between the

operator and the automatic control system is a crucial problem for cognitive
ergonomics.

 If operators are in control, they clearly need sufficient information to provide the
required degree of control. In cases where a smaller or larger part of the process is
assigned to the automatic control system, the operator will need enough information
to follow what is happening. Determining just how much information operators
need - both when things are normal and when the automatic control systems fail - is
another critical issue for cognitive ergonomics.

 If the automatic system is in control, the operator may at times be isolated from the
operation of the system. The reason is usually that operators cannot react quickly
enough or precisely enough or cannot process the necessary information fast enough
to make the correct choices. Determining when control should be with the operator
and when it should be with the automatic control system is a challenge for cognitive
ergonomics.

Classical ergonomics differed from Taylorism by emphasising that humans worked with
machines rather than as parts of them. Cognitive ergonomics went a step further by
emphasising that humans sometimes work through machines rather than with machines.
This means that the operator interacts with the process through the automatic control
system and through the interface. The operator’s understanding of the process therefore
becomes contingent upon how it is represented in the interface.

Cognitive ergonomics can also be seen as describing a change from closed loop to
open loop control. In classical ergonomics the operator was seen as part of a closed control
loop. The concern was therefore to provide information and controls that enabled him to
function effectively as part of the loop. In cognitive ergonomics many of the closed loop
functions are retained, but in addition the operator also has to function in an open loop. For
instance, when a diagnosis is made or a strategy is developed, the operators basically
engage in open loop control.² It is therefore necessary to support the open loop control by
appropriate design of information presentation, task structuring, and function allocation.

² This is, for instance, demonstrated by the fact that people may sometimes follow an incorrect
diagnosis in spite of conflicting information; feedback is thus neglected.

To achieve this objective we need to examine not only the tasks that operators
perform in running the plant but also the cues and relationships they are perceiving, the
knowledge they are using, and the strategies they are choosing. Behavioural task analysis
(e.g., operational sequence diagrams, timelines, link analysis) focuses on what the
operator does, and not on how or why. Key judgements about the situation and decision
making tend to be lost or de-emphasised in the details of task decomposition. Cognitive
ergonomics can make a significant contribution to the analysis of cognitive processes in
controlling complex systems. Various methods of cognitive task analysis (e.g., Hollnagel,
1991; Hoc & Samurcay, 1992; Redding & Seamster, 1994; Klein et al., 1997) have already
been developed to examine different types of operator knowledge and strategies used to
interpret situations or make decisions about how to choose a course of action.

Many cognitive task analysis methods use interviews to probe the cues and patterns
that operators are using; on other occasions, simulated scenarios are used in which
operators either think aloud while doing their jobs or are interviewed immediately
afterwards to investigate their strategies. Data from these sources can be combined to
produce a record of participant data acquisition, situation assessment, knowledge
activation, intentions, and tasks carried out in a particular scenario. The main objective of
cognitive task analysis methods should be to understand human behaviour from the point
of view of the operator rather than the point of view of the omniscient observer, in order to
reduce difficulties caused by hindsight bias. Woods (1993) cites two challenges to
validity in the development of cognitive task analysis methods, that is, (a) to what extent
does the method change the characteristics and context of the investigated tasks, and (b)
to what extent do the obtained data reflect the underlying cognitive processes, minimising
omissions of important aspects, intrusions of irrelevant features, or distortions of the actual
processes. Further developments in cognitive task analysis would have to cope with these
challenges in order to advance our understanding of cognitive processes in controlling
complex systems.

The agenda for cognitive ergonomics is therefore:

 to predict the situations where problems may arise, particularly problems that
involve the operator’s understanding of the situation.

 to describe the conditions that may either be the cause of the problems or have a
significant effect on how the situations develop.
 to prescribe the means by which such situations can either be avoided or their
impact reduced.
In order to do this, it is necessary to apply concepts about human cognition and theories
about how the human mind works. Yet the development of such concepts and theories is a
task for cognitive psychology and Cognitive Systems Engineering rather than cognitive
ergonomics. In order to comply with this distinction, cognitive ergonomics would need a
well-developed theoretical foundation. Since this is not yet the case (Bainbridge, 1991b),
work in cognitive ergonomics cannot avoid addressing issues of theoretical development as
well as of application.

3. DESIGN AND EVALUATION ACCORDING TO COGNITIVE


ERGONOMICS

The introduction of advanced technology does not automatically and magically enhance
human performance and reliability. To ensure this, it is necessary to evaluate the proposed
design in terms of the potential negative as well as positive effects. Classical ergonomics
has over the years produced a substantial set of principles and methods to evaluate specific
aspects of control room design, although mainly confined to static evaluations. Cognitive
ergonomics enhances these principles and methods by providing a different perspective. In
this section, we present a set of assumptions based on cognitive ergonomics and Cognitive
Systems Engineering that constitute an amended basis for the design and evaluation of
new technologies.

 Intentional - proactive behaviour. Human behaviour is conceived to be intentional as
well as reactive. Rather than being considered as a passive element of the system,
which can be treated in the same way as other components, the operator should be seen
as a goal-directed element. Responding to complex and risky environments requires
proactive behaviour that is usually manifested in terms of “anticipation” strategies or
“planning ahead” skills (Amalberti, 1992). The implication for system design is that
information search and problem solving strategies of machine advisors should keep
pace with the planning behaviour of operators.
 Joint cognitive systems. Advanced technological systems can be seen as artificial
cognitive systems that interact with natural cognitive systems (i.e., humans). This leads
to the notion of the joint system, i.e., of two interacting systems that are described in
common terms - not as machines and not as humans, but as cognitive systems
(Hollnagel & Woods, 1983). The notion of the “joint cognitive system” emphasises the
need for effective collaboration between humans and machine advisors rather than
increased competence in isolation. It also implies that system evaluation should be done
at the level of joint performance. Therefore, operators and “intelligent” machines
should be able to communicate their perception of the system state, intentions, and
strategies, and do that in a timely fashion.

 The use of tools. In advanced control rooms, new tools are required as the whole
concept of the control room has changed. Technological developments (e.g.,
computerisation, process status overviews, and workstations) have consequences for
both the cognitive nature of work and the physical nature of operator activities. In
particular, the predominance of advanced information technology means that operators
become more dependent on specialised tools for information retrieval, control, and
communication. The ways in which operators shape their tools and the characteristics of
tools that affect individual strategies are crucial to our understanding of human
computer interaction (Woods et al., 1990).

 The context of performance. Operator performance takes place in a specific task
context that is characterised by its complexity, uncertainty and risk, but also by the
availability of tools and other forms of support (e.g., training and team work). Context
puts constraints on human performance and accounts for goal conflicts (e.g.,
production over safety), distribution of responsibilities, team priorities, and operational
norms and practices. It is only by recognising the coupling and dependency between
system design and other broader job factors - such as computerised instructions,
training, staffing, communication, and organisational support - that we will be able to
improve the overall system performance.

 Coping with complexity. A main source of complexity is the multiplicity of
information and control offered by new technologies. By introducing more flexibility to
respond to a variety of circumstances, technology may become more complex to

understand (Billings, 1991). Tool development, of course, aims at reducing the
complexity and making the work simpler and more comprehensible, but if each tool is
considered by itself, an overall increase in complexity may easily go unnoticed. We
need therefore to understand the factors that increase domain complexity and the way
that complexity affects joint performance. One way for reducing complexity, for
instance, might be the representation of technical systems in various levels of
decomposition or abstraction (Rasmussen, 1986). Another way could be the provision
of simplified constructs and models that can be made available to operators through
“help” menus or training (Billings, 1991).

 Performance variability. A difficulty in predicting operator performance is its
variability over time. Factors such as, boredom, fatigue, stress, ingenuity or creativity,
availability of support, and random variations in human capacity, are likely to cause
some variability in human performance. Errors resulting from variability of cognition
have been called “residual errors” (Hollnagel, 1993) to discriminate them from “system
induced errors”. The latter are due to interactions with the system, e.g., interface,
training, and procedures, and can be reduced to a certain degree. However, residual
errors are difficult to eliminate and there is a need to “design for error” (Norman, 1988).
This means that the interface design must prevent human errors (error resistance) and
mitigate error consequences (error tolerance).

 Performance and learning criteria. System design is typically evaluated in terms of
performance criteria, such as, number of errors, task duration, accuracy of performance,
and so on. Although these criteria measure the short-term benefits of new technologies,
long-term benefits - such as, transfer of skills to novel situations - are not captured.
Therefore, learning criteria must be included in system evaluation since performance
benefits may not always become apparent in the short term. Continuous operator
interaction with a design could potentially increase on-the-job learning of system
dynamics and transfer of skills.

These amended assumptions have implications for control room design and evaluation of
new technologies. For one thing, we are more concerned with the design of the interaction
in context rather than what is usually referred to as interface. The concept of “interacting”
or “joint” cognitive systems, for instance, implies that new technology must be

accountable for its actions and predictable in its intentions so that problems are defined
and solved in conjunction with the human operator. If operators are not able to understand
the hypotheses tested by a machine advisor and re-direct its line of reasoning in cases of
misperception, they are likely to repeat the diagnostic and fault-compensation strategies by
themselves (Woods, 1986). This means that any opportunities to build upon work carried
out by a machine advisor are missed.

Another example relates to the concept of performance as context bound.
According to this, it is essential for a design to examine the whole work system, that is, the
equipment, the training, the support tools, the crew, the social structure, and the overall
goals of the task. Analyses and remedies that look at isolated segments are likely to lead to
isolated improvements, and may thereby create problems and difficulties at the system
level. As Norman (1990) commented “too often, the implementation of some new
‘improved’ automatic system, warning signal, re-training, or procedure is really a sign of
poor overall design: had the proper system level analysis been performed, quite a different
solution might have resulted”. Taken together, interaction and context of performance
have implications for how the demands of man-machine interaction can be reduced to
allow operators more time to think, for how interaction tasks can be scheduled during low
activity periods, and for how control and pace of interaction can be delegated in
accordance with criteria that take into account the cognitive demands of emergency
situations.

By applying these amended assumptions it is possible to generate a set of design
principles for advanced technologies that are true to the conceptual underpinnings. These
control room design principles, which are described in the following section, have been
organised in relation to five specific aspects of interaction, namely: (1) user-process
interface, (2) level of automation, (3) computer support - in particular procedures, (4)
training and staffing, and (5) crew communication and task distribution. The first three
areas correspond to developments in the new interface technology while the last two areas
examine what measures need be taken to ensure that operators can meet changes in
cognitive demands imposed by new user interfaces, levels of automation, and
computerised procedures or decision aids. The guidelines start by considering each of
these aspects separately, similar to the conventional approaches to human factors
evaluation, and go on to describe the issues derived from considering the overall system.
Although the guidelines cannot be used as the only basis for the evaluation of a control
room design, they will be sufficient if the intention is to consider primarily changes in
cognitive demands to operating personnel introduced by advanced interface technologies.

4. GUIDELINES FOR SPECIFIC DESIGN ASPECTS

This section examines those aspects of control room design that relate to the user-process
interface, level of automation, and computer assistance. It also considers the implications
of various design options on training, staffing, and team communications. It should be
noted that the aim is not to generate exhaustive lists of design guidelines but rather to
provide a framework that may help designers apply the principles of cognitive ergonomics
as described in the previous section.

4.1 User-Process Interface

New technology tends to force serial access to data that previously could be viewed in
parallel and to fragment data across different windows and displays. Thus, knowing where
to look next in the data space available behind the current display window and how to
extract information across multiple windows is a fundamental cognitive activity. Design
guidelines should focus on how to reduce the burden of interacting with the display system
especially at times when operators can least afford new tasks. In this way, operator
attention is not diverted away from the evolution of a critical situation to the interface per
se. Difficult-to-use interfaces may increase task complexity and may lead operators to use
the interface in ways not intended by its design or may constrain available options - e.g.,
limit the display of data to a fixed spatial organisation rather than exploit the flexibility of
the interface (Woods & Sarter, 1992). This is a typical case where functionality may be
reduced to achieve usability (Hollnagel, 1994).

The use of computerised interfaces tends to increase memory demands because
operators may need to remember where the desired data are located and how to get them
displayed in an appropriate manner. A related issue has to do with remembering past
instructions given to the process control system that have not produced any effect as yet
due to the slow response of some processes. Norman (1988) calls this phenomenon the
“conspiracy against human memory” in the design of computerised devices. Providing

reminders and prompts for action becomes therefore an important principle in supporting
human memory in multi-task performance.
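One simple way to counter this “conspiracy against human memory” is a reminder list for instructions whose process effect is delayed. The sketch below is illustrative only; the valve name and latency are assumptions, and a real implementation would be tied to actual process feedback rather than elapsed time.

    import time

    class PendingActionReminder:
        """Track control instructions whose effect is delayed by slow
        process dynamics, and prompt the operator to verify them."""

        def __init__(self):
            self.pending = []  # (description, time issued, expected delay in s)

        def issue(self, description, expected_delay):
            self.pending.append((description, time.time(), expected_delay))

        def reminders(self, now=None):
            """Return prompts for instructions that should have taken effect
            by now, so the operator need not hold them in working memory."""
            now = time.time() if now is None else now
            return [f"Verify effect of: {desc}"
                    for desc, issued, delay in self.pending
                    if now - issued >= delay]

    reminder = PendingActionReminder()
    reminder.issue("open feedwater valve FV-101", expected_delay=120)
    print(reminder.reminders(now=time.time() + 150))
    # -> ['Verify effect of: open feedwater valve FV-101']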

The advances of new interface technology have made it possible to present
operators with vast amounts of process information in various formats (Lind & Larsen,
1995). However, some kind of selection and functional grouping of information is
necessary in order to enable operators to identify the state of the process in a timely
fashion. An interface is a representation of the technical system and, therefore, its usability
must be evaluated in terms of performance criteria (e.g., ease of use, speed and accuracy)
as well as in terms of opportunities for learning. One of the benefits of overview parameter
display systems (Woods et al., 1982) is that they can potentially increase opportunities for
understanding system dynamics and inter-relationships of parameters. These principles are
also important in the case of alarm handling. The need to make operators aware of
important alarms while they are engaged in other activities and provide easy access to a
detailed time sequence of alarms has presented designers with challenges that have not
always been met satisfactorily.

An implication of the concept of joint cognitive systems for interface design is that
the presentation and organisation of information should help operators formulate
alternative hypotheses about the causes of a situation, predict future states of the process
and select appropriate measures to control it (De Keyser & Woods, 1990). If support
for these functions is neglected in the design of the interaction, the result may be
“cognitive tunnel vision” (Reason, 1990). Making operators aware of interesting events
that may take place in other parts of the interface and emphasising “discrimination” cues
between potentially confusing scenarios are examples of means to prevent cognitive
lock-up.

4.2 Level Of Automation And New Technology

Increasing automation is a prominent feature of new technology to the extent that a range
of tasks, previously done by the human operator, have been taken over by machines - and
computers in particular. While many applications have been motivated by a tendency to
experiment with information technology and respond to the challenge of increasing

6/30/2018 Page 15
productivity or safety, few have been motivated by the need to increase the efficiency of
the joint system of human operators and machines.

Transparency of automation is an important design principle that could enable
operators to develop a “mental” picture of how the system functions, and anticipate
correctly what the state of the process will be after the activation of automated devices.
For instance, the response of a crew to an incident can be delayed by their effort to
examine which pieces of equipment have been rendered temporarily inoperable by the
safety system. Reminders about the state of the equipment can support task planning and
improve the response time of operating teams. On the other hand, transparency can also
support operators in anticipating conditions that could lead to activation of automated
safety systems in the near future. When operators are engaged in complex scenarios they
may not always realise that the prevailing conditions may give rise to activation of safety
systems in the short term. Information technology can be used to increase operator
alertness and readiness by providing notification to operators about safety systems that
may be activated if the situation develops in certain ways. Evaluation guidelines need to
observe how transparent automated systems are in their functioning.
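The anticipatory notification idea described above can be sketched as a simple trend extrapolation against activation setpoints. The systems, variables, thresholds and look-ahead horizon below are invented for illustration; real setpoints are plant specific, and an actual predictor display would rest on proper process models rather than linear extrapolation.

    # Invented safety systems and activation setpoints (illustrative only).
    SAFETY_SYSTEMS = {
        "auxiliary feedwater": ("steam generator level", 20.0),
        "reactor trip":        ("coolant pressure", 165.0),
    }

    def upcoming_activations(values, trends, horizon=300.0):
        """Warn about safety systems whose setpoint would be crossed
        within the look-ahead horizon (seconds) if current trends persist."""
        warnings = []
        for system, (variable, setpoint) in SAFETY_SYSTEMS.items():
            value, rate = values[variable], trends[variable]
            if rate == 0:
                continue
            time_to_setpoint = (setpoint - value) / rate
            if 0 < time_to_setpoint <= horizon:
                warnings.append(f"{system} may activate in ~{time_to_setpoint:.0f} s")
        return warnings

    print(upcoming_activations(
        values={"steam generator level": 28.0, "coolant pressure": 155.0},
        trends={"steam generator level": -0.05, "coolant pressure": 0.0}))
    # -> ['auxiliary feedwater may activate in ~160 s']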

The concept of performance variability, as discussed in Section 3, has implications
for the design of automation. “Designing for error” means that automated systems should
be able to both prevent human erroneous actions and mitigate their consequences
(Norman, 1988). In the context of aviation, for instance, pilots may be asked to re-affirm
complex control sequences before execution by automated systems (error resistance). In
other cases, trapping or mitigating pilot errors is achieved by automated compensation that
keeps the aircraft within the flight envelope, thus overriding erroneous pilot actions (error
tolerance).
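The two mechanisms can be contrasted in a few lines of code. The pitch envelope and the confirmation rule below are assumptions made for the sake of the example, not a description of any actual flight control system.

    # Assumed permissible pitch range in degrees (illustrative only).
    PITCH_ENVELOPE = (-15.0, 25.0)

    def command_pitch(requested, confirmed=False):
        """Error resistance: an out-of-envelope command must be re-affirmed
        before execution. Error tolerance: whatever is executed is clamped
        to the envelope, overriding the erroneous part of the command."""
        low, high = PITCH_ENVELOPE
        if not (low <= requested <= high) and not confirmed:
            return None, "re-affirm: requested pitch is outside the envelope"
        executed = max(low, min(high, requested))
        return executed, "executed"

    print(command_pitch(40.0))                  # refused pending re-affirmation
    print(command_pitch(40.0, confirmed=True))  # executed, but clamped to 25.0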

The increasing concern with “mode errors” committed in highly automated systems
raises the issue of appropriate task feedback (Sarter & Woods, 1995). Mode errors occur
when the person believes that the system is in one state or mode when it is actually in
another (Norman, 1983). Thus, operators may initiate a course of action that is appropriate
in one mode but wrong in another one. Feedback about the current automation mode and
the transitions between modes is an important issue. However, as Norman (1990)
cautioned, the task of presenting feedback in an appropriate way is not an easy matter.
Feedback should be provided in a manner similar to what operators use when they discuss
issues among themselves in a joint problem solving activity, i.e., by means of task
structuring rather than information structuring (Hollnagel, 1996). The amount and type of
feedback should ideally match both the interactive style of operators and the nature of the
problem.
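A minimal form of such feedback is to announce every mode transition explicitly, so that the operator's assumed mode cannot silently diverge from the actual one. The sketch below uses invented mode names; presenting the announcement in a task-structured way, as argued above, is the harder design problem that the code does not address.

    class ModeMonitor:
        """Announce automation mode transitions to keep the operator's
        assumed mode aligned with the actual one."""

        def __init__(self, mode):
            self.mode = mode

        def update(self, new_mode):
            if new_mode != self.mode:
                print(f"Automation mode changed: {self.mode} -> {new_mode}")
                self.mode = new_mode

    monitor = ModeMonitor("heading hold")
    monitor.update("vertical speed")  # announces the transition
    monitor.update("vertical speed")  # no transition, no announcement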

4.3 Computerised Procedures And Intelligent Aids

The introduction of advanced technology has made it possible for operators to follow
operating procedures on CRT screens, thereby minimising the efforts needed to retrieve
voluminous hard-copy procedures. Intelligent decision aids have also been introduced that
provide instructions sensitive to the context of the current process state, perform fault
diagnosis, and offer advice on coping with critical process states. The use of computerised
procedures and intelligent decision aids is intended to assist operators in choosing well-
tried strategies and allocating tasks optimally between different team members. Problems
may, however, occur in cases where handling the interaction with these systems increases
workload, where the intentions of the procedures or decision aids are not made clear to
operators, where the data consulted by the system are not transparent to operators, and
where operators lose track of their progress in monitoring or executing procedural steps.

In order to check the reliability of diagnostic tasks undertaken by machine advisors,
it is not sufficient to trace input errors only. Operators need to evaluate the advisor’s
perception of the world as otherwise they may have to repeat the diagnostic process rather
than build on the work already performed by the advisory system. In addition, operators
need facilities to re-direct the machine’s line of reasoning and develop strategies that are
complementary to those originally suggested by the systems. The lack of such facilities
may result in operators working independently instead of jointly with computerised
procedures and intelligent aids.

The concept of intentional and proactive operator behaviour has implications for
the coupling of human and machine decision styles. Amalberti (1992) has identified a
“gap” in man-machine coupling: support systems use formal logic while humans use
qualitative reasoning and heuristics. In the context of aviation, for instance, pilots may try
to increase their confidence in the automation of short-term activities in order to devote
more effort to long-term planning and to preserve their original flight plans. This may
create a potential “gap” in decision styles as operators anticipate future events whilst
machine advisors try to solve problems as they emerge. Pilots may need information and
advice on problems that have not yet emerged or that may just possibly occur in a given
flight scenario. The implication is that machine advisors should be able to anticipate
operator information needs and responses in order to keep pace with operator “planning
ahead” strategies. This is not an easy task but research in common mental models (e.g.,
Rouse et al., 1992; Salas et al., 1994) suggests a way forward to bridge the gap between
human and machine decision styles.

An important criterion for evaluating computerised aids is the type of guidance
offered to operators to choose efficient strategies or cross-check the reliability of
hypotheses and strategies suggested by the system. Checking the reliability of automated
decisions is not an easy task and may result in unacceptable delays. Operators may have to
undertake additional verification tasks - such as, what data have been consulted by the
system, how the system handles unreliable data, whether the system has perceived the state
of the process correctly, and so on. These additional tasks may increase workload in cases
of unfamiliar events due to the uncertainty of the data and the complexity of the required
strategies.

In cognitive ergonomics, computerised procedures can be viewed as cognitive
agents with their own monitoring strategies and intentions that need to be made
transparent to operators. Another issue relates to the compatibility of intentions between
operators and computer assistance, and the handling of disagreements in certain
circumstances. Inadequate handling of disagreements may be time-consuming or force
operators to constrain the use of computerised tools (e.g., explore only a small set of
facilities) or use them in ways not intended by their design (e.g., provide false cues to
defeat the system).

Finally, cognitive ergonomics also considers the organisational culture that can
affect how computerised procedures or decision aids are used. Although the use of
computer assistance is generally at the discretion of the operators, deviations from the
recommended strategy may be investigated at a later stage after the management of the
emergency. The policy of a particular organisation may raise certain expectations about the
use of such aids, which is likely to affect how operators will trade off their criteria for

accepting or rejecting machine advice. Woods et al. (1994) discuss this authority-
responsibility bind in greater detail.

4.4 Training And Staffing

The introduction of advanced technology may change the cognitive demands of operating
teams. For instance, the operator must assume the role of a process manager or supervisor
in an effort to co-ordinate a number of “cognitive agents”, such as, automated systems,
computerised procedures, decision aids, and intelligent interfaces with predictive
capabilities. Despite the lack of opportunities for practice, the operator may also be
required to retain competence in a range of tasks carried out by automation in order to
compensate for possible failures of the technology.

This situation has some clear implications for training. Firstly, operators need to
become familiar with using the new technology. Training is therefore required to help
operators master the new forms of interaction. A major issue is how to train operators to
overcome negative transfer effects that arise from well-established attitudes and strategies
developed in the context of conventional interfaces. Secondly, operators need to develop
managerial or supervisory skills to co-ordinate the performance of the machine agents.
Sheridan (1992) provided extensive discussions on the nature of support that must be
provided to supervisory controllers. One of the unresolved issues is how to provide
operators with real-time support of their performance. Machine advisors must be able to
anticipate decision requirements of operators, as if both operator and machine are
following a common script (Lockhart et al., 1993). Recent work in common mental
models (Rouse et al., 1992) suggests some useful approaches to the development of on-line
support systems.

Thirdly, operators will still be needed to assume manual control or carry out fault
diagnosis in cases where the technology fails or when unanticipated events occur.
Therefore, traditional manual control skills and fault diagnosis must be retained although
they may only be used infrequently. This is not an easy task, since complex cognitive skills
must be developed progressively as in the case of manual systems. Moreover, as Lockhart
et al. (1993) pointed out, it is necessary to move through each stage in the development
from novice to expert. Hopkin (1992) notes that there is a “large cognitive gap” between
operators who develop a solution themselves (i.e., as in manual systems) and those who
choose a solution from a set of computer-generated alternatives. Overall, operators are
required both to master new skills and to retain competence in their traditional process
control skills. It is unfortunate if system designers continue to regard operators of
advanced technology as manual operators who just happen to be in charge of more
complex equipment.

An important side effect of automation documented extensively in the ergonomics
literature is the loss of situation awareness and process control skills (Bainbridge, 1987;
Wiener, 1985; Billings, 1991). When process control tasks are automated to increase
system efficiency, lack of practice with these tasks may deprive operators of opportunities
for remaining in close contact with the process. Many manual control skills and knowledge
of process dynamics are also likely to deteriorate. There are two ways to redress the loss of
situation awareness and skills. The design solution is to adopt the principles of adaptive
task allocation so that operators are not barred from practice of a certain set of tasks. A
different approach is the training solution where refresher training is provided to help
operators maintain their skills. A combination of these approaches should be considered as
a measure of reducing the problem of skill loss.

4.5 Team Communication And Task Distribution

The introduction of advanced technology has important implications for the total amount
of communications required and the distribution of tasks to different personnel roles.
Computer-mediated communication tends to replace verbal communication. The obvious
advantage is more accurate transmission of messages, but this may also suppress or
eliminate important cues about the activities of other team members by removing the party
line effect. Tracking the process status may be difficult if it is possible for operators to
interact with the system without other operators knowing about it. This problem is
exacerbated when two experienced operators have different strategies for working with the
system; if the normal communication is lacking it may be difficult to anticipate what the
other operator is doing and when he does it. The result of these changes is loss of
awareness about the activities of other members.

The extent to which the user interface may facilitate or hinder shared cognition
across the crew is an area that needs to be addressed by the design guidelines (Salas et al.,
1994). A similar area concerns the use of computerised procedures that may make it
difficult for operators to keep track of what the other crew members are doing. Further
problems in using procedures in the context of crew dynamics relate to task delegation
among different crew members. Procedures usually make few assumptions about staffing
levels in the control room and may overload operators by specifying a number of
procedural tasks to be carried out in parallel.

Many traditional team roles - such as, assistance with task overload, development
of trust and confidence in team work, judgements of individual strengths and weaknesses -
may become more difficult for operators to fulfil if computer assistance only helps with
individual tasks. There may also be restrictions to the role of supervisor since
computerised systems can partly disguise substandard performance; an operator may thus
become inadequate or incompetent without this becoming apparent to anyone else. This
may reduce opportunities for supervisors to make judgements regarding career
developments, promotions, and re-training needs.

This kind of “reduced observability” may have an impact on the development of
norms and professional standards. As Hopkin (1995) remarks, “the development of norms
and standards relies on opportunities for (human) controllers to demonstrate their merit to
colleagues by observable actions, from which each individual’s competence and style can
be appraised”. This is a “quiet” appraisal based on discernible feedback from colleagues
and can potentially increase self-esteem in one’s own professional standards. Any form of
computerised interface that reduces the horizon of observation may change the
mechanisms by which professional standards are maintained or even remove them
altogether. This has important implications for the joint performance. Technology that
makes jobs easier may not necessarily increase task performance if it simultaneously
diminishes existing norms and standards or if operators seek to maintain ineffective norms
and standards acquired in the context of old equipment (Hopkin, 1995). In either case,
technology may encounter serious problems of acceptability among operators.

5. GUIDELINES FOR SYSTEM LEVEL FUNCTIONS

In addition to the specific functions, guidelines need to address the system level functions,
that is, the implications of the coupling or interaction between specific design features.
The most appropriate approach is to use coupled (simulation) models of the system and the
operators, to enable a systematic investigation of the dynamics of the interaction. Although
such joint system simulations are being pursued in a number of areas (e.g., Corker &
Smith, 1993; Fujita et al., 1993; Sasou et al., 1993) they are still at an early stage of
development. A much simpler approach is to map the dependencies between the level of
automation, the “hard” task features (e.g., interface design and computer support), and the
“soft” task features (e.g., training, task distribution, and communication). There are two
different approaches to map the dependencies between various design features. In the most
prevalent technology-driven approach (as shown in Figure 3), the level of automation is
mainly determined by concerns of safety and efficiency, together with technological
innovations.

Figure 3. Main factors in interface design in a technology driven approach

The level of automation has consequences for the “hard” and the “soft” task features. The
“soft” aspects of the control room are usually the ones that must provide the necessary
flexibility and adaptability to meet the constraints of the “hard” aspects. Typically, “hard”
task features are independent factors while “soft” task features are dependent factors. For
instance, it has been a common practice to consider how training could meet interface
problems but not how the design could accommodate operator strategies. The alternative is
to use a cognitive ergonomics approach which can reveal more subtle interactions. This
approach is outlined in Table 1 as a first-order matrix of relationships between the main
factors in interface design.

Table 1. Basic matrix of the links between the main factors in interface design.

The matrix in Table 1 constitutes the first layer in a multi-layered description of the links
between various design aspects that can be used as the basis for a thorough analysis and,
ultimately, the development of detailed design guidelines. Read by rows, Table 1 describes
how the factor named in the first cell of each row influences the factors of each column.
For each cell (except for the diagonal) the general influence or dependency can be

described. The content of Table 1 is a first step to assess the impact of a design, at least in
terms of a general direction.

It is possible to develop this notion further in order to provide a more detailed set
of questions to guide design evaluation. Initially, this can be done by creating a second
order matrix, as shown in Table 2. The matrix describes the influence that a factor of a row
exerts on the factors of each column. The second row of the matrix, for instance, shows the
effects of the level of automation (LoA) on each of the four other design aspects. In a
discussion of these interactions that follows, the cells of Table 2 are identified first by row
and then by column (e.g., cell [row:column]) which reflects the direction of influence.

Table 2. A second order matrix of the links between the main factors in interface design

Table 2 contains four main clusters, which are expansions of the main factors shown in
Table 1. Cluster #1 shows the interaction among the three main elements of advanced
technology, namely, level of automation [A], user-process interface [I], and computerised
support tools [S]. Automation is seen as having an impact both on the user-process
interface and on computerised tools (cells [A:I] and [A:S]). The user interface, for
instance, must reflect new demands to status indication (cf. Section 4.2 on how to increase
automation transparency and overcome mode errors) while computerised procedures must
take account of any changes in diagnosis, planning or task scheduling introduced by
automated functions. On the other hand, automation is independent of user interface and
computerised support since the latter design aspects are usually changed to make
automation more acceptable and easier to use (cells [I:A] and [S:A]).

The interaction between user-process interface and computerised procedures is
reciprocal (cells [S:I] and [I:S]) since the former is a representation of the technical system
while the latter specify instructions for changing the state of the system. As discussed in
Section 4.1, the user-process interface must help operators in formulating alternative
hypotheses and make predictions; this can be achieved by examining how computerised
procedures and decision aids can be integrated with representations of the system on the
user-process interface (cell [S:I]). On the other hand, the interface by which the operator
interacts with computerised procedures should be compatible with the user-process
interface (cell [I:S]) so that overall workload in handling the two interfaces is kept to a
minimum.
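The row-by-column reading of the matrix lends itself to a simple look-up structure. The sketch below encodes a handful of the dependencies discussed in this section, paraphrased rather than quoted; the full content of Table 2 is not reproduced here.

    # Factor abbreviations as in Table 2: A = level of automation,
    # I = user-process interface, S = computerised support,
    # T = training and staffing, C = communication and task distribution.
    INFLUENCES = {
        ("A", "I"): "new demands on status indication and mode feedback",
        ("A", "S"): "procedures must track automated diagnosis and scheduling",
        ("S", "I"): "integrate decision aids with the system representation",
        ("I", "S"): "keep the two interfaces compatible to limit workload",
        ("A", "T"): "train interaction with, and takeover from, automation",
        ("T", "A"): "operator strategies constrain acceptable automation levels",
        ("C", "I"): "the interface should support shared cognition in the crew",
    }

    def influence(row, col):
        """Look up cell [row:column], i.e. the influence of 'row' on 'col'."""
        return INFLUENCES.get((row, col), "no direct dependency recorded")

    print(influence("A", "I"))  # reads Table 2 by row, then column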

Table 2 also describes two different principles that may be used to introduce new
technologies. In cluster #2 changes in training, staffing and crew dynamics are used to
meet design requirements (or even compensate for design deficiencies), corresponding to
the principles of “fitting the man to the machine”. The alternative represented by cluster #4
could be called “fitting the machine to the man”. Here design features of automation, user-
process interface, and computerised support take into account existing operator strategies
and crew dynamics. The latter approach requires tailoring the technology to established
work practices or the needs of a specific organisation; following this approach is likely to
minimise the adverse aspects of new automation, but it is rarely used in practice since “off-the-shelf” technologies are cheaper to apply.

To explore the capabilities of new technology and exercise a certain degree of
control over the side-effects of automation, some changes need to be made in the training
of operators, their task allocation, and their communication practices (Bainbridge, 1987;
Hopkin, 1995). In this sense, Section 4.4 examined various training aspects that are
important to enable operators to interact with new automation, the process interface, and
any forms of computer assistance while maintaining at the same time their ability to
intervene when the technology fails (cells [A:T], [I:T] and [S:T]). Proper handling of
disagreements between operator strategies and computer assistance (see Section 4.3) is
another important factor that must be addressed in training. The discussion on the “horizon
of observation” (see Section 4.5) identified various changes in team communications and
operator supervision introduced by automation, process interfaces and computer assistance
(cells [A:C], [I:C] and [S:C]). These changes in team roles and communication pose a
serious dilemma: to what extent should old practices and roles be maintained through
simulator training, and to what extent should new team roles be adopted and trained?
Further efforts in cognitive ergonomics are needed to address this dilemma of
compatibility between existing skills and “innovation”.

With respect to cluster #4, various examples have been cited in Sections 4.3 and
4.4 on designing user interfaces and computerised tools to achieve compatibility with the
operators’ mental models of the system and their operating strategies (cells [T:I] and
[T:S]). The issue of potential “gaps” in man-machine systems due to different styles of
reasoning and decision making (Amalberti, 1992) has implications for designing
computerised tools that may anticipate operator performance (Lockhart et al., 1993).
Section 4.5 also raised the point that the design of user interfaces and procedures should
facilitate shared cognition among team members (cells [C:I] and [C:S]). In this respect,
communication aspects influence the design of user interface and procedural support.

Further discussion is needed about the flexibility of automation regarding the
degree of control between humans and machines (cells [T:A] and [C:A]). Autonomous
operation is the highest level of automation and may foreclose operator authority to
override automated decisions. Many researchers have adopted a very sceptical view on
this, favouring intermediate levels of automation, such as management by delegation and
consent (Billings, 1991), where the operator has the ultimate authority to permit execution
of tactical and strategic rules respectively. The implications of this line of thinking are that
operator strategies - as specified by training and task distribution - do affect automated
decisions. Furthermore, some degree of flexibility in automation may accommodate
differences in individual strategies and levels of operator competence. For instance, pilots
may initially prefer the option of management by consent but as experience accumulates
they may switch to management by delegation, which allows a higher degree of human
control.
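To make the distinction between these intermediate levels concrete, the following sketch encodes a simplified four-level scale and an illustrative authority check. The level names follow Billings (1991), but the numeric ordering and the dispatch logic are assumptions introduced here for illustration, not prescriptions from the literature.

```python
from enum import Enum

class LoA(Enum):
    MANUAL = 1                    # the operator executes everything
    MANAGEMENT_BY_CONSENT = 2     # automation proposes; operator approves each action
    MANAGEMENT_BY_DELEGATION = 3  # operator delegates a task; automation executes it
    AUTONOMOUS = 4                # automation acts without operator authority

def may_execute(level: LoA, consented: bool, delegated: bool) -> bool:
    """Illustrative authority check: may the automation execute an action?"""
    if level is LoA.MANUAL:
        return False
    if level is LoA.MANAGEMENT_BY_CONSENT:
        return consented           # explicit approval for each action
    if level is LoA.MANAGEMENT_BY_DELEGATION:
        return delegated           # blanket approval for the delegated task
    return True                    # autonomous: no operator veto

# A pilot gaining experience might move from consent to delegation:
print(may_execute(LoA.MANAGEMENT_BY_CONSENT, consented=False, delegated=True))    # False
print(may_execute(LoA.MANAGEMENT_BY_DELEGATION, consented=False, delegated=True))  # True
```

Under such a scheme, “flexibility of automation” amounts to letting the crew move between the two intermediate levels as competence and preferences evolve.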

Finally, cluster #3 represents the reciprocal interaction between, on the one hand,
training and staffing and, on the other, crew communications and task delegation. For
instance, communication links - as affected by the introduction of advanced technologies -
may influence judgements of individual strengths and weaknesses by supervisors or team
colleagues and hence recognition of training needs (cell [C:T]). Reduced possibilities for
observation in team work are likely to affect professional norms, self-esteem, and training
needs. On the other hand, changes in staffing due to automation may have consequences
for communication and workload (cell [T:C]), especially in tasks seemingly unrelated to
the introduced automated functions (Lee & Morgan, 1994).

The contents of the cells in Table 2 describe the relations between the various
design aspects in a general way. In many cases it may be possible to develop or identify
more precise descriptions of the dependencies and represent these in terms of models or
rules. It is possible to envisage further layers of the matrix containing more precise
descriptions of the dependencies between cells. If the level of detail is increased, the
matrix layer will eventually correspond to a complete application model for design
decisions. Yet even on a high level of abstraction, the formalised representation of the
interactions between the main aspects serves as a useful tool for control room design. By
using this type of approach, the system designer is constantly reminded of the fact that the
design is incomplete if it does not include a consideration of the behaviour of the joint
system. To illustrate the practicality of this approach, a sample of questions has been
specified in the attached Appendix to guide the development of design guidelines at the
conceptual level. These questions can be translated into a set of design guidelines on the
basis of a more complete application model.
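As a speculative illustration of what such a more complete application model might look like, the sketch below attaches simple rules to individual cells of the matrix; each rule maps features of a proposed design onto evaluation questions of the kind listed in the Appendix. The design attributes used here ('modes', 'mode_indicator') are invented for the example and are not part of the framework itself.

```python
# A speculative sketch of a further 'layer': each cell of the matrix is
# refined into rules that generate evaluation questions from a design
# description. The attribute names are illustrative assumptions.

def rule_A_I(design: dict) -> list:
    """Cell [A:I]: automation places new demands on status indication."""
    questions = []
    if design.get("modes", 0) > 1 and not design.get("mode_indicator", False):
        questions.append("Does the interface provide feedback about the "
                         "currently activated mode of automation?")
    return questions

# Each cell can carry several rules; further cells are added analogously.
CELL_RULES = {("A", "I"): [rule_A_I]}

def evaluate(design: dict) -> list:
    """Collect every evaluation question triggered by a design description."""
    return [question
            for rules in CELL_RULES.values()
            for rule in rules
            for question in rule(design)]

print(evaluate({"modes": 3, "mode_indicator": False}))
```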

6. CONCLUSIONS

The changing nature of industrial processes has affected the role of the operator. Classical
ergonomics was based on the assumption that all significant detail of the task could be
anticipated and addressed by the design, hence that the operator worked in a closed loop. It
was therefore - in principle - possible to distribute the tasks between operators and
automatic control systems and to specify information presentation and control such that an
appropriate performance would result. Cognitive ergonomics acknowledges that the
complexity of the system may defeat the design principles, and that operators therefore
also engage in open loop control. This can happen when automation fails or when
unanticipated situations occur. It can also happen because the demand for closed loop
control is reduced and the nature of work therefore induces a tendency to look ahead and
plan. Designing for open loop control is significantly different from designing for closed
loop control. In open loop the emphasis of the system design must shift from feedback and
manipulation to providing the operator with an understanding of the process and
facilitating the activation of appropriate knowledge.

The characteristics of dynamic systems mean that ergonomic decisions cannot be
based solely on static descriptions of the capabilities of humans and machines. It is
necessary to develop a framework that allows the essential aspects of the dynamics to be
captured and utilised, whether in design or evaluation. A possible way of doing this is by
means of systematically constructed influence networks or diagrams, which explicitly
define the dependencies between automated functions, operator tasks, computer assistance,
crew dynamics, and so on. Although it is reasonable to assume that the dynamics of a joint
system can only be fully captured by a representation that is in itself dynamic - such as a
joint system simulation - there are practical constraints that make such a solution
unattractive. It is nevertheless important that the concepts and procedures that are part of
cognitive ergonomics recognise the nature of human cognition - that cognition takes place
in a context and that human actions are intentional. The aim is therefore to retain the
essential features of a dynamic representation in a framework that can be used as a basis
for assessing design proposals and solutions.
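As an illustration only, such an influence network could be approximated by a directed graph over which the side-effects of a design change are traced; the edges below are one possible, deliberately abridged, reading of Table 2, not a validated model.

```python
from collections import deque

# Hypothetical influence network: each design aspect points to the aspects
# it influences (edges abridged from Table 2, for illustration only).
EDGES = {
    "automation":       ["interface", "computer support", "training", "communication"],
    "interface":        ["computer support", "training", "communication"],
    "computer support": ["interface", "training", "communication"],
    "training":         ["communication"],
    "communication":    ["training"],
}

def affected_by(changed_aspect: str) -> set:
    """Trace all aspects transitively affected by changing one aspect."""
    seen, queue = set(), deque([changed_aspect])
    while queue:
        for successor in EDGES.get(queue.popleft(), []):
            if successor not in seen:
                seen.add(successor)
                queue.append(successor)
    return seen

# A proposed change to the automation touches every other design aspect:
print(affected_by("automation"))
```

Tracing the closure of a change in this way does not capture the dynamics themselves, but it preserves the dependencies that a static checklist would lose.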

The design framework advocated here is based on a structured approach that has its
origins in the disciplines of cognitive ergonomics, cognitive psychology, and Cognitive
Systems Engineering (Hollnagel & Woods, 1983). The designer is first provided with a
number of design principles or concepts, as described in Section 3, which permeate the
proposed framework. The actual design methodology is specified at a generic level in
terms of a number of qualitative layers which examine the dependencies of various design
aspects in the context of operator tasks. It is conceivable that more specific guidelines can
be produced when Table 1 and Table 2 are expanded to more detailed layers of analysis.
This approach is also useful in helping designers to structure existing guidelines into
functional groups that correspond to the various “hard” and “soft” task features and
thereby to improve the usability of guidelines.

A final comment should be made about the treatment of “soft” task features in
system design. It has often been the case that training and team work have been used to
compensate for design deficiencies. In the approach advocated here, the solution is rather
to include a degree of “design adaptability” to accommodate the problem-solving
strategies of operators, their different levels of competence, and the team dynamics. This
does not mean that the capabilities of new technology should be constrained to match
existing skills and types of knowledge. It rather serves as a caution that changes in the
style of interaction with new technology and with team members are likely to impact upon
the wider organisational culture of training, supervision, team work, and professional
norms and standards. Striking a balance between adaptability and innovation remains a
challenging design issue that calls for further developments in cognitive ergonomics.

Acknowledgement.

This paper has its origin in work done in a project on “Assessment Of Changes In
Cognitive Demands On Operations Personnel”, performed for the Nuclear Installations
Inspectorate, UK. The contribution of Dr. Craig Reiersen (NII) is gratefully acknowledged.

7. REFERENCES

Algera, J. A., Koopman, P. L. & Vijlbrief, H. P. (1989). Management strategies in introducing
computer-based information systems. Applied Psychology: An International Review, 38,
87-103.
Amalberti, R. (1992). Safety in process control: An operator-centred point of view. Reliability
Engineering and System Safety, 38, 99-108.
Bainbridge, L. (1987). Ironies of automation. In J. Rasmussen, K. Duncan & J. Leplat (Eds.),
New technology and human error. Chichester: Wiley.
Bainbridge, L. (1991a). Multiplexed VDT display systems: A framework for good practice. In
G. R. S. Weir & J. L. Alty (Eds.), Human-computer interaction and complex systems. New
York: Academic Press.
Bainbridge, L. (1991b). The “cognitive” in cognitive ergonomics. Le Travail Humain, 54(4),
322-337.
Billings, C. E. (1991). Human centered aircraft automation: A concept and guidelines. NASA
Technical Memorandum 103885.
Carter, R. J. (1992). Human factors survey of advanced instrumentation and controls
technologies in nuclear plants. Nuclear Engineering and Design, 134, 319-330.
Corker, K. M. & Smith, B. R. (1993). An architecture and model for cognitive engineering
simulation analysis: Application to advanced aviation automation. Proceedings of the
AIAA Computing in Aerospace Conference, October 21, 1993, San Diego, CA.
De Keyser, V. & Woods, D. D. (1990). Fixation errors: Failures to revise situation assessment
in dynamic and risky systems. In A. G. Colombo & A. S. de Bustamante (Eds.), System
reliability assessment. The Netherlands: Kluwer.
Dowell, J. & Long, J. (1989). Towards a conception for an engineering discipline of human
factors. Ergonomics, 32(11), 1513-1535.
Fitts, P. M. (1951). Human engineering for an effective air navigation and traffic control
system. Washington: National Research Council.
Fujita, Y., Yanagisawa, I., Itoh, J., Yamane, N., Nakata, K. & Kubota, R. (1993). Modelling
operator with task analysis in mind. Proceedings of the ANS Top. Meeting on NPP
Instrumentation, Control and Man Machine Interface Technologies, September 21-23,
Illinois.
Hoc, J. M. & Samurcay, R. (1992). An ergonomic approach to knowledge representation.
Reliability Engineering and System Safety, 36, 217-230.
Hollnagel, E. (1991). Cognitive ergonomics and the reliability of cognition. Le Travail Humain,
54(4), 305-321.
Hollnagel, E. (1993). Human reliability analysis: Context and control. London: Academic
Press.
Hollnagel, E. (1994). Control room design and human reliability. Proceedings of 1st
International Conference on Tunnel Control and Communication, November 28-30, Basel,
Switzerland.

Hollnagel, E. (1996). Decision support and task nets. In S. Robertson (Ed.) Contemporary
Ergonomics 1996 (Proceedings of the Annual Conference of the Ergonomics Society, April
10-12, Leicester). London: Taylor & Francis.
Hollnagel, E. & Woods, D. D. (1983). Cognitive systems engineering: New wine in new
bottles. International Journal of Man Machine Studies, 18, 583-600.
Hopkin, V. D. (1992). Human factors issues in air traffic control. Human Factors Bulletin,
35(6), 1-4.
Hopkin, V. D. (1995). Human factors in air traffic control. London: Taylor & Francis.
Klein, G., Kaempf, G. L., Wolf, S., Thordsen, M. & Miller, T. (1997). Applying decision
requirements to user-centered design. International Journal of Human-Computer Studies,
46, 1-15.
Langlotz, C. P. & Shortliffe, E. H. (1983). Adapting a consultation system to critique user
plans. International Journal of Man Machine Studies, 19, 479-496.
Lee, J. D. & Morgan, J. (1994). Identifying clumsy automation at the macro level:
Development of a tool to estimate ship staffing requirements. Proceedings of the Human
Factors and Ergonomics Society 38th Annual Meeting, October 24-28, Nashville,
Tennessee.
Lind, M. & Larsen, M. N. (1995). Planning support and the intentionality of dynamic
environments. In J. M. Hoc, P. C. Cacciabue & E. Hollnagel (Eds.), Expertise and
technology: Cognition and human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Lockhart, J. M., Strub, M. H., Hawley, J. K. & Tapia, L. A. (1993). Automation and
supervisory control: A perspective on human performance, training, and performance
aiding. Proceedings of the Human Factors and Ergonomics Society 37th Annual Meeting.
October 11-15, Seattle, Washington.
Meister, D. (1985). Behavioural analysis and measurement methods. NY: Wiley.
Mossier, K. L., Palmer, E. A., & Degani, A. (1992). Electronic checklists: Implications for
decision making. Proceedings of the Human Factors Society 36th Annual Meeting. October
12-16, Atlanta, Georgia.
Norman, D. A. (1983). Design rules based on analyses of human error. Communications of the
ACM, 26(4), 254-258.
Norman, D. A. (1988). The psychology of everyday things. NY: Basic Books.
Norman, D. A. (1990). The problem with automation: Inappropriate feedback and interaction,
not “over-automation”. Philosophical Transactions of the Royal Society of London, B 327, 585-593.
O’Hara, J. M., Brown, W. S., Stubler, W. F., Wachtel, J. A. & Persensky, J. J. (1995). Human-
system interface design review guideline (NUREG-0700). Washington, DC: Nuclear
Regulatory Commission.
O’Hara, J. M. & Hall, R. E. (1992). Advanced control rooms and crew performance issues:
Implications for human reliability. IEEE Transactions on Nuclear Science, 39(4), 919-923.

Rasmussen, J. (1986). Information processing and human machine interaction: An approach to
cognitive engineering. Amsterdam: North Holland.
Reason, J. (1990). Human error. Cambridge, UK: Cambridge University Press.
Redding, R. E. & Seamster, T. L. (1994). Cognitive task analysis in air traffic controller and
aviation crew training. In N. Johnston, N. McDonald & R. Fuller (Eds.), Aviation
psychology in practice. Aldershot: Avebury Technical.
Rouse, W. B., Cannon-Bowers, J. A. & Salas, E. (1992). The role of mental models in team
performance in complex systems. IEEE Transactions on Systems, Man, and Cybernetics,
22, 1296-1308.
Salas, E., Stout, R. J. & Cannon-Bowers, J. A. (1994). The role of shared mental models in
developing shared situational awareness. In Gibson, R. D., Garland, D. J. & Koonce, J. M.
(Eds.), Situational awareness in complex systems. Daytona Beach, FL: Embry-Riddle
Aeronautical University Press.
Sarter, N. B. & Woods, D. D. (1995). How in the world did we ever get into that mode? Mode
error and awareness in supervisory control. Human Factors, 37(1), 5-19.
Sasou, K., Yoshimura, S., Takano, K., Iwai, S. & Fujimoto, J. (1993). Conceptual design of
simulation model for team behaviour. In E. Hollnagel & M. Lind (Eds.), Designing for
simplicity. Proceedings of 4th European Conference on Cognitive Science Approaches to
Process Control, August 25-27, Hillerød, Denmark.
Sheridan, T. B. (1992). Telerobotics, automation, and human supervisory control. Cambridge,
MA: MIT Press.
Wiener, E. (1985). Beyond the sterile cockpit. Human Factors, 27(1), 75-90.
Woods, D. D. (1986). Paradigms for intelligent decision support. In E. Hollnagel, G. Mancini
& D. D. Woods (Eds.), Intelligent decision support in process environments. Berlin:
Springer-Verlag.
Woods, D. D. (1993). Process-tracing methods for the study of cognition outside of the
experimental psychology laboratory. In G. A. Klein, J. Orasanu, R. Calderwood & C. E.
Zsambok (Eds.), Decision making in action: Models and methods. NJ: Ablex Publishing.
Woods, D. D., Wise, J. A. & Hanes, L. F. (1982). Evaluation of safety parameter display
concepts (EPRI NP-2239). Palo Alto, CA: Electric Power Research Institute.
Woods, D. D., Roth, E. M. & Bennett, K. B. (1990). Explorations in joint human-machine
cognitive systems. In S. P. Robertson, W. Zachary & J. B. Black (Eds.), Cognition,
computing, and cooperation. New Jersey: Ablex Publishing.
Woods, D. D. & Sarter N. B. (1992). Evaluating the impact of new technology on human-
machine cooperation. In J. A. Wise., V. D. Hopkin & P. Stager (Eds.), Verification and
validation of complex systems: Human factors issues. Berlin: Springer-Verlag.
Woods, D. D., Johannesen, L. J., Cook, R. I. & Sarter, N. B. (1994). Behind human error:
Cognitive systems, computers and hindsight. Wright-Patterson AFB, OH: Crew Systems
Ergonomics Information Analysis Center.

[Figure 1 (diagram) relates the following elements of a design method: structured analysis; goals of application; constraints and functional requirements; design guidelines (concepts, principles, criteria); matching procedures (or rules); design specification; design prototype; implementation and auditing; evaluation; and technological options.]

Figure 1. The main elements in a design method


[Figure 2 (diagram): the operator (goals, explanations, strategies) is coupled to the automatic control system and the process through displays & measurements and controls; the process yields results or products and is subject to disturbance or noise.]

Figure 2. The man-machine system in cognitive ergonomics.


[Figure 3 (diagram): new technology, driven by safety needs and efficiency needs, shapes the level of automation, the “hard” task features (interface, support), and the “soft” task features (task distribution, communication).]

Figure 3. Main factors in interface design in a technology driven approach


(Read by rows: each cell describes how the row factor influences the column factor; diagonal cells are empty.)

Level of Automation (LoA):
- on “hard” task features: Interface and support systems must change to reflect the LoA.
- on “soft” task features: “Soft” task features must change to meet the demands posed by changes in the LoA.

“Hard” task features:
- on the LoA: The LoA is influenced by technological innovation, but not by changes to the interface or computer support.
- on “soft” task features: “Soft” task features must adapt to the demands posed by changes in “hard” task features.

“Soft” task features:
- on the LoA: The LoA can be influenced by “soft” task features.
- on “hard” task features: “Hard” task features can be affected by changes to “soft” task features.

Table 1. Basic matrix of the links between the main factors in interface design
(Rows denote influencing factors, columns denote influenced factors: A = level of automation (LoA); I = user-plant interface; S = computer support; T = training and staffing; C = communication and task distribution. Rows A/I/S against columns A/I/S form cluster #1, against columns T/C cluster #2; rows T/C against columns T/C form cluster #3, against columns A/I/S cluster #4.)

(A) Level of automation (LoA):
- [A:I] Yes. New demands for status indication and automation intervention.
- [A:S] Yes. The LoA impacts diagnosis and planning/task scheduling.
- [A:T] Yes. The crew must have sufficient knowledge and skills to use and understand automation.
- [A:C] Yes. Operator tasks are largely defined by the LoA.

(I) User-plant interface:
- [I:A] No. The interface is often used to increase acceptability of automation.
- [I:S] Yes. The interface of computer support must be integrated with the user-plant interface.
- [I:T] Yes. The crew must have sufficient knowledge and skills to operate the plant.
- [I:C] Yes. The interface may change crew roles and functions.

(S) Computer support:
- [S:A] No. Computer support is often used to assist the introduction of automation.
- [S:I] Yes. Instructions and the representation of the technical system affect the user-plant interface.
- [S:T] Yes. The crew must have sufficient knowledge and skills to use computer support.
- [S:C] Yes. Computer support may change operator roles and communication needs.

(T) Training and staffing:
- [T:A] Yes. Flexible automation can accommodate different strategies and levels of competence.
- [T:I] Yes. Interface and representation design must take account of users’ models of the technical system.
- [T:S] Yes. Computer support can accommodate users’ strategies and decision styles.
- [T:C] Yes. Staffing may have consequences for team communications and workload.

(C) Communication and task distribution:
- [C:A] Yes. Flexible automation can accommodate different task distributions.
- [C:I] Yes. The interface can adapt to take account of preferred communication styles.
- [C:S] Yes. Computer support can adapt to take account of professional standards and practices.
- [C:T] Yes. Team communication and the horizon of observation affect recognition of training needs.

Table 2. A second order matrix of the links between the main factors in interface design
APPENDIX
A sample of questions to guide the development of design guidelines at the conceptual level

PROBLEM RECOGNITION
- Are indications sufficient to enable operators to make an accurate assessment of the situation?
- Is information presented in a way that minimises mental transformations (e.g., mental arithmetic) so that data are directly incorporated into the diagnostic and decision making processes?
- Are alarms presented in accordance with certain prioritisation criteria?
- Are trends and overviews provided to facilitate problem recognition?
- Are there any facilities for checking the reliability of important displays and remote sensors?

FORMULATING AND TESTING HYPOTHESES
- Does the interface help operators to distinguish the current initiating event from other commonly occurring events which can cause confusion?
- Are there adequate cues to notify operators of emerging critical alarms or information while they are searching other parts of the interface?
- Is redundant information provided in case of possible failures of display devices?
- Are operators provided with facilities that display relationships between critical parameters?

WORKLOAD IN HANDLING THE INTERFACE
- Do operators have to retain large sets of display data in their memory and relate them to information located at different areas of the interface?
- Is it difficult to remember where desired data can be found in the interface space?
- Are there adequate visual icons or memory aids to help operators maintain awareness of the location of the inspected units within the overall system?
- Are there memory aids to help operators keep track of all commands given to the system, particularly those that take a long time to produce effects?
- Does the interface require complex commands which are time consuming or difficult to remember, particularly at times of high workload?
- Does the interface add new information management tasks which may divert operators’ attention from the task of supervising the plant?
- Do these additional tasks tend to congregate at times when operators have to devote their attention to critical stages of the event evolution?

MONITORING PROGRESS - FEEDBACK
- Are there adequate means to guard against unintended operator inputs?
- Does feedback confirm that the desired actions have been completed successfully by the system?
- How does the system handle feedback presented on screens which are not activated at the time of the interaction?
- Are there adequate cues to help operators keep track of their progress in executing tasks?

AUTOMATION MODES
- Is adequate feedback provided about automated devices failing to activate or shut off particular equipment?
- Are there any memory aids about equipment that has been rendered temporarily inoperable by the automation logic?
- Is access provided to the conditions under which automated systems may be activated?
- Does the interface provide feedback about the currently activated mode of automation?
- Are there any indirect methods for activating alternative automation modes, and is this mode transition displayed to operators?

MANAGING THE INTERACTION WITH COMPUTERISED SUPPORT TOOLS
- Is it made clear whether the procedural recommendations are mandatory or discretionary?
- How are disagreements in strategies between operators and the procedural logic handled?
- Are explanations provided for the diagnosis or procedural strategy proposed by the system?
- Are instructions provided as to the similarity of a diagnosed event with other potentially confusing scenarios?
- Are warnings provided about other possible strategies that may be required if the problem escalates?
- Do computerised procedures rely on information not displayed to operators?
- Are operators notified when the procedures are no longer relevant due to changing circumstances?
- Is the procedural logic transparent to operators throughout the interaction (process versus product feedback)?

LEARNING AND COMPETENCE
- Are the system functions and automation modes predictable and easy to train?
- Is the design of new system functions likely to cause negative transfer of skills due to experience with the old system?
- Does the presentation and organisation of information provide any opportunities for enhancing learning on the job?
- Does the design of automated systems allow for adaptive task allocation to provide practice opportunities for skills?
- Have any inconsistencies between procedural logic and operator strategies been identified for training?
- Have changes in system and interface design been examined from the perspective of operator re-qualification needs?

CREW COMMUNICATION
- Does the interface enable the crew to maintain communications and delegate responsibilities?
- Is it difficult to keep track of the control actions of other operators?
- Is it possible to re-configure the interface to accommodate changes in staffing if required by shift leaders in emergency scenarios?
- Can automated systems and computerised procedures accommodate differences in the staffing of the control room?
