THE NATIONAL ACADEMIES PRESS

This PDF is available at http://nap.edu/25465

Airport Incident Reporting Practices (2019)

DETAILS

8.5 x 11 | PAPERBACK
ISBN 978-0-309-48032-1 | DOI 10.17226/25465

CONTRIBUTORS

Stephen M. Quilty; Airport Cooperative Research Program; Transportation Research Board; National Academies of Sciences, Engineering, and Medicine

SUGGESTED CITATION

National Academies of Sciences, Engineering, and Medicine. 2019. Airport Incident Reporting Practices. Washington, DC: The National Academies Press. https://doi.org/10.17226/25465.



Copyright © National Academy of Sciences. All rights reserved.


Airport Incident Reporting Practices

AIRPORT COOPERATIVE RESEARCH PROGRAM

ACRP SYNTHESIS 95
Airport Incident Reporting Practices

A Synthesis of Airport Practice

Stephen M. Quilty
SMQ Airport Services
Abingdon, VA

Subscriber Categories
Aviation • Administration and Management • Policy

Research sponsored by the Federal Aviation Administration

2019


AIRPORT COOPERATIVE RESEARCH PROGRAM

Airports are vital national resources. They serve a key role in transportation of people and goods and in regional, national, and international commerce. They are where the nation's aviation system connects with other modes of transportation and where federal responsibility for managing and regulating air traffic operations intersects with the role of state and local governments that own and operate most airports. Research is necessary to solve common operating problems, to adapt appropriate new technologies from other industries, and to introduce innovations into the airport industry. The Airport Cooperative Research Program (ACRP) serves as one of the principal means by which the airport industry can develop innovative near-term solutions to meet demands placed on it.

The need for ACRP was identified in TRB Special Report 272: Airport Research Needs: Cooperative Solutions in 2003, based on a study sponsored by the Federal Aviation Administration (FAA). ACRP carries out applied research on problems that are shared by airport operating agencies and not being adequately addressed by existing federal research programs. ACRP is modeled after the successful National Cooperative Highway Research Program (NCHRP) and Transit Cooperative Research Program (TCRP). ACRP undertakes research and other technical activities in various airport subject areas, including design, construction, legal, maintenance, operations, safety, policy, planning, human resources, and administration. ACRP provides a forum where airport operators can cooperatively address common operational problems.

ACRP was authorized in December 2003 as part of the Vision 100—Century of Aviation Reauthorization Act. The primary participants in the ACRP are (1) an independent governing board, the ACRP Oversight Committee (AOC), appointed by the Secretary of the U.S. Department of Transportation with representation from airport operating agencies, other stakeholders, and relevant industry organizations such as the Airports Council International-North America (ACI-NA), the American Association of Airport Executives (AAAE), the National Association of State Aviation Officials (NASAO), Airlines for America (A4A), and the Airport Consultants Council (ACC) as vital links to the airport community; (2) TRB as program manager and secretariat for the governing board; and (3) the FAA as program sponsor. In October 2005, the FAA executed a contract with the National Academy of Sciences formally initiating the program.

ACRP benefits from the cooperation and participation of airport professionals, air carriers, shippers, state and local government officials, equipment and service suppliers, other airport users, and research organizations. Each of these participants has different interests and responsibilities, and each is an integral part of this cooperative research effort.

Research problem statements for ACRP are solicited periodically but may be submitted to TRB by anyone at any time. It is the responsibility of the AOC to formulate the research program by identifying the highest priority projects and defining funding levels and expected products.

Once selected, each ACRP project is assigned to an expert panel appointed by TRB. Panels include experienced practitioners and research specialists; heavy emphasis is placed on including airport professionals, the intended users of the research products. The panels prepare project statements (requests for proposals), select contractors, and provide technical guidance and counsel throughout the life of the project. The process for developing research problem statements and selecting research agencies has been used by TRB in managing cooperative research programs since 1962. As in other TRB activities, ACRP project panels serve voluntarily without compensation.

Primary emphasis is placed on disseminating ACRP results to the intended users of the research: airport operating agencies, service providers, and academic institutions. ACRP produces a series of research reports for use by airport operators, local agencies, the FAA, and other interested parties; industry associations may arrange for workshops, training aids, field visits, webinars, and other activities to ensure that results are implemented by airport industry practitioners.

ACRP SYNTHESIS 95

Project 11-03, Topic S04-20
ISSN 1935-9187
ISBN 978-0-309-48032-1
Library of Congress Control Number 2019938501

© 2019 National Academy of Sciences. All rights reserved.

COPYRIGHT INFORMATION

Authors herein are responsible for the authenticity of their materials and for obtaining written permissions from publishers or persons who own the copyright to any previously published or copyrighted material used herein.

Cooperative Research Programs (CRP) grants permission to reproduce material in this publication for classroom and not-for-profit purposes. Permission is given with the understanding that none of the material will be used to imply TRB, AASHTO, FAA, FHWA, FMCSA, FRA, FTA, Office of the Assistant Secretary for Research and Technology, PHMSA, or TDC endorsement of a particular product, method, or practice. It is expected that those reproducing the material in this document for educational and not-for-profit uses will give appropriate acknowledgment of the source of any reprinted or reproduced material. For other uses of the material, request permission from CRP.

NOTICE

The report was reviewed by the technical panel and accepted for publication according to procedures established and overseen by the Transportation Research Board and approved by the National Academies of Sciences, Engineering, and Medicine.

The opinions and conclusions expressed or implied in this report are those of the researchers who performed the research and are not necessarily those of the Transportation Research Board; the National Academies of Sciences, Engineering, and Medicine; or the program sponsors.

The Transportation Research Board; the National Academies of Sciences, Engineering, and Medicine; and the sponsors of the Airport Cooperative Research Program do not endorse products or manufacturers. Trade or manufacturers' names appear herein solely because they are considered essential to the object of the report.

Published reports of the AIRPORT COOPERATIVE RESEARCH PROGRAM are available from:

Transportation Research Board
Business Office
500 Fifth Street, NW
Washington, DC 20001

and can be ordered through the Internet by going to http://www.national-academies.org and then searching for TRB.

Printed in the United States of America


The National Academy of Sciences was established in 1863 by an Act of Congress, signed by President Lincoln, as a private, non-
governmental institution to advise the nation on issues related to science and technology. Members are elected by their peers for
outstanding contributions to research. Dr. Marcia McNutt is president.

The National Academy of Engineering was established in 1964 under the charter of the National Academy of Sciences to bring the
practices of engineering to advising the nation. Members are elected by their peers for extraordinary contributions to engineering.
Dr. C. D. Mote, Jr., is president.

The National Academy of Medicine (formerly the Institute of Medicine) was established in 1970 under the charter of the National
Academy of Sciences to advise the nation on medical and health issues. Members are elected by their peers for distinguished contributions
to medicine and health. Dr. Victor J. Dzau is president.

The three Academies work together as the National Academies of Sciences, Engineering, and Medicine to provide independent,
objective analysis and advice to the nation and conduct other activities to solve complex problems and inform public policy decisions.
The National Academies also encourage education and research, recognize outstanding contributions to knowledge, and increase
public understanding in matters of science, engineering, and medicine.

Learn more about the National Academies of Sciences, Engineering, and Medicine at www.national-academies.org.

The Transportation Research Board is one of seven major programs of the National Academies of Sciences, Engineering, and Medicine.
The mission of the Transportation Research Board is to increase the benefits that transportation contributes to society by providing
leadership in transportation innovation and progress through research and information exchange, conducted within a setting that
is objective, interdisciplinary, and multimodal. The Board’s varied committees, task forces, and panels annually engage about 7,000
engineers, scientists, and other transportation researchers and practitioners from the public and private sectors and academia, all
of whom contribute their expertise in the public interest. The program is supported by state transportation departments, federal
agencies including the component administrations of the U.S. Department of Transportation, and other organizations and individuals
interested in the development of transportation.

Learn more about the Transportation Research Board at www.TRB.org.


COOPERATIVE RESEARCH PROGRAMS

CRP STAFF FOR ACRP SYNTHESIS 95


Christopher J. Hedges, Director, Cooperative Research Programs
Lori L. Sundstrom, Deputy Director, Cooperative Research Programs
Marci A. Greenberger, Acting Manager, Airport Cooperative Research Program
Thomas Helms, Senior Program Officer
Stephanie L. Campbell, Senior Program Assistant
Eileen P. Delaney, Director of Publications
Natalie Barnes, Associate Director of Publications

ACRP PROJECT 11-03 PANEL


Joshua D. Abramson, Easterwood Airport Management, College Station, TX (Chair)
Debbie K. Alke, Montana DOT, Helena, MT (retired)
Gloria G. Bender, TransSolutions, LLC, Fort Worth, TX
David A. Byers, Quadrex Aviation, LLC, Melbourne, FL
David N. Edwards, Jr., Greenville–Spartanburg Airport District, Greer, SC
Brenda L. Enos, Burns & McDonnell, Kansas City, MO
Linda Howard, Independent Aviation Consultant, Bastrop, TX
Patrick W. Magnotta, FAA Liaison
Matthew J. Griffin, Airports Consultants Council Liaison
Liying Gu, Airports Council International–North America Liaison
Adam Williams, Aircraft Owners & Pilots Association Liaison
Christine Gerencher, TRB Liaison

TOPIC S04-20 PANEL


Scott R. Brummond, Wisconsin DOT, Madison, WI
Benjamin Goodheart, Magpie Human Safety Systems, Evergreen, CO
David Ison, Embry-Riddle Aeronautical University, Portland, OR
Edward K. McDonald, III, Port of Portland, Portland, OR
Frank Rivera, Massachusetts Port Authority, East Boston, MA
Roger Studenski, Jacksonville Aviation Authority, Jacksonville, FL
Michael Yip, Dallas Fort Worth International Airport, DFW Airport, TX
Dale A. Williams, FAA Liaison
Ashley Sng, Airports Council International–North America Liaison
Christine Gerencher, TRB Liaison


ABOUT THE ACRP SYNTHESIS PROGRAM


Airport administrators, engineers, and researchers often face problems for which information
already exists, either in documented form or as undocumented experience and practice. This infor-
mation may be fragmented, scattered, and unevaluated. As a consequence, full knowledge of what has
been learned about a problem may not be brought to bear on its solution. Costly research findings
may go unused, valuable experience may be overlooked, and due consideration may not be given to
recommended practices for solving or alleviating the problem.
There is information on nearly every subject of concern to the airport industry. Much of it derives
from research or from the work of practitioners faced with problems in their day-to-day work. To
provide a systematic means for assembling and evaluating such useful information and to make it
available to the entire airport community, the Airport Cooperative Research Program authorized the
Transportation Research Board to undertake a continuing project. This project, ACRP Project 11-03,
“Synthesis of Information Related to Airport Practices,” searches out and synthesizes useful knowl-
edge from all available sources and prepares concise, documented reports on specific topics. Reports
from this endeavor constitute an ACRP report series, Synthesis of Airport Practice.
This synthesis series reports on current knowledge and practice, in a compact format, without the
detailed directions usually found in handbooks or design manuals. Each report in the series provides
a compendium of the best knowledge available on those measures found to be the most successful
in resolving specific problems.

FOREWORD
By Thomas Helms
Staff Officer
Transportation Research Board

The focus of this report is on current practices for defining, collecting, aggregating, protecting, and
reporting airport organizational incident information. This study is based on information acquired
through literature review, survey results from 11 airports from a range of geographic locations and
airport classifications, and personal interviews with representatives of seven airports. Results of
the literature review and survey are presented. Case examples representing in-depth interviews are
presented in Chapter 9.
Steve M. Quilty, SMQ Airport Services, synthesized the information and wrote the report.
The members of the topic panel are acknowledged on page iv. This synthesis is an immediately
useful document that records the practices that were acceptable within the limitations of the
knowledge available at the time of its preparation. As progress in research and practice continues,
new knowledge will be added to that now at hand.


CONTENTS

1 Summary
5 Chapter 1 Introduction
5 Objective
6 Incident Reporting Overview
6 Safety-Centric Data
7 Enterprise-Centric Data
7 Mandatory Reporting
8 Voluntary Reporting
8 Using Incident Reporting
9 Synthesis Benefits
10 Audience
11 Methodology
11 Literature Review
12 Report Organization

13 Chapter 2  Terms and Definitions


13 Confusion Over the Term “Incident”
14 Defining Incident
16 Defining Near Miss/Close Call
17 Importance of Near-Miss Incident Reporting
18 Investigating Incidents

19 Chapter 3  Incidents Applied to Safety and Risk


20 Risk Management
20 Defining Risk
22 Hazards
22 Enterprise Risk Management
25 Scorecards and Dashboards

26 Chapter 4  Breadth and Depth of Incident Reporting


26 Incident Reporting as a Tool
28 Incident Reporting and Threat and Error Management

29 Chapter 5  Encouraging Incident Reporting


30 Safety Culture
31 Incident Reporting Practices and Culture
32 Culture Survey
32 Different Ways to Report Incidents


34 Chapter 6  Organizational Performance Indicators


34 Metrics, Measures, and Indicators
36 Safety and Key Performance Indicators
37 Reasons for Identifying Safety and Key Performance Indicators
38 Types of Safety and Key Performance Indicators
39 Leading and Lagging Indicators
40 Balance Between Leading and Lagging Indicators
42 Safety Culture as an Indicator
42 Customer Satisfaction as an Indicator
42 Training as an Indicator

43 Chapter 7  Research and Resources on Indicators and Metrics


43 Resources
44 Key Performance Indicators
45 Safety Performance Indicators
45 Leading/Lagging Indicators
45 Culture
45 Safety Management System and Safety Management Manual
46 Environmental
46 Automated People Mover
46 Health and Safety
47 Security

48 Chapter 8  Practices in Incident Reporting


48 Corroborating Incident Data
49 Overcome Barriers to Nondisclosure
49 Tracking Incident Data
50 Tracking Methods
51 Data Protection

52 Chapter 9  Case Examples


52 Large Hub: Hartsfield-Jackson Atlanta International Airport, GA
53 Large Hub: Dallas-Fort Worth International Airport, TX
55 Large Hub: Houston Airport System, TX
56 Large Hub: Port of Portland, OR
57 Large Hub: Seattle-Tacoma International Airport, WA
57 Medium Hub: Columbus Regional Airport Authority, OH
58 Small Hub: Sarasota-Bradenton Airport Authority, FL

60 Chapter 10  Findings and Conclusions


60 Findings
60 Challenges
61 Conclusions
61 Suggestions
62 Further Research

63 References
66 Acronyms and Abbreviations


67 Appendix A  Survey Responses


72 Appendix B  Terms and Definitions
80 Appendix C  Typical Accident and Incident Statistical Rates
82 Appendix D Example of Safety Indicators and Metrics
for 14 CFR Part 139
90 Appendix E Sample Poster and Web Display for
Confidential Incident Reporting
91 Appendix F Sample Policy Requiring Employee
Incident Reporting
95 Appendix G Sample Safety Policy Requiring Incident Reporting
97 Appendix H  Sample Employee Incident Report Form
99 Appendix I Example of Computer Incident Reporting
Data Entry Screen
101 Appendix J  Sample Safety Metrics Dashboard

Note: Photographs, figures, and tables in this report may have been converted from color to grayscale for printing.
The electronic version of the report (posted on the web at www.trb.org) retains the color versions.


SUMMARY

Airport Incident Reporting Practices

This synthesis examines current practices for defining, collecting, aggregating, analyzing,
protecting, and reporting airport organizational incident information. It serves as an infor-
mative document for those airport operators seeking to understand the nature of airport
incident reporting and its importance for organizational learning and effectiveness, risk
management, operational safety, and worker safety.
Additional objectives of the synthesis included the following:
• Review the literature on incident reporting.
• Survey and interview airport organizations and present case examples related to incident
reporting.
• Identify incident data that can be used as benchmark measures for assessing overall
airport performance, risk, and safety analysis.
• Identify leading/lagging indicators and metrics used by airports. These indicators can
then help address the status of organizational communication, commitment, training,
and procedures as they relate to an overall safety and performance culture.
An incident reporting system can flag, or provide early warning of, drift in actions away from a stated goal or toward an adverse event or loss. Such drift in actions or behaviors can give a positive, neutral, or negative indication of an airport's strategic and managerial direction. Common indicators can provide early warning of organizational or operational changes, or can help management discover weaknesses in an airport system. Indicators allow a situation to be evaluated and monitored, and ultimately support the correction and prevention of an incident or the successful achievement of a goal. Studying incidents is essential to learning how a complex airport actually works.
Discussions of incident reporting refer to safety, hazards, indicators, performance, enterprise risk management, culture, climate, and other related terms. However, there is no universal agreement as to what constitutes an incident. For this reason, the synthesis took a broad approach to incident reporting in organizations. It views incident reporting as a means to improve airport organizations through the analysis of data; with data, decision-making can be better informed and of higher quality.
The challenge for airport organizations is to identify what data to collect and to pay
attention to, given the numerous types of incident data that could be analyzed. A second
challenge is how to utilize the incident data in either a proactive or preventive manner. Case
examples are presented to illustrate how incident reporting is being conducted at several
airports. The accurate reporting of incidents can be influenced by a number of factors. Two
such factors are an organization’s safety culture and the actions of supervisors who enforce
reporting requirements.



This synthesis helps airport operators gain perspective on the use of incident reporting
systems and terms, the importance of incident reporting in an enterprise risk management
program, and how an airport’s safety culture can affect incident reporting. Information
provided and resources identified in this synthesis can further assist airport operators to
• Establish and document organizational and safety reporting practices.
• Identify reporting mechanisms, indicators, metrics, dashboards, and benchmarks
used in safety and risk management analyses and in organizational performance and
evaluation.
• Understand the role of incident reporting in enterprise risk management practices.
• Strengthen workplace safety policies and practices.
• Gain intelligence about daily operations through the use of leading and lagging indicators.

The depiction of an iceberg is often used to symbolize the nature of incident and accident potential. The visible above-water portion of the iceberg represents incidents, accidents, and risks that are known. The larger below-water portion represents incidents that are unknown or that go unreported. One role of airport management is to continually gauge the number, size, and movement of the metaphorical icebergs that exist at their airport.
Differences in the definitions and meanings of the term “incident” can make it difficult to compare data (and hence metaphorical icebergs) between airports if incident definitions are not standardized and do not measure the same things. Airports seeking to make comparisons with other airport organizations will need to use caution and ensure that the measures being compared are similar.
This synthesis defines an incident as something that is out of the ordinary. This definition allows for the capture of data that can be positive, neutral, or negative, and that can be centric to any topic of choice: safety, business, risk, information, security, information technology (IT), and so on. The term “incident” then takes on a plain data connotation that, once analyzed, can be used as a window for viewing the operation of an airport, whatever the element being analyzed. Collecting incident data allows an organization to better understand what makes it successful, not just what leads to loss, damage, or injury. In both safety-centric and enterprise-centric incident reporting, the goal is to obtain knowledge, and in particular collective knowledge, about the organization and its environment.
Incident reporting carries different connotations depending on the experience and per-
spectives of the user. The definition of “incident” used for this synthesis has two primary
meanings that can be categorized as safety-centric and enterprise-centric. The more common
understanding of an incident is safety-centric. It refers to an event, activity, occurrence,
opportunity, or similar happening that could or does affect the safety of personnel, equip-
ment, or operations at an airport. Current practice at airports tends to capture primarily
safety-centric data. The survey found that the other meaning of incident, enterprise-centric,
is gaining momentum and importance. It is related to business and organizational risk and
performance. It relates to an event, occurrence, activity, opportunity, or similar happening
that can affect particular organizational goals or performance pathways.
Incident reporting systems serve two basic functions:
1. They identify factors and conditions that can influence organizational performance.
2. They can enable proactive and predictive safety strategies that allow for better management
and achievement of safety, business, or strategic goals.
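To make these two functions concrete, the following minimal sketch shows how even a simple reporting structure supports aggregation into indicators. It is illustrative only; the field names, categories, and sample entries are assumptions for the example, not data or a schema from this synthesis.

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class IncidentReport:
    """One record in a simple incident reporting system (fields are illustrative)."""
    reported_on: date
    category: str      # e.g., "safety" or "enterprise"
    location: str      # e.g., "ramp", "terminal", "landside"
    description: str
    anonymous: bool = False

def incidents_by_category(reports):
    """Basic aggregation: count reports per category so factors and conditions
    affecting performance can be tracked over time (function 1 above)."""
    return Counter(r.category for r in reports)

reports = [
    IncidentReport(date(2019, 3, 1), "safety", "ramp", "near miss: tug and fuel truck"),
    IncidentReport(date(2019, 3, 4), "safety", "terminal", "wet-floor slip, no injury"),
    IncidentReport(date(2019, 3, 9), "enterprise", "landside", "parking system outage"),
]
print(incidents_by_category(reports))  # Counter({'safety': 2, 'enterprise': 1})
```

A real system would add a review workflow, root-cause fields, and privacy protection for the reporter; the sketch only illustrates how raw reports become countable data that can feed proactive strategies (function 2).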
This synthesis complements and substantiates the information contained in ACRP
Synthesis 58: Safety Reporting Systems at Airports (Landry 2014). While ACRP Synthesis 58


focused on 14 CFR Part 139, Certification of Airports, this current synthesis encompasses
the whole airport and involves a risk management approach. The focus is more on inci-
dent collection and reporting related to near misses and observable events, rather than
to accident reporting. It ties incident reporting to risk management, metrics, safety and
performance indicators, and organizational performance.
General characteristics and best practices of incident reporting systems culled from the
literature search, survey, and interviews include the following key attributes:
• The organization supports and encourages a culture of reporting hazards and incidents.
• Reporting is made easy and received from a broad range of sources.
• Avenues exist for employees, tenants, users, and the general public to participate in the
reporting process.
• The reporting system is non-punitive and protects the privacy of those who make
reports.
• A structured process is in place for reviewing and investigating incidents, identifying root
causes and the weaknesses in the system, and developing action and implementation
plans.
• Feedback is provided to the person making the report, if the report is not anonymous.
• Investigative results are used to improve safety systems, hazard control, risk reduction,
and lessons learned through training and continuous improvement.
• Information or summaries of investigations are disseminated to stakeholders in a timely
manner, as part of the feedback and culture process.
The chapters and appendices contain examples of safety and key performance indica-
tors, leading and lagging indicators, and various forms and case examples of airports using
incident reporting processes. A common conclusion found in the literature search is that
a balance is needed between lagging and leading measures when selecting performance
measures.
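As one concrete illustration of that balance, a widely used lagging measure is the OSHA recordable incident rate, which normalizes incident counts to 200,000 employee-hours (100 full-time workers for one year); a leading counterpart might be the rate of voluntary hazard reports over the same exposure. The sketch below is illustrative only; the figures and the pairing of indicators are assumptions, not data from this synthesis.

```python
def osha_incident_rate(recordable_incidents: int, hours_worked: float) -> float:
    """Lagging indicator: recordable incidents per 200,000 employee-hours
    (the OSHA baseline of 100 full-time workers x 2,000 hours per year)."""
    return recordable_incidents * 200_000 / hours_worked

def hazard_report_rate(hazard_reports: int, hours_worked: float) -> float:
    """Leading indicator: voluntary hazard reports per 200,000 employee-hours.
    In a healthy reporting culture this rate rising is a good sign."""
    return hazard_reports * 200_000 / hours_worked

# Hypothetical airport operations department: 150 staff for one year.
hours = 150 * 2_000
print(osha_incident_rate(6, hours))   # 4.0
print(hazard_report_rate(45, hours))  # 30.0
```

Tracking the two rates side by side is one way to keep the leading/lagging balance visible on a dashboard rather than reporting injury counts alone.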
Eleven airports participated in the study, primarily large and medium hub airports.
Underrepresented in the survey are responses from small-hub, non-hub, and general avia-
tion (GA) airports, as airport managers in those categories did not express a willingness
to participate in the study. It is recognized that the subject matter, including discussion of
enterprise risk management and having formal incident reporting systems in place, may
have contributed to the poor response. It is possible that airports in the underrepresented
categories do not have well-developed incident reporting systems, or do not have any
systems in place at all. However, small airports can benefit from the information provided
as they can model the activities of the larger airports.
The following are key findings from the study:
• Few airports have formal incident reporting systems that capture incident data other than
those related to the need for regulatory compliance.
• Incident reporting, data collection, and analysis processes are not fully developed at
airports, especially voluntary reporting capabilities.
• Airports with better-developed incident reporting practices tend to have a champion
within the organization who supports and shepherds the processes.
• The breadth and depth of an incident reporting system is determined by each airport,
based on resources available and commitment of management to establish a culture of
safety.
• There is variation among airports as to which person or department has responsibility for
incident data collection, reviewing, analyzing, and reporting.


• Safety and key performance indicators, dashboards, and scorecards are used by only a
few airports.
• Safety and key performance indicators, leading indicators, and hazard identification are
not well understood at a majority of airports, indicating a need for training and education
in those areas.
• It is difficult to benchmark or compare safety and key performance indicators among
airports, as they reflect the individual goals of an airport, and those goals can vary widely
given the type, size, and nature of an airport.


CHAPTER 1

Introduction

Objective
This synthesis examines current practices for defining, collecting, aggregating, analyzing,
protecting, and reporting airport organizational incident information. However, there is no universal agreement as to what constitutes an incident. The traditional purpose of collecting incident data is to enable an airport operator to monitor existing operations, identify current hazards and risks, forecast future possibilities, identify and understand safety trends, improve operational and functional tasks, and monitor progress toward the airport's various goals. Another purpose, though, is to gain knowledge and understanding of how the organization functions.
When discussing incident reporting, one’s frame of reference may include concepts of
safety, hazards, risk indicators, performance indicators, enterprise risk management, culture,
climate, and other related terms. While incident reporting is often viewed as having negative
safety connotations, more recent viewpoints consider incidents as having positive connotations.
For this reason, a broad approach to incident reporting in organizations is assumed in this
synthesis. Collecting incident data allows an organization to better understand what makes it
successful, in addition to identifying events that can lead to damage, loss, or injury.
This synthesis is intended to serve as an informative document for those airport operators
seeking information about establishing processes that address concerns of operational safety,
worker safety, risk management, and organizational effectiveness. It helps identify different
types of incidents that can occur on airports, how the incidents are reported, how the data are
tracked, and how the data are used. The full airport environment is considered in this synthesis,
to include incident collection from the airfield, terminal, landside, and tenant operations.
The chapters and appendices further contain examples of safety and key performance indica-
tors, leading and lagging indicators, case examples of airports using incident reporting processes,
and sample forms used by airports.
Confusion may exist as to whether this report, in using the term "incident reporting," is
also describing a safety reporting, hazard reporting, or threat and error management
(TEM) system. There is commonality among these uses. This report assumes a broad interpretation
of the term "incident reporting" that includes safety reporting, hazard reporting, and TEM. It also
includes incidents related to the achievement of organizational goals and to events exceeding
acceptable organizational risk tolerances. It further views incidents as having positive, neutral,
and/or negative aspects.
The following additional objectives were sought:
• Review and synthesize the literature on incident reporting.
• Survey and interview airport organizations and present case examples of incident reporting.


• Identify incident data that can be used as benchmark measures for assessing overall airport
performance, risk, and safety analysis.
• Identify leading/lagging indicators and metrics used by airports. These indicators would
then help address the status of organizational communication, commitment, training, and
procedures as they relate to an overall safety and performance culture.
• Help identify what information can be used to more completely understand normal
operations.
An earlier ACRP synthesis investigated safety reporting methods and systems for airports
certificated under Title 14 Code of Federal Regulations (CFR) Part 139 (14 CFR Part 139) by
assessing practices, processes, and systems used to collect and analyze safety data and infor-
mation. Specifically, ACRP Synthesis 58: Safety Reporting Systems at Airports (Landry 2014)
considered two aspects of safety data reporting by Part 139 airport operators: mandatory
(required as part of regulatory compliance or management oversight) and voluntary, such as
safety committees, safety groups, or as part of a safety management system (SMS) proposed for
Part 139 airports.
ACRP Synthesis 58 included research on large, medium, small, non-hub, GA, and joint
civilian/military use airports at various locations throughout the United States. The study breaks
out its results by airport size for the benefit of all airport operators interested
in collecting, analyzing, and reporting safety data.
This current synthesis complements and substantiates the information contained in ACRP
Synthesis 58. Readers of this synthesis are encouraged to review ACRP Synthesis 58 for more
detailed information about types of mandatory and voluntary reporting that occurs on airports,
methods used to collect and analyze information, software systems available, data use and
sharing with external entities, and legal concerns about data sharing.
This current synthesis differs from ACRP Synthesis 58 in its target of investigation and in
its explanation of incident reporting. This synthesis’s focus is more on incident reporting as it
relates to near misses and observable events, rather than to accident reporting. It ties incident
reporting to risk management, indicators, and to organizational performance. Lastly, it looks at
incident reporting as it affects, or could affect, the whole airport and not just the airfield.

Incident Reporting Overview


Ask any airport executive or employee what the airport's top goal is, and the most
likely response is "Safety!" The goal of safety is routinely emphasized in the aviation and airport
industry and stems from the regulatory bodies and from public expectations. The mission of
the FAA is to provide the safest, most efficient aerospace system in the world. For this reason,
incident reporting tends to take on a safety-centric approach. It is an important element of
achieving a safe operation. But recent emphasis on incident reporting from the risk manage-
ment perspective emphasizes a more enterprise-centric approach to use of the term, as applied
to all areas of the organization, not just safety. Both approaches are presented throughout this
synthesis.

Safety-Centric Data
Because a universal goal of airports is safety in their operations and
activities, it is common for an airport organization to collect information that helps identify
how safe its airport may or may not be. The information collected will vary among airports.
The ability to reduce the impact of health, safety, and environmental issues within an airport
organization can rest on how well an airport can track, manage, understand, and use information
provided by an incident reporting system.
Steps taken by organizations to ensure safety often begin with the desire to prevent accidents
and incidents from occurring. An airport organization will put into place any number of differ-
ent processes and systems to prevent accidents and injuries. A primary question to ask is, "How
can weaknesses in the processes and systems be addressed proactively (rather than reactively)
to maintain a safe environment or to achieve specific performance goals?" When developing
measures that might be used in an incident reporting system, another question to ask is, “What
is required from the airport organization in order to be aware of its safety level and enhance its
safety performance?” (Reiman and Pietikäinen 2010, p. 32).

Enterprise-Centric Data
Organizations routinely seek to collect information that provides an indication of whether
specific goals or program objectives are on target within the organization. Beyond safety goals,
airport organizations develop strategic, business, financial, operational, marketing, environmental,
and numerous other goals and plans. Meeting established goals requires an organization to
monitor and measure progress toward them. Leading and lagging indicators can
be used for these purposes. However, the use of any indicator requires the collection of basic
data. Most airports collect data, but not necessarily in a formalized and all-encompassing manner.
An incident reporting system is one formalized mechanism for collecting data.
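As one minimal illustration of what a formalized collection mechanism might capture, the sketch below stores hypothetical incident reports and derives a lagging measure (incidents recorded per month) alongside one possible leading measure (the share of reports closed with a corrective action). The schema, field names, and metric choices are illustrative assumptions, not practices drawn from the surveyed airports.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class IncidentReport:
    # All fields are illustrative; an airport would define its own schema.
    report_id: int
    month: str                       # reporting period, e.g., "2018-03"
    category: str                    # e.g., "ramp", "terminal", "landside"
    corrective_action_closed: bool   # was a corrective action completed?

reports = [
    IncidentReport(1, "2018-03", "ramp", True),
    IncidentReport(2, "2018-03", "terminal", False),
    IncidentReport(3, "2018-04", "ramp", True),
]

# Lagging indicator: counts of incidents already recorded, by month.
incidents_per_month = Counter(r.month for r in reports)

# One possible leading indicator: the proportion of reports closed with a
# corrective action, a rough proxy for organizational follow-through.
closure_rate = sum(r.corrective_action_closed for r in reports) / len(reports)

print(dict(incidents_per_month))  # {'2018-03': 2, '2018-04': 1}
print(round(closure_rate, 2))     # 0.67
```

The same raw reports can feed many different indicators; the point is only that any indicator presupposes a basic, consistently structured data record.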
An example of an incident related to the attainment of goals could be a miscommunication
of sorts that leads to a task not being completed, whether as a result of different languages, a
language deficiency, use of cultural references, word choices, or something similar. An incident
could also be one in which excellent communication was exercised to exceed the task requirements.
Both types of incident are important to capture through reporting and data collection.

Mandatory Reporting
To obtain data and learn from accidents, regulatory requirements exist for airports to collect
data on accidents, injuries, and other major incidents. Among these are the federal requirements
of the NTSB, the Occupational Safety and Health Administration (OSHA), and the FAA. For
example, regulations an airport must consider include 14 CFR Part 139, Certification of Airports;
40 CFR Part 112, Oil Pollution Prevention; and 29 CFR 1910, Occupational Safety and Health
Standards. State or local requirements may also apply in support of legal obligations, such
as local law enforcement or medical reporting, and various state or local laws may require
reporting of accidents or incidents (e.g., police and insurance matters).
Laws and regulations can require mandatory reporting of identified conditions, events, or
situations. The information reported generally reflects an accident or major incident that has
already occurred. Even with mandatory reporting requirements, a key concern is the
number of incidents that go unreported, as each can represent a lost opportunity
to learn how to improve the organization. In one study of rail industry data between 2005
and 2010, it was estimated that 500 to 600 reportable accidents were not reported (Safety and
Standards Board 2011). According to the authors, those numbers do not include incidents that
did not merit a mandatory report, which the same study estimated to be three times as
many. The study suggests the depiction of an iceberg as a symbol of the nature
of incident and accident potential. The visible above-water portion of an iceberg characterizes
incidents, accidents, and risks that are known. The larger below-water portion of an iceberg
characterizes incidents that are unknown or that go unreported.
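The rail-industry figures cited above can be combined into a rough estimate of the "below-water" portion of the iceberg. The arithmetic below only recombines the study's own numbers (500 to 600 unreported reportable accidents, plus roughly three times as many incidents falling below the mandatory-reporting threshold); it introduces no new data.

```python
# Estimated unreported reportable accidents, 2005-2010 (low, high), as cited.
unreported_reportable = (500, 600)

# Incidents below the mandatory-reporting threshold were estimated by the
# same study at roughly three times the unreported reportable accidents.
below_threshold = tuple(3 * n for n in unreported_reportable)

# Rough size of the hidden portion of the "iceberg" over the study period.
hidden_total = tuple(a + b for a, b in zip(unreported_reportable, below_threshold))

print(below_threshold)  # (1500, 1800)
print(hidden_total)     # (2000, 2400)
```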


Voluntary Reporting
It is the intentional seeking, or the voluntary acquisition of, precursor incident data that can
prove most useful in preventing future accidents or in predicting future direction and risks.
Every day at airports, incidents occur that narrowly avoid becoming accidents or causing serious
injury, or that indicate the airport may be drifting from its stated goals. Such events can be
indicative of a gap in operational or performance capabilities. Often, these small incidents go
unreported. But an airport organization cannot prevent a future
accident or injury, or stay on course, if it does not know the possibilities that exist. In a workshop
paper prepared for the National Academy of Engineering Program Office Accident Precursors
Project, presenter Christopher Hart states,
Most industries have begun to consider the feasibility of collecting and analyzing information about
precursors before they result in mishaps. Too often, the “hands-on” people on the “front lines” note, after a
mishap, that they "all knew about that problem." The challenge is to collect the information "we all know
about” and do something about it before it results in a mishap. (Hart 2004, p. 149).

Using Incident Reporting


One of the primary purposes for identifying hazards, reporting incidents, and assessing
risks is to determine whether enough has been done to prevent an incident or accident
that may lead to fatalities, injuries, ill health, and/or damage to aircraft (Civil Aviation Authority
2006). Another primary purpose, based on survey responses gathered for this synthesis, is to
determine whether a stated organizational goal is on track.
An incident reporting system can be utilized to flag or provide early warning of potential drifts
in actions toward a stated goal or toward an adverse event or loss. A drift in actions or behaviors
can be in a positive, neutral, or negative direction. Indicators can be used to provide early
warnings of changes, or to discover weaknesses in an airport system. Indicators allow for
evaluation and monitoring of a situation, and ultimately the correction and prevention of an
incident or the achievement of a goal.
One view treats some indicators as precursors to a future negative incident (often referred to as
lagging indicators); another is to measure signs of changing vulnerabilities through the use
of leading indicators (Kjellén 2009, p. 486). Understanding how vulnerabilities are manifested
within an organization can heighten awareness to a possible incident precursor that previously
was unknown. For this reason, recognizing incident precursors through root cause analyses or
similar means can allow for corrective action that enhances goal attainment and safety.
The indicators an airport organization chooses to monitor as part of an incident reporting
system will be influenced by the need for risk control and for safety development. The Organisa-
tion for Economic Co-operation and Development (OECD) views leading or activity indicators
as being designed to help identify whether enterprises/organizations are taking actions believed
necessary to lower risks (OECD 2008).
The literature on incident management notes that analyses of near misses, close calls, incidents,
mishaps, events, threats, errors, and similar occurrences are important for anticipating future
major incidents and accidents. It is essential to study incidents to learn how a complex airport
actually works. The investigation of incidents and the determination of root causes can be
effective in identifying and understanding the unforeseen and complex interactions that result
in unsafe conditions or accidents, provided proper attention is paid to any lessons that can
be learned.
One key method for understanding hazard and risk possibilities at an airport is through
use of a safety management system (SMS). An SMS contains policies, objectives, procedures,
methods, roles, and functions for the purpose of controlling hazards and risks. Incident reporting
is a fundamental component of an SMS. It includes the identification, reporting, and manage-
ment of hazards and incidents that routinely exist or that can occur at an airport. An SMS helps
an organization better understand and manage accident precursors, or the events leading up
to an accident. Seven of the airports in this synthesis have an SMS program in place. None
were fully implemented to date. However, three of the seven airports in the survey did indicate
they could closely meet a requirement for implementation within the first 6 months of 2018,
if required.
A second key method for understanding the hazard and risk possibilities is to develop an
organization-wide enterprise risk management (ERM) program. A risk management program
raises the level of awareness and understanding about hazards, risks, and opportunities into
areas beyond those that would be covered by a Part 139 SMS. It would look at enterprise-wide
risk and incident exposure within the other departments, facilities, operations, and strategic
initiatives of the organization.
A third key method for understanding the hazard and risk possibilities is to routinely conduct
root cause analyses, whether as part of an SMS safety assessment (SA) or as part of an ERM
assessment. Root cause analyses require the collection of incident data to better assess the
probability of an event occurring or to identify obstacles that can impede goal attainment.
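To make the link between collected incident counts and assessed probabilities concrete, the sketch below shows one naive frequency-based estimate with additive smoothing, so that a count of zero does not imply zero risk. The function, smoothing constant, and exposure figures are hypothetical illustrations, not a method prescribed by this synthesis or any cited source.

```python
def event_probability(occurrences, exposure_units, smoothing=1):
    """Naive per-unit probability estimate with additive (Laplace) smoothing.

    Illustrative only: real airport risk assessment would weigh severity,
    causal context, and data quality, not just raw counts.
    """
    return (occurrences + smoothing) / (exposure_units + 2 * smoothing)

# Hypothetical figures: 3 recorded incidents over 10,000 aircraft movements.
p = event_probability(3, 10_000)
print(round(p, 6))  # 0.0004
```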

Synthesis Benefits
From a safety-centric perspective, incident reporting helps identify areas for improvement in
the safety of airport operations. This is accomplished through the timely detection of operational
hazards and system deficiencies, learning from the investigation and analyses of those incidents,
and the resultant improvement of the organization though training and safety implementa-
tion. Accidents are prevented or organizational risks are reduced as a result of prompt analysis
of collected safety and risk data, remedial mitigation actions taken, and the exchange of safety
information both internal and external to the airport organization.
From an enterprise-centric perspective, incident reporting helps airports gain access to
operational intelligence throughout the organization. Accidents are prevented or organizational
risks are reduced as a result of more effective understanding of where and how risk is created in
the organizational system, and by supporting more intelligent allocation of resources to respond
to risk in near real time, with consideration from varied user perspectives, and with knowledge
of how work is actually done (B. Goodheart, personal communication, November 28, 2017).
Information provided in this synthesis can assist airport operators in establishing and docu-
menting organizational and safety reporting practices; identifying reporting indicators, metrics,
dashboards, and benchmarks used in safety and risk management analyses and in organizational
performance and evaluation; and strengthening workplace safety policies and practices.
Incident reporting systems have basically two functions:
1. They identify factors and conditions that can influence organizational performance.
2. They can enable proactive and predictive safety strategies that allow for better management
and achievement of safety, business, or strategic goals.
Macrae (2016), who has researched airline incident reporting, also investigated incident
reporting in the healthcare industry and makes the following statement on the function of an
incident reporting system:
The core functions of an incident reporting system are twofold. One is to use incidents to identify and
prioritise which aspects of a healthcare system and its underlying risks need to be examined more closely.
The other is to organise broader investigation and improvement activities to understand and address those
risks. These active processes of investigation, inquiry and improvement underpin learning. (p. 74)

As compiled by the principal investigator, the research literature identifies a number of goals
or purposes for the reporting and investigation of incidents:
• Identify hazards and/or highlight potential systems weaknesses.
• Analyze the reported information to identify safety or organizational risk.
• Proactively find ways to prevent an accident or injury at some future time.
• Bring to light valuable information that may not otherwise be discovered.
• Confirm any safety efforts taken.
• Determine costs associated with an event.
• Fulfill legal requirements.
• Determine compliance with applicable regulations.
• Process worker compensation claims.
• Engage the workforce, users, and other stakeholders in solving problems.
• Develop positive attitudes and a culture of safety.
As reported by Simmons (2015), effective incident reporting can have significant benefits:
• Improvements in business performance and operational capability,
• Protection of people,
• Protection of brand and reputation,
• Reduction in risk to the organization/operation,
• Improved competitive and strategic advantage, and
• Improved efficiency through effective integration of business and safety management
systems.
In addition to operational benefits, the research literature shows financial and economic
benefits to be realized from implementing an incident reporting system. Accidents and
major incidents were found to negatively affect the financial results of an aviation organization.
As shareholder and market value are lost, losses are absorbed as part of the regular costs of doing
business, and individual departments or individual SMS interventions incur costs and/or savings
associated with safety (Lercel et al. 2011). Lercel’s return-on-investment model illustrates the
business benefits of safety programs, such as an SMS, by using a macro-to-micro analysis. It
characterizes SMS in terms of an investment portfolio that consists of multiple safety programs
with varying rates of return, risk, and maturity terms. The author’s premise was that it is a better
use of aviation company funds to invest in SMS programs that will prevent accidents than to
forego SMS and absorb the financial impact of accidents that could have been avoided.
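Lercel's portfolio model is not reproduced here, but its underlying premise can be sketched as a simple expected-value comparison: a safety investment pays off when the expected losses it avoids exceed its cost. Every figure below is hypothetical and is chosen only to show the form of such a calculation.

```python
def sms_net_benefit(annual_cost, accident_probability, accident_loss, risk_reduction):
    """Expected annual net benefit of a safety investment (illustrative only).

    risk_reduction is the assumed fraction by which the program cuts the
    annual probability of an accident; all inputs here are hypothetical.
    """
    expected_loss_avoided = accident_probability * risk_reduction * accident_loss
    return expected_loss_avoided - annual_cost

# Hypothetical inputs: a $250,000/year program, a 2% baseline annual accident
# probability, a $30 million loss if an accident occurs, and an assumed 50%
# risk reduction. A positive result favors the investment.
print(sms_net_benefit(250_000, 0.02, 30_000_000, 0.50))
```

A real analysis, as Lercel et al. describe, would treat the SMS as a portfolio of programs with differing returns, risks, and maturities rather than a single expected-value line item.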
Maslen and Hayes (2016) suggest another important purpose for incident reporting systems—
to obtain knowledge, and in particular, collective knowledge about the organization and its
environment. Their research suggests that the question isn't so much what needs to be
reported or learned from an incident. Instead, they ask, "What do people need to know to
play their part in major accident prevention? And how is that knowledge effectively shared?"

Audience
The principal audiences for this synthesis study are the following:
1. Airport directors and executives, including finance and legal departments.
2. Airport operational and divisional managers.
3. Risk and safety management professionals.
4. Human resource and personnel directors.
5. Internal and external compliance auditors.
6. Airport consultants and independent practitioners.
7. Fixed or specialized aeronautical service operators operating on airports.

Methodology
Finding airport organizations with incident reporting systems other than those that are
mandatory or regulatory was a challenge for this synthesis. The difficulty affected the study
methodology. A three-step process was used:
1. A general mail inquiry was sent to 105 selected airports of all sizes, asking whether the airport
organization had a formal incident reporting system and collected key performance indicator
(KPI), safety performance indicator (SPI), or other similar data. A self-addressed return
postcard was provided with each invitation letter. No response was
received from 77 airports.
2. Airports that responded affirmatively to the inquiry were asked to formally participate in the
study. Of the 28 airports that responded, 11 self-selected to further participate in the study
(Table 1). A survey questionnaire was developed and pretested with three airports. A final
survey was mailed to the 11 airports. Appendix A contains the survey and the responses to it.
Throughout the report, a particular survey question is referenced by the use of brackets and
italics: [Question].
3. Based on the 11 survey responses, seven were selected for more in-depth analyses of their
practices. Personal interviews were conducted with respondents. Their case examples are
presented in Chapter 9.
Underrepresented in the survey are responses from small, non-hub, and GA airports; only
one airport in those categories expressed a willingness to participate in the study. It is
recognized that the subject matter, including discussion of ERM and having formal incident
reporting systems in place, may have contributed to the poor response. It was surmised that
airports in the underrepresented categories may not have well-developed reporting systems, or
any such systems, in place.
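The response funnel described in the three-step process above can be restated numerically; the short script below only recomputes the counts already reported.

```python
invited = 105       # airports mailed the initial inquiry
no_response = 77    # airports that never replied
surveyed = 11       # airports that self-selected to complete the survey
interviewed = 7     # airports selected for in-depth case examples

responded = invited - no_response
print(responded)                            # 28
print(round(100 * responded / invited, 1))  # initial response rate: 26.7 (%)
print(round(100 * surveyed / invited, 1))   # survey participation: 10.5 (%)
```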

Table 1.   Participating airports and hub designation.

LH   Atlanta Hartsfield-Jackson International Airport, GA
LH   Massachusetts Port Authority – Boston International Airport, MA
LH   Dallas-Fort Worth International Airport, TX
LH   Houston Airport System, TX
LH   Metropolitan Washington Airport Authority – Dulles International, VA
LH   Metropolitan Washington Airport Authority – Ronald Reagan Airport, DC
LH   Portland International Airport, OR
LH   Seattle Tacoma International Airport, WA
MH   Columbus Regional Airport Authority, OH
MH   Jacksonville Aviation Authority, FL
SH   Sarasota-Bradenton Airport Authority, FL

LH = large hub; MH = medium hub; SH = small hub

Literature Review

Incident reporting is a broad topic. Consequently, the literature review yielded a wealth of
information to be analyzed. The principal investigator reviewed literature using standard
web-based search engines such as Google, Google Scholar, ProQuest, Transportation Research
Information Services, and other academic and professional databases. Search terms used included
incident reporting, leading and lagging indicators, metrics, safety and key performance indicators,
near miss, enterprise risk management, dashboard, scorecard, organizational culture, and safety
management system.
Literature citations are included throughout the report in their appropriate and meaningful
chapters and sections. Chapter 7 contains focused information related to research and resources
on indicators and metrics.

Report Organization
Chapter 1 (this chapter) introduces the reader to incident occurrence and reporting at airports,
providing an overview, an explanation of benefits, the intended audience, the methodology, and
a description of how the literature review was undertaken.
Chapter 2 seeks to explain the importance of, and the difficulty in, defining an incident, a near
miss, and other terms.
Chapter 3 further lays a foundation for understanding incident reporting in relation to risk
management and organizational performance. Incident management is explained through
exposure to risks and hazards. Also explained is the role of incident reporting in helping an
ERM system make the connection between individual departmental or program initiatives and
the accomplishment of strategic organizational goals.
Chapter 4 explains how incident reporting can be viewed as a tool that is used to look nega-
tively for potential problems within an organization, or positively to catch emerging risks and
to ensure work efforts stay on track within boundaries or margins for safety and performance.
Chapter 5 introduces safety management systems and the need for an organizational culture
to exist that will promote the successful implementation of an incident reporting system.
Chapter 6 introduces the relationship between incident reporting and various performance
indicators and metrics, with emphasis on leading and lagging indicators. The distinction between
safety and key performance indicators is explained and applied to airports. Outlined are some
of the challenges an airport organization may encounter when selecting which incident factors
are important to monitor.
Chapter 7 summarizes previous research related to incident reporting systems and their
application to airports. Included are sources of information and resources related to indicators
and other metrics that can be used.
Chapter 8 presents practical information from the study and the literature search on how
incident data are collected and tracked.
Chapter 9 provides summaries and case examples of airports selected for interview as part of
the study.
Chapter 10 provides conclusions and suggestions for further research.


CHAPTER 2

Terms and Definitions

Prevalent among research studies found in the literature was the lack of agreement on the
terms used for, and related to, incident reporting. In most cases, the term is safety-centric in
its meaning. However, a more recent and generalized view of the term can be applied to other
business and enterprise aspects as well. As defined in this synthesis, an incident can be anything
out of the ordinary.
Different meanings for the term “incident” were found to depend on the context in which it
was used, or on the historical use of the term within an organization or the system. For example,
the term "incident" is often used in the same context as the term "accident." Or the definition
specifically applies to an aircraft incident, a security incident, an emergency incident, a ground
vehicle incident, a wildlife incident, an information technology (IT) incident, or a similar topic,
and has little applicability to areas of the organization outside that definition.

Confusion Over the Term “Incident”


The confusion that exists about what constitutes an accident or incident can be observed
in the use of terms within regulations required of airport operators. For instance, the airport
security requirement of the TSA, 49 CFR 1542.307, is titled Incident Management and requires an
evaluation of incident threats. The incidents referred to are those associated with bomb threats,
threats of sabotage, aircraft piracy, and other unlawful interference to civil aviation operations.
The confusion arises when one interprets an incident to be the same as one of the identified
illegal acts, rather than identifying an incident as the threat or a precursor event that leads to
the illegal act.
The same can be said about the airport emergency plan required under 14 CFR 139.325.
The regulation requires that an emergency plan be developed that addresses aircraft incidents
and accidents; bomb incidents; structural fires; fires at fuel farms or fuel storage areas; natural
disasters; hazardous materials/dangerous goods incidents; sabotage, hijack incidents, and other
unlawful interference with operations; failure of power for movement area lighting; and water
rescue situations. In each case, the term “incident” can be interpreted to be the major adverse
event itself, rather than the precursor incidents leading up to the major event.
The incorporation of the National Incident Management System (NIMS) into airport
operations requires the establishment of an Incident Command System (ICS). The ICS is
activated whenever a major accident occurs. In this case, an incident is synonymous with an
accident.
Given how the term “incident” is used in the context of regulations, it is easy for one to think
of an incident as a major adverse event rather than a non-harmful near miss, minor occurrence,
other situation that precedes the major event or, as this synthesis supports, as something out
of the ordinary affecting any aspect of airport performance, whether positively or negatively.
Due to different uses of the term, it can be difficult to make comparisons of incident data
between airports or other industries. The research suggests that an airport clearly define terms
used in its safety and incident reporting programs to better ensure identification, investigation,
and analysis. Another reason for defining terms is so an airport can better benchmark its data
against other airports. Illustrative of this is the following taken from the FAA Advisory Circular
(AC) on SMS: “The term hazard is often misused, so it is important that the airport’s training
program for those individuals conducting the 5-step process . . . clearly defines and provides
examples of hazards" (AC 150/5200-37A 2016, p. 21). Better standardization of terms within
the airport industry is one suggestion of this synthesis.
To help airport operators understand the variability of terms, Appendix B contains a lexicon
of terms and definitions that an airport organization can consider for use. For example, the terms
“safety,” “risk,” and “incident” have a number of different definitions. Within this synthesis,
there are preferred definitions used, and they are stated in their respective sections.

Defining Incident
Perspectives of what constitutes an acceptable level of safety can differ significantly among
individuals and airport organizations. So, too, the word “incident” can mean different things
to different people, depending on their frame of reference and experience. This is supported
in the research. Other terms such as “human factors,” “safety management,” “accident,” or
“safety culture” also have different meanings, definitions, and usages within the practitioner and
research communities (Reiman and Rollenhagen 2011). For this reason, the literature suggests
that it is important for each organization to define what an incident is. Doing so helps reduce
the subjectivity of interpretation by employees and allows for better benchmarking across
airports.
Examples of the various terms used in the literature to help describe or define an incident are
shown in Table 2.
The various definitions of the term “incident” found in the research literature can be grouped
by four different meanings:
1. Interchangeable with the term "accident," denoting that a major or serious outcome has
occurred.

Table 2.   Synonyms for the term "incident" used in the literature.

• Accident pathogen
• Accident precursor
• Adverse event
• Alarm
• Circumstance
• Close call
• Close shave
• Contributing factor
• Disturbance
• Event
• Injury-free event
• Latent error
• Mishap
• Narrow escape
• Near accident
• Near collision
• Near hit
• Near loss
• Near miss
• Occurrence
• Potential accident
• System deficiency
• Trigger
• Undesired circumstance
• Warning signs

Source: Consolidated from the literature search.

Copyright National Academy of Sciences. All rights reserved.


2. Implies harm—that an injury, damage, or loss has occurred, but it is minor and distinct
from, and does not rise to the level of, an accident.
3. Implies no harm—that an event resulted in no injury, damage, or loss, but could have
under different circumstances. It is a situation that could have led to loss of, or disruption to,
an organization’s operations, services, or functions.
4. A positive behavior, situation, event, or communication that indicates or reinforces safety
or the proper pursuit of risk processes and organizational goals.
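To make the distinctions above concrete, the four meanings could be captured as a field in an incident log. The following Python sketch is illustrative only; the class and field names are hypothetical and not drawn from the synthesis.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class IncidentMeaning(Enum):
    """The four meanings of "incident" identified in the literature."""
    SERIOUS_OUTCOME = 1   # used interchangeably with "accident"
    MINOR_HARM = 2        # injury, damage, or loss below the accident threshold
    NO_HARM = 3           # near miss; could have caused loss under other circumstances
    POSITIVE = 4          # event that reinforces safe behavior or organizational goals


@dataclass
class IncidentRecord:
    """A minimal incident log entry; field names are hypothetical."""
    description: str
    meaning: IncidentMeaning
    reported_at: datetime = field(default_factory=datetime.now)


report = IncidentRecord(
    description="Baggage cart stopped short of a fueling vehicle",
    meaning=IncidentMeaning.NO_HARM,
)
print(report.meaning.name)  # NO_HARM
```

Tagging each report with a meaning at intake is one way to keep the four interpretations separate when the data are later analyzed or benchmarked.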
The first meaning is often found in daily communications when people attempt to describe
an adverse or negative event. They do not clearly distinguish between an accident and an incident.
Emergency response and security programs may use this interpretation, where an incident
triggers a response no matter the consequence or degree of outcome.
An example of the second meaning is found in the FAA glossary (FAA Order 8040.4B 2017).
The FAA defines an incident as an occurrence other than an accident that affects or could affect
the safety of operations. The FAA uses this definition for regulatory and legal purposes. Similar
distinction may be found in police or environmental reporting.
The third meaning is a more common understanding of the word “incident,” in which a
situation occurred that could have led to loss of, or disruption to, an organization’s operations,
services, or functions, but it was prevented in time from doing so. These situations are of most
interest in this synthesis.
The fourth meaning reflects a positive interpretation of the term “incident” and is only
recently gaining recognition as a measure of an organization’s safety culture. It counters the
general negative view of an incident as being a bad situation or having a negative outcome by
recognizing events that positively reinforce low risk or safe behaviors. The argument is made that
incidents and safety should refer to not just the absence of something or a lack of deficiencies,
but also to the presence of something that indicates safe activity is occurring (Hollnagel 2008,
p. 75). Hollnagel further suggests that if safety is viewed as the absence of unacceptable risk, it
results in looking for measurements and indicators such as adverse events and work loss days.
But if safety is seen as something positive or the presence of something, then measurements and
indicators need to reflect the presence of safety, such as training and meetings conducted and
the number of observations or incidents reported.
This synthesis is primarily focused on the definition of incident as described in meanings
three and four stated previously, though it reports on practices that may include meanings
one and two. The definition of incident that best describes its use in this synthesis is
"something that is out of the ordinary." This basic definition allows for the capturing of
data that can be positive, neutral, and/or negative, and can be centric to any topic of
choice: safety, business, risk, information, security, IT, and so forth. The word "incident"
then takes on a plain data connotation that once analyzed can be used as a window for viewing
the operation of an airport, whether it is safety, business, or any other element being
analyzed. The challenge for airport organizations then becomes what data to collect and pay
attention to, given the numerous types of incidents that could be analyzed.
Survey responses [Questions 5.a. and 5.b.] that illustrate the differences in the definition of
incident are as follows:

• Any mishap, behavior, error, deviation, or action that could or has caused a hazard, injury,
or accident.

• An event with an adverse effect on an asset of the organization (person, property, environ-
ment, financial).
• No definition available.
• Anything occurring outside of standard.
• An occurrence which affects safety which may or may not involve damage or injury.
• An event of importance that involves uncertainty pertaining to the safety and security of
the airport, that caused or could have caused injury or property damage, and that provides
an opportunity for improvement in the way of preventative measures and/or exploiting
opportunities.
• An event requiring response.
• No formal definition, generally is any unusual occurrence with a potentially significant
adverse impact to safety, finances, or reputation of the authority.
• An occurrence that may lead to a hazard and/or loss.

In trying to define one word, other words are incorporated that also require a definition to
arrive at a common understanding. For instance, if the definition of an incident included words
or terms such as “mishap,” “behavior,” “adverse event,” “error,” “deviation,” or “action,” questions
to ask include the following: Are those words and terms also understood? Is there agreement
on what is meant by safety and risk? What constitutes a deviation or shift in allowable business
performance?
For this reason, the literature review identifies the need to define each term and then to
train individuals so they know what the trigger points, behavior thresholds, hazards, and
other performance parameters are. If individuals are not properly trained, underreporting of
incidents is likely to occur. One airport interviewed indicated its definition of incident is
currently under review because of misinterpretation of what is meant by “adverse effect.” The
airport recognizes the negative connotation of the term and is seeking to make incident reporting
more encompassing of something out of the ordinary.

Defining Near Miss/Close Call


The definition of an incident, as used in this report, includes data collection associated
with near misses and/or close calls. The two terms are synonymous, and this report uses
near miss as the default term. The definitions found in the literature for near miss tend
to be safety-centric.
Similar to the term "incident," there is no single, agreed-on definition of a "near miss."
For this report, a near miss describes an event, observation, or situation that possesses
the potential for improving a system's safety and/or operability by reducing the risk of
upsets, some of which may eventually cause serious damage or disruption (Oktem et al. 2010).
This definition includes the concept that incidents affect an organiza-
tion’s overall capabilities and they have process safety management implications that often require
corrective action. Near-miss data reporting can provide insight into accident precursors, potential
major adverse conditions, and business disruptions.
A number of organizations worldwide support near-miss reporting. While different wordings
for a near-miss definition are used in various industries (see Appendix B), the Center for Chemi-
cal Process Safety (CCPS) indicates that no matter the definition, a near miss has basically three
essential elements (CCPS 2011):

1. An event occurs, or a potentially unsafe situation is discovered.
2. The event or unsafe situation had reasonable potential to escalate.
3. The potential escalation would have led to adverse impacts.
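As a rough illustration, the three CCPS elements could be applied as a simple screening check when triaging reports. This is a hypothetical sketch; the function name and inputs are assumptions, not a CCPS artifact.

```python
def is_near_miss(event_occurred: bool,
                 could_escalate: bool,
                 escalation_adverse: bool) -> bool:
    """Screen a report against the three CCPS elements of a near miss:

    1. An event occurred, or a potentially unsafe situation was discovered.
    2. It had reasonable potential to escalate.
    3. The escalation would have led to adverse impacts.
    """
    return event_occurred and could_escalate and escalation_adverse


# A vehicle crossed an active taxiway while an aircraft was on approach:
print(is_near_miss(True, True, True))    # True
# A routine observation with no plausible escalation path:
print(is_near_miss(True, False, False))  # False
```

A report failing the check is not discarded; it may still be logged under one of the other meanings of incident discussed earlier.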

Importance of Near-Miss Incident Reporting


The importance of including near-miss events in an incident reporting system is that they
can be interpreted as either lagging or leading indicators, depending on how the investigator
or safety manager measures and interprets the information (Inouye n.d., pp. 14–18). A near
miss is discovered after it occurs or after something breaks, and therefore is reactive and viewed
as a negative lagging indicator. However, if the occurrence is used to identify weaknesses in a
safety management system and used to improve organizational safety performance, then it can
be considered a positive leading indicator (Hinze et al. 2013). Viewing near misses as a positive
leading indicator, rather than a negative indicator, can improve perceptions of incident reporting
(Toellner 2001, p. 45).
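One hedged way to treat near misses as a leading indicator is to track the ratio of near-miss reports to actual loss events over time; a rising ratio can signal a strengthening reporting culture rather than a less safe operation. The metric and the figures below are illustrative assumptions, not values from the literature cited.

```python
def near_miss_ratio(near_miss_reports: int, loss_events: int) -> float:
    """Ratio of near-miss reports to actual loss events in a period.

    Read as a leading indicator: a rising ratio can reflect a healthier
    reporting culture rather than a less safe operation.
    """
    if loss_events == 0:
        # No losses in the period; report the raw near-miss count as the ratio.
        return float(near_miss_reports)
    return near_miss_reports / loss_events


# Hypothetical quarterly data: (near-miss reports, loss events)
quarterly = [(12, 3), (20, 3), (31, 2)]
print([round(near_miss_ratio(n, k), 1) for n, k in quarterly])  # [4.0, 6.7, 15.5]
```

The trend, not any single value, is what matters; a single quarter's ratio says little on its own.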
The question of whether a high number of near-miss or incident reports indicates an orga-
nization is safe or unsafe is not easy to answer. A high number of incident reports could
reflect an established reporting culture (described in Chapter 5).
This would be in contrast to an organization that does not have an active reporting culture and
receives no incident reports. Of importance is that the number of incidents reported is not
a measure of safety, but rather an opportunity for an organization to learn and improve its
function and safety.
Near misses often go unreported because the risks are generally perceived to be low or incon-
sequential, or because few problems are perceived to exist in the organization. These perceptions
can result in a general attitude of complacency or false sense of comfort. An incident reporting
system can help an airport to counter complacency by raising awareness of hazards and risks,
and provide a basis for continuous improvement in processes and learning. To enhance safety
and operations, organizations can structure their policies, procedures, operations, training, and
other activities in ways to prevent accidents from occurring. Common methods used are to
establish what are known as protections, margins of safety, triggers, barriers, and/or defenses.
These parameters are used to separate employees, users, and the organization from
safety-centric harm, damage, and loss, or to provide indicators of drift from an
enterprise-centric goal or risk condition.
In addition to the perceived low risks mentioned previously, the literature search found a
number of other reasons for why near misses often go unreported:
• Fear of management or peer disapproval.
• Not wanting the incident on their work records.
• Dislike for the red tape involved.
• Difficulty in filing a report.
• Not wanting to lose time from the job assignment.
• Reluctance to spoil the department’s safety record.
• Not wanting to be the subject of an incident investigation.
• Lack of incentive or employee initiative.
• Reporting incidents is perceived as pointless.
• Fear of punishment.
• Fear of criminal or civil prosecution.
• Fear of embarrassment, loss of business, or loss of reputation if the near miss is otherwise
discovered or made public through Freedom of Information laws.

• Lack of awareness that a near miss actually occurred.
• Lack of good judgment.

The successful implementation of an incident reporting system will need to overcome
these barriers. Some of the means found in the literature for identifying near misses and
improving incident reporting include the following:
• Ride-a-longs.
• Safety blitzes.
• Customer testimonials.
• Call-ins by other individuals.
• Use of video technology.
• Training.
Examples of established incident or near-miss reporting programs are the Accident Sequence
Precursors Program overseen by the U.S. Nuclear Regulatory Commission; the Aviation Safety
Reporting System (ASRS) operated by NASA; the Safe Outer Continental Shelf program managed
by the U.S. Department of the Interior’s Bureau of Safety and Environmental Enforcement;
the U.S. Naval Facilities Engineering Command’s Contractor Incident Report System; and the
Close Call System administered by the United Kingdom Rail Safety and Standards Board.

Investigating Incidents
An indicator of a safe airport is not so much the number of reported incidents as how
incidents are investigated and addressed within the organization. When asked who in the
airport organization reviews or assesses incident reports, the responses varied widely
[Question 4.d.]. Who reviews the reports depended on the nature and immediacy of the
incident. Seven airports had the initial review accomplished by one individual, while eight
airports indicated review by a small group of two to four individuals [Question 4.c.]. One
airport indicated that a single person reviews the reports but the supervisor involved
conducts the investigation. Seven of the airports investigated incident reports within
24 hours [Question 4.e.]. One airport in the survey reported investigating an incident
within 1 hour, and most of the remaining airports stated that reviews were completed within
1 to 3 days.

CHAPTER 3

Incidents Applied to Safety and Risk

The definition of incident used for this synthesis has two primary meanings. The more
common understanding of an incident is safety-centric. It refers to an event, activity, occurrence,
opportunity, or similar happening that could or does relate to the safety of personnel, equipment,
or operations at an airport. Examples would be issues associated with airport certification
compliance, worker safety under OSHA regulations, emergency response by police or security,
and hazards affecting the health and welfare of the public. An SMS gives insight into this
meaning of an incident.
The other meaning of incident is enterprise-centric and relates to business and organizational
performance and risk. It is an event, occurrence, activity, opportunity, or similar happening that
relates to a particular organizational goal or performance pathway. While safety may ultimately be
impacted in this meaning, examples are more related to non-safety specifics of financial stability,
airport reputation and goodwill, development of business activity, and achieving planned goals.
An ERM system gives insight into this meaning of an incident.
An ERM recognizes that airport organizations must manage a broad array of strategic and
operational risks facing an ever-changing aviation industry, including growing financial con-
straints, increasing regulatory requirements, and general business concerns. This is in addition
to traditional safety concerns: health and operations, identification and mitigation of hazards,
and preparing for natural disasters and other emergencies. The monitoring of the broad array
of risks is achieved through an incident reporting system.
This synthesis addresses both safety-centric and enterprise-centric incidents, as described
in Figure 1. An understanding of the distinction between the two is necessary for reviewing
this synthesis report, as the methodology, indicators, and metrics are grouped and reported
separately for each. For instance, when reading about SPIs, reference is to metrics associated with
the safety-centric aspects of an incident.
While SPIs provide a picture of organizational safety, it is equally important to monitor
management processes. When reading about KPIs, reference is to enterprise-centric metrics
associated with organizational risk and performance. Both connotations can range on a continuum that
reflects a positive to negative insight on overall airport safety and performance.
There is a tendency to think of an incident reporting system as applying only to the safety-centric
side of Figure 1, instead of also applying to the organizational performance perspective. To better
understand both aspects, the following paragraphs describe how risk, risk management, and
enterprise risk management incorporate the need for, and have a basis in, incident reporting.
The basic similarity is that the collection of data is necessary for evaluation, corrective action,
and organizational learning.

Figure 1.   Description of incident data reporting used in the synthesis.

Risk Management
In its simplest form, risk management involves understanding, analyzing, and addressing risk
to make sure the airport achieves its objectives. Like many other terms used in this synthesis, risk
management has several variations of the basic definition, depending on how it is applied within
an organization. For instance, according to the FAA, risk management is a formalized way of
dealing with hazards and is the logical process of weighing the potential costs of risks against the
possible benefits of allowing those risks to stand uncontrolled (FAA-H-8083-2 2009, p. G4). The
FAA definition reflects an operational approach to risk. Other definitions found in Appendix B
reflect a focus on strategic, financial, insurance, environmental, or other types of risks.
Despite the different models of risk management, there is a focus on three core tasks:
(1) determine the context or system to be evaluated, (2) perform risk assessment, and (3) treat
or mitigate the risk to limit loss or disruption. Within the risk assessment task, specific risk
identification, risk analysis, and risk evaluation are performed. Figure 2 illustrates the overall
risk management process and the risk assessment steps typically found in a risk management
program or an SMS. Incident reporting is one of the basic components of a safety risk assessment
(SRA). Previously known as SRA, the term has recently been shortened to safety assessment (SA)
in FAA SMS documents.
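The three core tasks described above can be sketched as a small pipeline: identify risks, analyze them by scoring likelihood against consequence, evaluate by ranking, and treat those that exceed a tolerance. The risk names, 1-to-5 scales, and tolerance value below are hypothetical, not taken from the synthesis or any FAA document.

```python
def assess(risks):
    """Risk assessment: identify (input), analyze (score likelihood x
    consequence), and evaluate (rank highest first)."""
    analyzed = [(name, likelihood * consequence)
                for name, likelihood, consequence in risks]
    return sorted(analyzed, key=lambda r: r[1], reverse=True)


def treat(assessed, tolerance=9):
    """Treatment: flag risks whose score exceeds the organization's tolerance."""
    return [name for name, score in assessed if score > tolerance]


# Context: airside vehicle operations, with hypothetical 1-5 scores
# as (name, likelihood, consequence)
risks = [("vehicle/aircraft near miss", 3, 4),
         ("FOD on taxiway", 4, 2),
         ("fuel spill", 2, 5)]

print(treat(assess(risks)))  # ['vehicle/aircraft near miss', 'fuel spill']
```

In practice the likelihood and consequence inputs would come from incident reports and investigations, which is where incident data enters the safety assessment.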
To understand the role incident data collection and reporting plays in overall risk management,
an understanding of the terms “risk” and “hazards” is necessary.

Defining Risk
The literature search resulted in several different definitions for the term "risk." In its
simplest form, risk is the potential for an unwanted outcome. The U.S. Department of Homeland
Security (DHS) expands on the definition and describes it as the potential for an unwanted
outcome resulting from an incident, event, or occurrence, as determined by its likelihood and
the associated consequences (DHS 2010, p. 27). The U.S. Government Accountability Office
(GAO) defines risk as the effect of uncertainty on objectives with the potential for either a
negative or positive outcome or opportunity (GAO-17-63 2016, p. 1).
ACRP Report 74: Application of Enterprise Risk Management at Airports defines risks as
uncertain future events that may influence an organization’s ability to achieve its objectives
(Marsh Risk Consulting 2012). The authors of the report increase understanding of the different
aspects of risk by stating it is usually applied in one of three distinct applications:

Figure 2.   The ISO 31000 risk management process. Source: Copyright International
Organization for Standardization (ISO). This material is reproduced from ISO 31000:2009
with permission of the American National Standards Institute (ANSI) on behalf of the
International Organization for Standardization. All rights reserved. Used with permission.

• Risk as exposure.
• Risk as uncertainty.
• Risk as opportunity.

Risk as exposure reflects the more common use of the term. It carries negative connotations
in that something bad will happen, such as harm, injury, financial loss, lawsuits, or threats to
meeting objectives. Traditional airport safety-centric oversight tends to focus on hazards
exposure and the prevention of accidents. Incident reporting is used to identify exposure
to risk.
Risk as uncertainty considers both positive and negative outcomes and views it as the degree of
variance between anticipated outcomes and actual results. This reflects the traditional financial
and insurance aspects of risk and incident management.
The last, risk as opportunity, implies that a relationship exists between risk and return. The
greater the risk, the greater the potential for return, but also the greater the potential for loss.
This observation of risk is found in the strategic planning and enterprise-centric opportunity
aspects of managing an airport.
These categories view risk as the degree of variance between anticipated outcomes and actual
results, both positive and negative. The concept of an outcome as positive or negative and
having a variance from an intended path or objective helps explain how incident reporting
applies to ERM.
Other variations of the term “risk” stem from how it is used or applied in different areas of a
business. A common lexicon will categorize risk into financial, strategic, business, operational,
security, or similar types of risk. The categorizations help distinguish the risk types referenced
and illustrate how risk manifests itself at different levels of an organization, from the strategic,
to the operational, to the tactical, and to the individual levels. The categorizations also display

how within an airport organization risk can be isolated into functional, programmatic, or
organizational silos.

Hazards
Hazards are associated with risks. As with other definitions, context plays a role in how a
hazard is defined. The FAA’s Risk Management Handbook describes a hazard as “a present
condition, event, object, or circumstance that could lead to or contribute to an unplanned or
undesired event such as an accident” (FAA-H-8083-2 2009, p. G3). A hazard could also be
construed to lead to an unplanned or undesired event, such as a loss of reputation or the failed
attainment of a strategic goal. Not referenced in the definitions of hazard is the evolving notion
of hazard as an opportunity for the organization to learn how to improve. That is where incident
reporting and an ERM are of value.

Enterprise Risk Management


The objective of ERM is to understand an organization’s portfolio of top risk exposures,
which could affect the organization’s success in meeting its goal (GAO-17-63 2016). ERM is
designed as a tool to support the achievement of an organization’s mission, goals, and objectives,
and to help airport organizations improve their decision-making capabilities. It accomplishes
that goal by allowing management to view risks across the whole organization. For it to do so,
the organization needs to capture data and monitor variations or incidents that could lead to
new directions or to misdirection. That is the role of incident reporting. ERM is also intended
to help integrate the functional, programmatic, or organizational silos often found at medium
to large hub airports.
Traditionally, risk management considerations tend to fall within a single unit or area of an
organization, with little sharing of information among other units. This is known as the silo
effect. However, activities of one business unit, functional area, or program can and do affect the
activities of another. For instance, the departments of finance, insurance, planning, engineering,
maintenance, operations, marketing, and ground transportation may all have their own unique
risk management attributes. Those attributes may not be shared or compatible with the attributes
of other departments, yet a dependency and interdependency can exist between them.
The field of ERM is still evolving from its origins around 1985, when it was developed
to combat fraud in the financial and capital markets. Only in the last decade
has it entered the lexicon of today’s safety and risk managers at airports. At this time, ERM is
utilized at relatively few airports. The authors of ACRP Synthesis 30: Airport Insurance Coverage
and Risk Management Practices, published in 2011, reported the following (Rakich et al. 2011,
p. 16): “None of the interviewees defined ‘enterprise risk management.’ One interviewee said,
‘I have yet to meet anyone who actually practices enterprise risk management,’ and that it was
not easy to do so because uninsurable risks were often addressed by line managers, not by the
risk manager.”
A year later in 2012, ACRP Report 74 was published (Marsh Risk Consulting 2012). Only three
airports were identified in ACRP Report 74 as practicing ERM. For this synthesis report, two of
the same airports were involved and five additional airports were found to be practicing ERM
[Question 1.a.]. Interviews with the airports indicated different levels of implementation, with
none fully developed at this time. With ERM’s slow integration into airport organizations,
it became apparent during interviews that the progression of an ERM program was due in
part to a champion existing within the organization who was versed in the requirements and
metrics of ERM.

Similar to the numerous definitions for risk, ERM has several definitions (see Appendix B).
ACRP Report 74 describes ERM as a holistic approach and process to identify, prioritize,
mitigate, manage, and monitor current and emerging risks in an integrated way across the
breadth of the enterprise (Marsh Risk Consulting 2012, p. 53). In July 2016, the U.S. Office of
Management and Budget (OMB) issued an update to Circular A-123 requiring federal agencies
to implement ERM to better ensure their managers are effectively managing risks that could
affect the achievement of agency strategic objectives (OMB Circular A-123 2016). OMB defines
ERM as a discipline that addresses the full spectrum of an organization’s risks, including
challenges and opportunities, and integrates them into an enterprise-wide, strategically aligned
portfolio view (OMB Circular A-123 2016, p. 9). The OMB circular requirements filter down to
airports through the grant acceptance and assurance processes.
An ERM is intended to overcome the limitations of the departmental silo effect by providing
a top-down view of risk interactions throughout the organization. For instance, changes in
purchasing policies to limit certain negative risk exposure could increase an airfield operations
department’s risk exposure and jeopardize compliance with airport certification requirements.
An example would be for a purchasing department to minimize the number of sole-source
providers by requiring at least three bid proposals for an intended procurement. If the policy
affects the length of time and cost for airfield operations to acquire aircraft fire fighting foam in
a timely and cost-effective manner, emergency response capability and compliance with 14 CFR
Part 139 could be jeopardized.
An ERM depends on evaluating various drivers (or indicators) that affect the organization.
The drivers can be external, internal, or from the various enterprises. Identifying risks and then
balancing the risk controls with a degree of flexibility are important to an organization’s overall
ERM. Table 3 lists sample risk categories and related drivers, or KPIs. An incident reporting
system would involve data collection for identifying performance trends or possible regulatory
compliance for each of the key risks.

Table 3.   Enterprise risk management: sample key risk categories.

Source: Yip and Essary 2010. Used with permission.
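As a hedged sketch of how an incident reporting system might monitor key risk drivers such as those categorized in Table 3, observed values can be compared against thresholds to flag drift. The indicator names and limits below are invented for illustration and are not taken from Table 3.

```python
# Hypothetical key risk indicators as name -> (threshold, observed count),
# where the observed counts would be supplied by the incident reporting system.
risk_indicators = {
    "runway incursions per quarter": (2, 3),
    "wildlife strikes per month": (5, 2),
    "Part 139 discrepancies per inspection": (3, 4),
}


def flag_drift(indicators):
    """Return the indicators whose observed count exceeds its threshold."""
    return [name for name, (limit, observed) in indicators.items()
            if observed > limit]


print(flag_drift(risk_indicators))
# ['runway incursions per quarter', 'Part 139 discrepancies per inspection']
```

Flagged indicators would then feed the risk assessment and mitigation steps rather than trigger action on their own.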

The ERM process is described in ACRP Report 74 as “a continuous process that involves the
identification and prioritization of risks and opportunities and the implementation of actions to
mitigate top risks and capture opportunities” (Marsh Risk Consulting 2012, p. 24.) An ERM is
based on the concept of continuous improvement and reflects the quality control process of the
Plan-Do-Check-Act (PDCA) cycle. The outer circle of Figure 3 reflects a basic decision-making
process. Incident reporting is part and parcel of basic decision-making, as it is a key element of
the risk identification phase. It is one of the tools used to identify hazards, risk, and opportunities
at an airport. Incident data collected in the risk identification stage are then used in the safety
assurance phase as a check on the risk assessment mitigation strategies to ensure the intended
corrective results have been achieved.
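The "Check" step of the PDCA cycle described above can be illustrated with incident data: comparing counts before and after a mitigation suggests whether to standardize the change or to revisit the risk assessment. The function name, messages, and counts are hypothetical.

```python
def check_mitigation(baseline: int, after_mitigation: int) -> str:
    """PDCA 'Check': compare incident counts before and after a mitigation,
    and return the suggested 'Act' step."""
    if after_mitigation < baseline:
        return "act: standardize the mitigation"
    return "act: revisit the risk assessment"


# Hypothetical ramp-incident counts for the periods before and after a change:
print(check_mitigation(baseline=14, after_mitigation=5))
print(check_mitigation(baseline=14, after_mitigation=16))
```

A real check would normalize for exposure (operations counts, seasonality) before comparing raw totals.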
For organizations that use an ERM program, the use of incident reporting is an important
element, as it enhances organizational decision-making. As described in both ACRP Synthesis 30
(Rakich et al. 2011) and ACRP Report 74 (Marsh Risk Consulting 2012), ERM conceptually requires
airport management to proactively anticipate the significant risks and opportunities their airport faces
and to develop response and resource plans in advance. An incident reporting system is a primary
tool for fulfilling the risk assessment process for proactively anticipating risks and opportunities.
A critical element of incident reporting is to identify risks. The term “reporting” also signifies
the importance of communication to others as a means for continuous improvement. For ERM,
communication and consultation are important, as it is necessary to monitor an organization at
different levels to ensure incidents and risks do not become silo-constrained.
Incident reporting under an ERM is designed to recognize change in the airport’s risk
profile. A risk profile is basically an overall picture of issues that may introduce risk and change
at any level of the organization at any time. A change can be negative or positive, be qualitative
or quantitative, and have certain or uncertain consequences. With the sharing of incident and
risk information, trust within and outside of the organization is developed. Figure 4 illustrates
the numerous stakeholders that can be affected by an incident or change in the organization.

[Figure depicts the ERM cycle (identify all risks; assess, analyze, and evaluate risk;
develop risk responses; review control effectiveness; monitor and report; aggregate results
and integrate) surrounding the airport's goals, objectives, and strategies, with incident
reporting supporting both data collection and the use of performance indicators.]

Figure 3.   Importance of incident reporting system in an enterprise risk management
program. Source: Marsh Risk Consulting 2012. Used with permission.

Figure 4.   Stakeholders affected by an incident or accident in the organization.
Source: Freibott 2013. Used with permission.

To properly implement an ERM at an airport, the literature indicates the following require-
ments are necessary:
• Encouragement and promotion of risk, hazard, and incident reporting.
• Assessment of risks.
• Need for management, staff, and employees to understand their roles in ERM.

In understanding one’s role in ERM, the literature suggests that training programs need to
do the following:
• Support the reporting of risks, hazards, and incidents.
• Provide a culture of open feedback.
• Develop a risk-aware culture that enables all persons to speak up and then be listened to by
decision-makers.
• Encourage the sharing of risks and incident information.

Scorecards and Dashboards


To help manage the information provided in an ERM, scorecards, dashboards, and indicators
are common tools for understanding the parameters chosen by the airport to evaluate and
monitor incidents. At a glance, each conveys important performance information.
Dashboards and scorecards give airport organizations the ability to visualize and track key incident management, safety, or performance measures. Dashboards can be adapted to all
levels of the organization to display strategic, tactical, and day-to-day operations. Scorecards are
usually associated with more strategic initiatives and are used primarily at the executive levels.
Examples of dashboards are provided in several case examples found in Chapter 9. Available on
the internet are a number of spreadsheet or other program templates for creating dashboards.
In the survey, seven airports stated they have a dashboard or other benchmark method to
display their incident data [Question 4.g.]: Atlanta, Dallas-Fort Worth, Houston, Portland,
Seattle, Columbus, and Sarasota (see Chapter 9).
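The synthesis does not describe any surveyed airport's dashboard internals; purely as an illustrative sketch, the aggregation behind a simple incident dashboard can be expressed in a few lines of standard-library Python. The record layout and incident categories below are hypothetical, not drawn from any airport's system.

```python
from collections import Counter
from datetime import date

# Hypothetical incident records: (date, category) pairs such as a
# dashboard back-end might receive from an incident reporting system.
incidents = [
    (date(2019, 1, 14), "wildlife strike"),
    (date(2019, 1, 30), "vehicle deviation"),
    (date(2019, 2, 2),  "wildlife strike"),
    (date(2019, 2, 19), "slip/trip/fall"),
    (date(2019, 2, 27), "wildlife strike"),
]

def dashboard_summary(records):
    """Aggregate raw incident records into the counts a dashboard displays."""
    by_category = Counter(cat for _, cat in records)
    by_month = Counter(d.strftime("%Y-%m") for d, _ in records)
    return {"total": len(records),
            "by_category": dict(by_category),
            "by_month": dict(by_month)}

summary = dashboard_summary(incidents)
print(summary["total"])                           # 5
print(summary["by_category"]["wildlife strike"])  # 3
```

A spreadsheet pivot table performs the same aggregation; the point is only that dashboard tiles are thin views over counts grouped by category and period.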

CHAPTER 4

Breadth and Depth of Incident Reporting

Incidents at airports occur every day, but most are not recognized as being important or
they do not rise to a level that results in their being recorded or documented. For those airport
organizations that do collect data on incidents, comparison of the data with other airports is not
yet a common practice, as variations exist in how airport operators may assess or classify the
importance of incident data. The following section provides an overview of the importance of
incident reporting.

Incident Reporting as a Tool


Incident reporting can be viewed negatively, as a tool for uncovering potential problems within an organization, or positively, as a way to catch emerging risks and to ensure work efforts stay on track within boundaries or margins of safety and performance. At a high level, an incident reporting system can be described as being composed of three phases: data collection, data analysis, and subsequent learning (Figure 5).
There is a tendency to think of an incident reporting system as applying only to the detec-
tion portion of the model presented in Figure 5. A further tendency is to think of incidents only
from the safety-centric perspective. This synthesis identifies an enterprise-centric perspective of
organization risk and performance as being just as important. This makes the learning portion of the model important: incident data are analyzed and reported back so that improvements can be made to the organization, rather than just to prevent a future incident. Organizational learning is a necessary activity for enhancing organizational
effectiveness.
One purpose for incident reporting is to identify hazards and risks that need to be examined
closely and to determine the need for further investigation and analysis. From a safety-centric
standpoint, an incident report helps trigger an inquiry that will improve the safety, health, and
performance of an organization. Incident reports do not necessarily need to have much detail.
The initial goal is to get people to report something out of the ordinary and to obtain objective
and neutral information. Investigation and analyses will help identify the value of the data.
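The point that initial reports need little detail can be made concrete with a minimal sketch. The record below carries only objective facts (when, where, what was observed), deliberately leaving severity and root cause unset for the later investigation and analysis phases. The field names are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class IncidentReport:
    """A deliberately minimal initial report: objective facts only.

    Severity, cause, and corrective action are left unset here; they
    belong to the investigation and analysis phases, not to the reporter.
    """
    reported_at: datetime
    location: str
    description: str
    reporter: Optional[str] = None    # None permits anonymous reporting
    severity: Optional[str] = None    # assigned during analysis
    root_cause: Optional[str] = None  # assigned during investigation

# A report of "something out of the ordinary" with no analysis attached.
report = IncidentReport(
    reported_at=datetime(2019, 3, 5, 14, 20),
    location="Taxiway B near gate 4",
    description="Baggage cart left unattended with engine running",
)
print(report.severity is None)  # True: classification happens later
```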
The breadth and depth of an incident reporting system are best determined by each individual airport. Some airports may take the position that the primary objective is to monitor, disseminate, and record for analysis only critical or potentially critical safety occurrences. Other airports extend incident reporting to the collection and monitoring of the normal flow of day-to-day defects or incidents. Still others may identify the need to collect data related to variances in customer service or business processes. In any case, the reporting system requires clear


Figure 5.   High-level overview of an incident reporting system. Source: Hewitt 2011. Used with permission.

definition and guidelines on what is being sought so employees know what data need to be reported and collected.
The basic components and flow chart of an incident reporting system are similar to that of the
safety reporting system illustrated in ACRP Report 131: A Guidebook for Safety Risk Management
for Airports (Neubauer et al. 2015) and revised in Figure 6.
In the survey, when asked if there was “a formal Hazard Incident & Risk Mitigation (HIRM),
Incident Management System (IMS), or similar program that collects SPI, KPI, hazard or
incident data,” seven of the 11 airports responded affirmatively [Question 1.c.]. However,
responses to other questions on the survey established that the reporting systems are basic and
not well developed at this point. For instance, mandatory reporting was established for eight
airports. Interviews indicated the mandatory reporting generally applied to 14 CFR Part 139,

Figure 6.   Sample incident reporting flow chart for a large airport. Note: SPI = safety performance indicator; KPI = key performance indicator. Source: Adapted from Neubauer et al. 2015. Used with permission.


OSHA, and environmental reporting. A formal voluntary reporting system for near misses or
similar incidents was evident at only six of the airports.

Incident Reporting and Threat and Error Management


Within the air traffic and air carrier sectors of aviation, the term "threat and error management" (TEM) is used in the same way that incident reporting is commonly used elsewhere. TEM has its origins in the human factors (HF) movement and how HF is tied to organizational and system performance.
Threats are events or conditions that exist in an operation, similar to the holes and gaps
identified in popular accident causation models attributed to James Reason (Reason 1997) and
Christopher Hart (Hart 2004). Threats can be viewed as being comparable to hazards or incidents.
Errors refer to the human component and are the actions or inactions of personnel at any level
of the organization. These behaviors result in deviations or variations from an intended path or
an expected outcome, similar to the concept of ERM.
A third component of TEM is undesired states, which are the result of poor management of
the threats and errors. Poor management can lead to situations where the margins of safety are
reduced and a major incident or accident is about to happen. Or, poor management can lead
to low organizational performance outcomes. In these cases, poor management refers to both
individual and organizational management of the threats and errors. The two main resolutions of TEM are threat and error reduction and threat and error containment. Reporting threats and errors is fundamental to future error and incident prevention.
A basic tenet of TEM important for incident reporting is that of non-punitive reporting of
errors. TEM practices evolved from the notion of a reporting culture, as espoused by Reason
(Reason 1997). A reporting culture is an important and necessary component of a successful
incident reporting system. Reporting culture is discussed further in Chapter 5.


CHAPTER 5

Encouraging Incident Reporting

One challenge of implementing an incident reporting system is gaining the trust and confidence of workers, as they are on the front line of incident occurrence. Concerns are expressed in the literature as to whether employees, users, or the public will actually report themselves as committing safe or unsafe acts, or report other incidents that can be precursors to good safety practice, accidents, or violations. Both voluntary and mandatory incident reporting processes are affected by this self-reporting concern if the system is not structured to address the concerns of those who submit reports. Hartsfield-Jackson Atlanta International Airport has a confidential reporting system in place (see Chapter 9).
Two common reasons cited in the literature for why incidents are not reported are a failure to understand the importance of reporting and a failure to appreciate the potential harm that could result from not reporting.
But there are other barriers to the reporting of incidents and near misses. Bridges (2000) identified
nine barriers to overcome when establishing an incident reporting program and grouped them
into four topic areas:
1. Potential recriminations for reporting
a.  Fear of disciplinary action
b.  Fear of peer teasing
c.  Fear of an investigation involving the concern
2. Motivational issues
a.  Lack of incentive
b.  Management discouraging near-miss reports
3. Lack of management commitment
a.  Sporadic emphasis
b.  Management fear of liability
4. Individual confusion
a.  Confusion as to what constitutes a near miss
b.  Confusion as to how it should be reported
ACRP Legal Research Digest 19: Legal Issues Related to Developing Safety Management Systems
and Safety Risk Management at U.S. Airports (Bannard 2013) presents an overview of the legal
issues associated with voluntary incident reporting, including discussions on confidentiality and
just culture. In addressing the possibility of civil or criminal litigation, ACRP Legal Research
Digest 19 confirms that an organization seeking to establish a just culture will only be able to
protect reporters and those persons named in such reports to the extent permitted by applicable
laws (Bannard 2013, p. 11).
Because there is often a negative stigma attached to the word “incident,” attempts have been
made by various organizations to convey a less negative connotation and to promote better


reporting. Terms such as “hazard reporting,” or even “operational experience feedback,” have
been used. In one study, a hospital seeking to overcome the negative connotation of the term "incident" and to encourage the filing of reports established its program as "Condition Reporting" (Volz et al. 2012). Using the phrase "Condition Reporting" was perceived to provide a more
favorable avenue for employees and others to make observations and reports that could reflect
either a positive, neutral, or negative state. The hospital’s condition reporting program has all
the elements of a regular incident reporting program. The change in the word from “incident” to
“condition” was shown in the study to have resulted in an increasingly open and healthy culture,
a lower threshold for reporting conditions, and a decrease in higher severity events after one year
of implementation (Volz et al. 2012, p. S100).

Safety Culture
The accurate reporting of incidents can be influenced by a number of factors. Two such factors
are an organization’s safety culture and the actions of supervisors who enforce reporting
requirements.
Reason (1997) presented several concepts on accident prevention and managing organizational
risks. A prime concept espoused is that organizations need to create an overall safety culture.
A safety culture is one in which data are collected and people are informed about the risks
and hazards they may encounter; individuals are encouraged to (and do) report incidents that
occur, for which there is no recrimination (with some exceptions); and employees learn and
change in response to the incidents. These concepts were identified as an informed culture,
a reporting culture, a just culture, and a learning culture. Collectively, they make up an organiza-
tion’s overall safety culture.
Reporting and just cultures are important for an incident reporting system to work properly.
As mentioned previously, barriers exist to the proper reporting of incidents. The literature indicates that organizations with a positive safety culture, and in particular a just and reporting
culture, will experience a higher level of incident reports because individuals are less fearful of
the outcomes and understand better the value of reporting incidents (Hohnen and Hasle 2011).
One airport in the synthesis study experienced an increase in the number of reported incidents
as a result of its training efforts. Its assessment was that individuals did not previously know what
they were to report or how to report it. Upon completion of the training, the airport started to
see an increase in the number of incident reports.
A reporting culture is one in which an atmosphere of trust exists. People are encouraged, and even rewarded, for providing essential safety-related information. If the connection between a safety manager's attitude and incident reporting is viewed as positive, then trust is established and incidents get reported. If not, trust is eroded, which can lead to fear of reprisal and reduced incident reporting.
A just culture establishes that employees are still to be held accountable for reckless or deliberate
actions, but they are not to be unduly punished for unintentional errors (Reason 1997). Macrae
(2016) states that rather than assigning responsibility for causing failures, incident reporting
should assign responsibility for improving systems. This key concept focuses on getting away
from using incidents for punishment and moving people toward working to improve the
organization and being rewarded for the effort.
Illustrative of a reporting and just culture is a research study by Okuyama et al. (2010). Their
study examined the relationship between nurses’ perceptions of incident reporting, the fre-
quency of incident reporting on wards, and safety management in hospitals. They concluded
that on hospital wards where staff and safety managers discuss incidents and their root causes,


staff were less fearful of incident reporting, understood the significance of incident reporting,
and reported incidents more willingly.
Illustrative of a just culture is a study by Probst and Estrada (2010). They reported that under-
reporting of accidents and incidents can be predicted both by perceptions of an organization’s
safety climate and the degree to which supervisors enforce safety policies. Their study concluded
that when employees perceive their organizational safety climate to be positive, they engage in
far more reporting than if it were perceived negatively. The authors further noted that when
employees report having supervisors who enforce safety policies, they not only experienced far
fewer accidents, but they also fully reported all of those accidents. On the other hand, among
employees who perceived a poor safety climate and/or lax enforcement, the ratio of unreported
to reported accidents was greater than 3:1 (Probst and Estrada 2010, p. 1443).
In addition to reporting and just cultures as ingredients for a successful safety and incident
management system, Reason (1997) also identified the need for informed and learning cultures.
An informed culture is one in which leading and lagging indicators of safety performance are
collected, analyzed, and disseminated. A learning culture implies that the airport organization reviews safety trends and incidents, changes processes and practices, and trains employees to improve their efficiency and effectiveness. The goal of an incident reporting system is to learn from the investigation and analysis of reported incidents. As Macrae (2016, p. 72) states, "Repeated reports of the same type of event suggest a strong culture of reporting but a poor culture of learning."

Incident Reporting Practices and Culture


A 2011 report by the Transit Rail Advisory Committee for Safety has application to airports.
In describing a safety culture, it states, “A safety culture is one that collects the right kind of
information, analyzes and disseminates that information, learns from its mistakes, and treats its
employees fairly" (Transit Rail Advisory Committee for Safety 2011, p. 3). That is also a description of an incident reporting program. The report describes how effective safety management
systems use data-driven performance management practices and independent audits to drive
continuous improvement of safety. It further describes effective practices as follows:
• Leading indicators of safety performance, safety culture, and accident precursors are defined,
measured, and monitored.
• All employees understand the value of collecting and reporting data to support risk analysis,
address unsafe conditions, and prevent accidents.
• Reliable data are collected on operational performance, safety, maintenance, near misses, and
training. Systems are in place to analyze trends, track and report data, and guide decisions.
Variations from expected outcomes are reviewed to understand where the organization is
failing and what corrective action is necessary to restore performance.
• Performance measures based on industry standards are cascaded through the organization
so everyone is clear about fulfilling strategic safety goals. The performance measures are used
to continually encourage all levels of the organization to reduce the risk to the agency.
• The organization uses performance measures to evaluate the effects of new programs and
processes on safety.
• A hazard analysis process is in place for identifying safety issues and concerns, including those
associated with human factors and changes to operations or equipment. Data are analyzed
to provide possible policy, process, or equipment modifications to eliminate or mitigate
hazards.
• A reporting system is in place that allows employees to report important close calls/near
misses and unsafe conditions to a neutral third party without retribution.


• Capabilities for swift learning, flexible role structures, and quick situational assessments are
developed to mitigate risk impacts.
• A hierarchy of controls is identified and clearly understood.

General characteristics and best practices of incident reporting systems culled from the
literature search include the following key attributes:
• The organization supports and encourages a culture of hazard and incident reporting.
• Reporting is made easy and received from a broad range of sources.
• Individuals participate in the reporting process.
• The reporting system is non-punitive and protects the privacy of those who make reports.
• A structured process is in place for reviewing and investigating incidents, identifying root
causes and the weaknesses in the system, and developing action plans.
• Feedback is provided to the person making the report, if the report is not anonymous.
• Investigative results are used to improve safety systems, hazard control, risk reduction, and
lessons learned through training and continuous improvement.
• Information or summaries of investigations are disseminated in a timely manner as part of
the feedback and culture process.

Culture Survey
The safety culture in the healthcare industry is often measured using the U.S. Agency for
Healthcare Research and Quality (AHRQ) Patient Safety Culture Survey. The AHRQ is the
lead federal agency charged with improving the safety and quality of America’s healthcare
system. By using a standardized survey, hospitals, medical offices, and other similar health care organizations are able to benchmark their capabilities against one another.
The literature search found that there are quite a few culture surveys available on the web, but
there did not appear to be a standard culture or climate survey developed solely for the airport
industry. Airports Council International (ACI) has administered to its members a culture survey
based on SMS principles, but that survey included aerodrome operators, airlines, and ground
handlers worldwide (ICAO 2015). Among the surveyed airports, only four indicated they
perform or assess organizational surveys on safety climate or culture [Question 5.h.].

Different Ways to Report Incidents


The process for formal submission of reports on hazards and incidents varied among the
airports [Question 3.e.]. The telephone and website were most common, followed by e-mail and written and verbal notification [Question 3.f.]. Social media use and a suggestion box
were available at half of the airports.
Ten of the airports indicated on the survey that the general public is able to report acci-
dents, incidents, hazards, or near-miss situations [Question 2.b.]. However, the methods and
convenience of reporting by the general public varied. Interviews established that many of the
airports allowed the general public to make a report through the web, but some websites were
not easy to find or were not specifically for incident reporting but rather for general comments.
Anonymous reporting was not the norm. The Sarasota Airport Authority allows for anonymous
reporting through its courtesy phone and comment box (Figure 7). Hartsfield-Jackson Atlanta
International Airport has a confidential reporting system in place (see Chapter 9). Having a
prominent website button or physical indication of where and how to report a safety issue is
considered a best practice.


Figure 7.   Two ways the Sarasota-Bradenton Airport facilitates public and employee reporting of incidents. Photo credit: Author.


CHAPTER 6

Organizational Performance Indicators

The management of safety within an organization is accomplished through the people who make up the airport organization. In this regard, indicators provide information on an
organization’s ability to put into place the systems, resources, and people to achieve established
goals, whether they are safety, business, or strategic performance. Indicators are organizational
tools used for the evaluation and improvement of safety and performance. Hale (2009) states
that safety indicators can be categorized roughly into three groups. They are those that
1. Monitor the level of safety in the organization,
2. Change and develop the means of managing safety in the organization, or
3. Motivate management and personnel to take any necessary safety action.
The practice of using indicators for the continuous monitoring and analysis of processes has
been standard practice in industrial quality management since the 1930s. Only since the 1990s
have airport organizations started to adopt the quality control and assessment concepts encompassed today in the use of International Organization for Standardization (ISO) 9000 standards, total quality management principles, Six Sigma, plan-do-check-act (PDCA), and other quality concepts. SMS and
incident reporting systems incorporate the concept of continuous quality improvement.

Metrics, Measures, and Indicators


Terms such as “metrics,” “measure,” “indicator,” and “index” are often used interchangeably,
though distinctions do exist. As evidenced by the varied definitions shown in Appendix B, the
interchangeable use of the terms can be confusing. For this report, an indicator points toward
something. A measure ascertains the size, amount, or degree of something. It is normally a single
point of raw data, such as one incident. A metric is an objective or subjective interpretation of a
measure and generally involves a comparison of data. For instance, one incident is compared to
the number of operations. A metric can be an indicator. An index is usually a composite of multiple data points (e.g., all incidents that occur on the airport) and can also be an indicator. It is easy to see how confusing this can be and why the terms often are used interchangeably. The commonality is that they
are generally used as quality assurance measurements or indicators for purposes of quantitative
comparison. Organizations are encouraged to define each specifically within their organization.
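The distinctions among a measure, a metric, and an index can be made concrete with a small worked example. The counts and operations figures below are invented for illustration only; they are not drawn from the synthesis data.

```python
# A MEASURE is a single point of raw data.
runway_incursions = 2
wildlife_strikes = 7
vehicle_deviations = 4
annual_operations = 85_000   # invented figure for illustration

# A METRIC interprets a measure by comparing it to other data,
# here incursions per 100,000 operations.
incursion_rate = runway_incursions / annual_operations * 100_000

# An INDEX is a composite of several measures, here all airside
# incidents combined; it, too, can serve as an indicator.
airside_incident_index = (runway_incursions
                          + wildlife_strikes
                          + vehicle_deviations)

print(round(incursion_rate, 2))  # 2.35 incursions per 100k operations
print(airside_incident_index)    # 13
```

Whether either number is treated as an indicator depends on what it is used to point toward, which is why defining the terms within the organization matters.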
Indicators are used for a number of purposes (Stowell 2013):
• To monitor how well an organization is performing and to assess whether it is on the right track.
• To raise awareness or to focus attention on a particular issue, whether business or safety.
• As part of an incentive program to promote safe attitudes and behavior.
• To help educate staff, users, tenants, and other stakeholders about an organization's progress toward goal attainment or safety.


One challenge for an airport organization is to select which incident factors are important to monitor. A secondary challenge, especially for small and GA airports, can be the difficulty in determining an indicator's value, given the infrequency or general lack of adequate data. This is in spite of the fact that similar hazards and adverse working conditions exist at airports of any size. At medium and large hub airports, a higher frequency and amount of data can be collected and analyzed statistically. Appendix C provides an example of statistical data on several airport incidents. Statistical data are generally not reliable for smaller airport operators.
The number of incidents reported is not a measure of safety. Incident data can be used in the
risk analysis process to help develop more realistic estimates of failure probabilities. Data can
also be used in root cause analyses by improving the development of fault trees or similar factor
analyses, as shown in Figure 8. In the survey, nine airports indicated they use root cause analysis
as part of their incident investigation and analysis [Question 4.f.].
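As a hedged illustration of how incident data can sharpen failure-probability estimates for risk analysis, the sketch below derives an observed rate from invented incident counts and combines basic events the way a simple fault tree would (AND gates multiply probabilities; OR gates combine complements). The counts, probabilities, and event structure are all assumptions for illustration.

```python
# Invented counts: incident reports observed over a period of operations.
ground_vehicle_deviations = 6
annual_vehicle_movements = 150_000

# Observed per-movement probability estimate derived from incident data,
# replacing a guessed value in the fault tree.
p_deviation = ground_vehicle_deviations / annual_vehicle_movements

# Fault-tree-style combination, assuming independent basic events:
# the top event requires BOTH a vehicle deviation AND a controller
# miss (AND gate -> multiply).
p_controller_miss = 0.01                       # assumed, for illustration
p_incursion = p_deviation * p_controller_miss  # AND gate

# Upstream, either of two deviation paths suffices
# (OR gate -> 1 minus the product of complements).
p_path_a, p_path_b = 0.00002, 0.00003
p_any_deviation = 1 - (1 - p_path_a) * (1 - p_path_b)  # OR gate

print(f"{p_deviation:.6f}")  # 0.000040 deviations per movement
```

The same counts also feed root cause analyses: each Ishikawa branch can be weighted by how many reported incidents trace to it.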
Similar to the term “incident,” there are many definitions found in the literature for the
term “indicator.” ACRP Report 1: Safety Management Systems for Airports, Volume 2: Guidebook
defines SPI as any measurable parameter used to point out how well any activity related to safety
is performing over time, and to assess the overall SMS health indirectly (Ayres et al. 2009). The
KPI is a business management term referring to the measures that monitor the performance of
key result areas of business activities. KPIs represent a set of measures focusing on those aspects
of organizational performance that are the most critical for the success of an organization
(Bellamy and Sol 2012).
As airports in the United States and elsewhere start to implement SMS, the need to establish
SPI and KPI metrics will become more evident. The OECD (2008, p. 9) identifies and explains
the following seven steps for developing SPIs:
1. Establish the SPI team.
2. Identify the key issues of concern.

Figure 8.   Sample hazard identification using Ishikawa fishbone root cause analysis for a construction project. Source: Sabet et al. 2013. Used with permission.


3. Define outcome indicator(s) and related metrics.
4. Define activities indicator(s) and related metrics.
5. Collect the data and report indicator results.
6. Act on findings from safety performance indicators.
7. Evaluate and refine safety performance indicators.
The same document provides examples of SPIs for public authorities and offers a menu of possible outcome indicators, activity indicators, metrics, and targets for those seeking guidance in establishing a program.
An incident reporting system contributes to the basis for establishing a metric. These metrics
would indicate, among other things, whether or not the organization is following its SMS
practices and is within acceptable safety parameters. Indicators can also provide safety, risk, and
operational managers with the ability to analyze, evaluate, and provide oversight of any process
improvement. Ultimately, the indicators will be a reflection of the culture of the organization.
Stowell (2013, p. 45) sees the challenge for safety practitioners as, “to have at their disposal a
balanced suite of indicators and to judge what are the right ones to use at the right time, in order
to ensure risk management is both robust and effective.”
To establish an indicator, a decision must be made as to what data will be collected and
measured. For instance, a key indicator of management’s commitment to safety is the adequacy
of resources.
The management of safety relies on overall anticipation, monitoring, and development of
organizational performance. KPIs and SPIs play a key role in providing information on current
organizational safety performance. If indicators inform a manager about possible organizational
practices and processes that precede changes in the performance of the organization, whether
they are safety-related or for attaining strategic goals, they are considered to be leading
indicators. The role of the safety performance indicators is to provide information on safety,
motivate people to work on safety, and contribute to changes affecting increased safety
(Reiman and Pietikäinen 2012, p. 1999).
Mathis (2009) suggests five areas of metric measurement that can impact safety results.
Those areas are:
1. Safety activities (i.e., training, safety meetings, supervision).
2. Participation (i.e., safety teams or committees, attendance at training sessions).
3. Perceptions (i.e., what do people think of safety and efforts to prevent accidents).
4. Behaviors (i.e., observable precautions or risks taken).
5. Conditions (i.e., physical audits of unsafe workplace conditions and potential hazards).
The surveyed airports were found to use one or more of the metrics listed (see Chapter 9).

Safety and Key Performance Indicators


KPIs and SPIs have slowly entered into the airport lexicon and practice since first being
introduced in the mid-1990s. KPIs are sometimes confused with SPIs. KPIs are associated with
organizational performance that may or may not be safety-related. KPIs are enterprise-centric.
In contrast, leading indicators of safety are associated with safety performance. In Chapter 3,
Figure 1 describes the relationship and use of SPI and KPI for this synthesis.
For SPIs, reference is to metrics associated with the safety aspects of an incident. While
SPIs provide a picture of organization safety, it is equally important to monitor management
processes. When reading KPIs, metrics associated with organizational risk and performance or

Copyright National Academy of Sciences. All rights reserved.


Airport Incident Reporting Practices

Organizational Performance Indicators   37  

airport enterprises are referenced. Both indicators can range along a continuum that reflects a
positive, neutral, or negative insight into overall airport safety and performance.
Indicators are tools that can be used for a number of different purposes:
• To gain information on safety levels and the efficacy of any safety improvement efforts.
• To determine whether organizational goals are met.
• As a means to communicate safety and performance measures to the public.

SPIs and KPIs do not provide a detailed analysis or directly suggest how to improve the airport.
Instead, they help answer questions the airport organization has about its performance. The
indicators can be used as pointers to show the following:
• Where more work is needed.
• Where goals are being met.
• Where goals may need to be adjusted.

When using SPIs and KPIs, the questions an airport organization needs to ask are:
• What is it that we are trying to find indications of?
• How do we know we are successful?
• What indications tell us that the system is working as intended?

The answers to these questions vary among airports, which is one reason it can be difficult to
compare KPIs or SPIs between them. If a KPI does not answer a question, then it probably does
not need to be tracked.
In using SPIs and KPIs, what an airport organization hopes to identify is whether a particular
safety system or performance expectation is drifting or if it is experiencing a sudden change
in the boundaries. However, because the organizational structure of airports, their operations,
and their goals and objectives differ, the literature indicates it is difficult to compare SPIs or
KPIs for benchmarking purposes. A normalization of the data is necessary to provide some
comparison.
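The normalization the literature calls for can be as simple as dividing incident counts by a measure of activity. A minimal sketch, using hypothetical figures and a per-100,000-operations base (airports also commonly normalize per enplaned passenger):

```python
# Illustrative sketch (hypothetical figures): normalizing raw incident counts
# by activity level so airports of different sizes can be compared.
def incidents_per_100k(incident_count: int, operations: int) -> float:
    """Incident rate per 100,000 aircraft operations."""
    return incident_count / operations * 100_000

airports = {
    "Airport A": {"incidents": 48, "operations": 310_000},
    "Airport B": {"incidents": 12, "operations": 42_000},
}

for name, data in airports.items():
    rate = incidents_per_100k(data["incidents"], data["operations"])
    print(f"{name}: {rate:.1f} incidents per 100,000 operations")
```

Note that the airport with fewer raw incidents (Airport B) shows the higher normalized rate, which is exactly the distortion normalization is meant to expose.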
Medium or large hub airports are more inclined to use KPIs, as their operations are more
complex than those of smaller airports and they have greater resource capabilities. This is supported by
responses to the synthesis survey, as 10 of the 11 respondents are medium or large hub airports.
The 11th airport is a small hub.
ACRP Report 19: Developing an Airport Performance-Measurement System makes note that
GA and small airports are generally prevented from developing and implementing a formal
performance-measurement system due to time constraints, reduced personnel, prevalence of
urgent matters over all other matters, and an organizational structure that does not seem to
need to share data interdepartmentally (Infrastructure Management Group, Inc. 2010, p. 9).

Reasons for Identifying Safety and Key Performance Indicators
The survey asked, “For what reasons/purposes does the organization collect SPI or KPI
leading/lagging indications or otherwise monitor organizational incidents?” [Question 3.b.]
The responses were as follows:
• To identify trends in accidents and incidents, whether personnel, procedural, or mechanical
in origin (four similar responses).
• For compliance with regulatory requirements, including airport certification, OSHA, and
state/local codes (three similar responses).


• To measure performance and organizational sustainability and to benchmark against itself
and other organizations (two similar responses).
• To target mitigation efforts, including enforcement of safety and security regulations (two
similar responses).
• To be more proactive in the identification and mitigation of hazardous trends.
• For insurance purposes (two similar responses).
• As part of ERM best practices and to identify opportunities for improvement.
• As part of SMS implementation and to improve safety awareness and culture.

Types of Safety and Key Performance Indicators


In a review of literature on SPIs, Reiman and Pietikäinen (2010) found SPIs described in numer-
ous ways by different users. They listed SPI uses as falling into the following types of indicators:
1. Outcome-based versus activity-based indicators.
2. Leading versus lagging indicators.
3. Input versus output indicators.
4. Process versus personnel indicators.
5. Positive versus negative indicators.
6. Technical versus human factors indicators.
Reiman and Pietikäinen (2010) also found that the contexts in which the different indicator and
metric descriptions are used tend to overlap. As a result, they consolidated the different indicator
types into three categories:
1. Drive indicators.
2. Monitor indicators.
3. Feedback indicators.
The three indicators can be viewed in the context of a system analysis model of input, process,
and output (Figure 9). Drive indicators are those that measure input to the process. Leading
indicators are found at the input stage. Monitor indicators measure the current state or activity
of the organization. Activity or process indicators fall into the monitoring category. Feedback
indicators measure the outcome of the system. For any of these indicators to provide a true
measure, an organization uses an incident reporting system to capture the data that
contribute to determining them.
Figure 9.   Model of indicator relationships for incident reporting system.

To add to the model, the International Civil Aviation Organization's (ICAO) Safety
Management Manual (SMM) (ICAO 2013) defines three methodologies for which an incident


management system can be used to identify hazards. The three methodologies are similar to
Reiman and Pietikäinen’s (2010) categories. They are
1. Predictive,
2. Proactive, and
3. Reactive.
Figure 9 illustrates the role and relationship of incident reporting to the various stages of
the commonly used information processing system analysis model known as the input-process-
output (IPO). An incident reporting system needs to be designed to collect informational data
at all three stages of the IPO. The organization then chooses what data to analyze and earmarks
indicators for each stage. The types of indicators shown are reflective of Reiman and Pietikäinen
(2012) and the ICAO SMM models.
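One plausible way to line up the two frameworks is a simple lookup table keyed by IPO stage. The pairing of each ICAO methodology with a stage is an illustrative assumption drawn from the ordering above, not a mapping taken verbatim from either source:

```python
# Sketch: pairing the Reiman and Pietikäinen (2010) indicator categories with
# the ICAO SMM methodologies at each stage of the input-process-output model.
# The stage-to-methodology pairing is an illustrative assumption.
IPO_STAGES = {
    "input":   {"category": "drive",    "methodology": "predictive"},
    "process": {"category": "monitor",  "methodology": "proactive"},
    "output":  {"category": "feedback", "methodology": "reactive"},
}

def describe_stage(stage: str) -> str:
    info = IPO_STAGES[stage]
    return f"{stage}: {info['category']} indicators ({info['methodology']} data collection)"

for stage in ("input", "process", "output"):
    print(describe_stage(stage))
```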
In a later research study on different metrics and indicators, Kaspers et al. (2017) found the
following indicator terms were used and compared in the literature:
• Upstream versus downstream indicators.
• Predictive versus historical indicators.
• Heading versus trailing indicators.
• Positive versus negative indicators.
• Active versus passive indicators.
• Feed forward versus feedback indicators.

Leading and Lagging Indicators


If an incident reporting system collects data reactively (meaning after the fact, or as a
look-back), it is described as a lagging indicator. An analogy for reactive
indicators is that of attempting to drive down a highway using only the rear-view mirror.
Reactive indicators tell you about the road behind but offer little guidance on the road ahead.
Typical lagging indicators are incident rates, workers' compensation costs, incident-related days
away from work, the OSHA 300A logs, and safety-related production stoppages (OSHA 2004).
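As an example of how one such lagging indicator is computed, the OSHA total recordable incident rate normalizes recordable cases from the 300/300A logs to a 200,000-hour base, representing 100 full-time employees working 40 hours per week, 50 weeks per year. The department figures below are hypothetical:

```python
# OSHA total recordable incident rate (TRIR), a classic lagging indicator.
# The 200,000-hour base = 100 full-time employees x 40 hours x 50 weeks.
def trir(recordable_cases: int, hours_worked: float) -> float:
    """Recordable cases per 100 full-time workers per year."""
    return recordable_cases * 200_000 / hours_worked

# e.g., an operations department with 3 recordable cases over 150,000 staff-hours:
print(f"TRIR = {trir(3, 150_000):.1f}")  # prints "TRIR = 4.0"
```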
In recent years, the shortfalls of lagging indicators have been addressed by the development
of leading indicators. Leading indicators are intended to help airport officials proactively work
toward preventing accidents and illnesses, rather than reactively assessing them after harm has
been done. One way to view the relationship between leading and lagging indicators is that
a leading indicator frequently requires an investment in some activity whose effect a lagging
indicator will later detect.
Leading indicators are intended to collect data on unsafe behaviors, unsafe acts and conditions,
poor procedures, faulty equipment, and incidents that are near misses and close calls. Collecting
and analyzing these data are useful for changing organizational or individual behavior and
preventing near misses, by indicating which activities to pursue and which behaviors to address,
thereby preventing accidents, damage, injury, and illness. In the survey, only six of the airports indicated
they collect reports from observation of positive behaviors or actions [Question 2.c.].
When asked if leading indicators are used for assessing the attainment of safety or performance
objectives, seven airports stated yes, two indicated no, and two were uncertain [Question 4.a.].
Those that indicated yes listed the indicators or metrics used to measure the safety of
their organizations:
• Safety observations and compliance adherence.
• Safety training attendance.


• Completion rate of Hazard Identification Corrective Action forms.
• Wellness participation.
• Amount of safety training.
• Amount of near-miss reporting.
• Number of runway incursions.
• Number of surface incidents.
• Incident rates.
• Number of safety audits.
• Safety committee attendance.
• Implementation rate of safety recommendations/suggestions.
• Number of safety training activities.
• 14 CFR Part 139 and other safety-related data.
An analogy for leading indicators is the dashboard of your car. It has a fuel gauge, an engine
coolant temperature gauge, and an oil pressure lamp. Each gauge can portend a future risk and
a possible bad outcome, such as an engine failure or stoppage. They are leading indicators. A lagging
indicator would be the engine stopping or overheating.
Another analogy comparing the two indicators is the wake of a boat. You can see the
path taken, but it provides limited information on where you are going. For that you need other
tools or instruments, such as a compass, a wind indicator, or radar. Those are considered leading
indicators.
Leading indicators are the tools and instruments that help an organization view the path to
be taken. It is more meaningful to monitor the events, behaviors, and conditions that can lead
to incidents or accidents, rather than just tabulating the accident or incident itself. Being
able to proactively review data that indicate safe work practices can result in anticipating
organizational vulnerabilities.
Leading indicators are known to (Step Change in Safety 2003):
• Reveal areas of weakness in advance of adverse events,
• Be associated with proactive activities that identify hazards,
• Aid risk assessment and management, and
• Complement the use of lagging indicators by compensating for their shortcomings.
In a report on SPIs in the nuclear industry, Reiman and Pietikäinen (2010) identified several
reasons for using leading indicators, as found in their literature search:
• They provide information on where to focus improvement efforts.
• They direct attention to proactive measures of safety management rather than reactive
follow-up of negative occurrences or trending of events.
• They provide early warning signs on potential weak areas or vulnerabilities in the organi-
zational risk control system or technology.
• They focus on precursors to undesired events rather than the undesired events themselves.
• They provide information on the effectiveness of the safety efforts underway.
• They speak to organizational health, not only to sickness or its absence.

Balance Between Leading and Lagging Indicators


Lagging indicators are the historical norm for airports to use. Only in recent years have air-
ports explored leading indicators, in part due to the increased awareness of research into safety
management and changes in competitive and business expectations for airport operations.
Greater attention today is being placed on anticipating effects on safety and enterprise risk,
rather than relying on past outcomes of traditional feedback-type data.


A common conclusion found in the literature search is that a balance is needed between lag-
ging and leading indicators when selecting performance measures. Dyreborg (2009) indicates
that when used together, leading and lagging indicators can provide a broader and more realistic
perspective of what is or is not working in a safety system.
An example of the relationship between leading and lagging indicators is shown in Figure 10.
The number of safety training hours, if correlated with outcomes, could be an indicator that
reflects the number or type of employee injuries. Injuries then have an effect on the number of
workers' compensation claims. With additional training, an organization could experience fewer
claims, in which case, given the correlation, training hours can be considered a leading indicator to
track and monitor. Tools found in the literature search to establish leading indicators are training, checklists,
audits, observations, surveys, scorecards, hazard and risk assessments, and inspections, among
others. In the survey, seven airports stated they have a dashboard or other benchmark method to
display their incident data [Question 4.g.]. Nine airports indicated they regularly perform safety
or risk audits [Question 4.h.].
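The correlation Figure 10 illustrates can be checked directly against an airport's own records. A sketch with invented quarterly data, where `pearson` is a plain implementation of the Pearson correlation coefficient:

```python
# Sketch with invented data: testing whether safety training hours (a candidate
# leading indicator) move inversely with workers' compensation claims (lagging).
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

training_hours = [120, 200, 260, 310, 400]   # per quarter (hypothetical)
claims         = [14, 11, 9, 8, 5]           # claims filed in the same quarters

r = pearson(training_hours, claims)
print(f"r = {r:.2f}")  # strongly negative: more training, fewer claims
```

A strongly negative coefficient is what would justify tracking training hours as a leading indicator; a weak one would suggest choosing a different metric.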
Tying performance indicators to safety culture, Blackmore (1997) states:
One purpose of leading performance indicators is therefore to show the condition of systems before
accidents, incidents, harm, damage or failure occurs. In this way, they can help to control risks and prevent
accidents. Leading performance indicators can also be used to measure the inputs that people are making to
the management process. Used in this way, leading performance indicators can have a role in promoting and
monitoring a positive culture towards improving performance.

When airport organizations establish safety policies, procedures, or practices, they also
generally establish parameters, boundaries, margins, or tolerance levels for what is safe or not
safe, or what is acceptable behavior or not. Those boundaries may not be evident to employees,
tenants, or users. If a boundary level is narrow, little room for employee deviation is allowed.
If a boundary level or margin is wide, an employee has flexibility to work through a problem. An
incident reporting system makes it easier to select the parameters, as safety issues are known.
Without the data, it is more difficult for management to identify what the safety or performance
boundaries should be, how narrow or broad they should be, or what risk controls need to be
put into place.
The question for airport operators is where and how to establish a proper boundary if
they want to balance the risk between the two extremes. For instance, having an incident
reporting system that includes information on where foreign object debris (FOD) is discovered,
how frequently it is discovered, the type of FOD discovered, and the time involved to inspect
or remove FOD can help an airport organization decide the limits of its FOD policy and
procedure.
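A sketch of how such FOD report data might be summarized to locate hot spots and inform those policy limits (the report records and location names are invented):

```python
# Hypothetical FOD report records: tallying where FOD is found most often can
# inform how narrow or broad an airport sets its FOD inspection policy.
from collections import Counter

fod_reports = [
    {"location": "Ramp 2", "type": "metal"},
    {"location": "Taxiway A", "type": "plastic"},
    {"location": "Ramp 2", "type": "luggage part"},
    {"location": "Ramp 2", "type": "metal"},
    {"location": "Runway 9/27", "type": "pavement"},
]

by_location = Counter(r["location"] for r in fod_reports)
print(by_location.most_common(1))  # hot spot deserving more frequent inspection
```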

Figure 10.   Example of relationship between leading and lagging
indicators. Source: Infrastructure Management Group, 2010.
Used with permission. Note: EPAX = enplaned passengers.


Safety Culture as an Indicator


The safety culture of an organization is considered a key leading SPI. SPI data are needed
to provide indication of where safety emphasis needs to be placed without incurring actual
accidents, damage, and harm. SPIs also help provide information on an organization’s ability to
complete that task. SPIs are organizational tools for the evaluation and improvement of safety,
and are used as part of an airport organization's overall process for managing safety.

Customer Satisfaction as an Indicator


An example of an indicator that could be both leading and lagging is customer satisfaction.
When used as a lagging indicator, it provides information on the current state of satisfaction
for whatever service is being measured. However, that same measure can be used as a leading
indicator to predict more interest in using the airport or an aversion to it.

Training as an Indicator
The survey response to how well people in the airport organization understood performance
indicators and hazard reporting points to a challenge for those seeking to implement an incident
reporting system, or even an SMS [Question 5.d.]. Respondents at six airports indicated that
KPI-SPI-incident-hazard reporting was understood by only a few functional areas or departments.
Three felt it was well understood and two believed it was not understood at all. This information
is an indicator of the need for training to address the issues. Only five of the surveyed airports
trained their airport employees on KPIs and SPIs [Question 5.e.]. Of the training conducted, the
most common was on-the-job or mentoring (eight airports) [Question 5.f.]. Only three airports
stated they require or provide training on incident reporting to employees of vendors,
concessionaires, and contractors [Question 5.g.]. The training was generally conducted as part of
driver or security badging processes, such as those performed at Hartsfield-Jackson Atlanta International
Airport (see Chapter 9).


CHAPTER 7

Research and Resources on Indicators and Metrics

The literature search indicates that leading and lagging indicators need to be matched to
each other when they are chosen. The Associated Builders and Contractors compiled a Safety
Performance Report among its members and found that leading indicators had a positive impact
on a company’s safety performance, as evidenced by fewer disrupted or lost lives and a safer, more
productive jobsite regardless of the size of the company (Associated Builders and Contractors
2017). Their conclusion was that companies that engage in leading indicator use are, statistically,
considerably safer than their peers.
Theoretical models help describe or explain how things work or what has worked in practice.
Indicators are usually a manifestation of those models, whether a safety model, enterprise risk
model, or similar model type. For this reason, the selection of indicators requires a determination
of the type and kind of incident data to be collected. Caution is noted in the literature research
for ensuring an indicator is actually measuring and contributing to the improvement of safety
or an organization in a sustainable way. Some indicators are best used for describing a safety or
business process, while others are better at identifying threats or barrier penetration. The way that
safety is understood within an organization strongly influences the selection and interpretation
of safety indicators (Herrera 2012).
ACRP Report 44: A Guidebook for the Preservation of Public-Use Airports (Thatcher 2011)
found that, over time, a dwindling of an airport's available customer services was a clear
and measurable precursor and indicator of increasing risk of airport closure. ACRP Report 44
also found reliable indications that many airports do not have written airport business plans, and
many others do not have effective business plans. The report concluded the absence of a realistic
written airport business plan puts an airport business enterprise at risk.
An internet search using the words “key performance indicator library” and “safety perfor-
mance indicator library” will provide a number of different resource lists on KPIs and SPIs.
Appendix D provides a list of metrics developed for the SMS at the Toledo Express Airport,
Ohio, by SMQ Airport Services.
A good resource for airport operators on ERM practices is ACRP Report 74 (Marsh Risk
Consulting 2012). The report is a guidebook that summarizes the principles of ERM, its benefits,
and how it applies to airports. A CD is provided with the report that can be used to support the
ERM process, catalog identified risks in a risk register with expected likelihood of occurrence
and expected severity of impact on the airport, and generate a risk score and a risk map.
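The risk-register mechanics can be sketched in a few lines. The entries and the 1-5 likelihood and severity scales below are hypothetical, and taking the product of the two ratings as the score is one common convention; a real register would follow the templates provided with ACRP Report 74:

```python
# A minimal risk-register sketch in the spirit of the ERM process ACRP Report 74
# describes: each identified risk gets a likelihood and severity rating (1-5),
# and their product serves as a simple score for ranking. Entries are hypothetical.
risk_register = [
    {"risk": "FOD on ramp",           "likelihood": 4, "severity": 3},
    {"risk": "Runway incursion",      "likelihood": 2, "severity": 5},
    {"risk": "Slip/fall in terminal", "likelihood": 3, "severity": 2},
]

for entry in risk_register:
    entry["score"] = entry["likelihood"] * entry["severity"]

# Rank highest-scoring risks first for mitigation attention.
for entry in sorted(risk_register, key=lambda e: e["score"], reverse=True):
    print(f"{entry['risk']}: score {entry['score']}")
```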

Resources
The following are summaries of several reports and studies that provide in-depth information
on various performance measures that can be used by airports.


Key Performance Indicators


1. ACRP Report 19: Developing an Airport Performance-Measurement System (Infrastructure
Management Group, Inc. 2010) provides guidance on how to develop and implement an
effective performance-measurement system for airports. It identifies a useful set of airport
KPIs, together with standard definitions and guidance on data collection and benchmarking
issues. The guide addresses performance in nearly every functional area of an airport,
including administration, human resources, properties, engineering, environment (noise/air/
water/sustainability), facility and infrastructure maintenance, finance, information technology,
legal, marketing, public relations, operations (airside/landside), and public safety (police/
fire/security).
An extensive list of KPIs is provided in both ACRP Report 19 and ACRP Report 19A:
Resource Guide to Airport Performance Indicators (Hazel et al. 2011). ACRP Report 19 provides
a compendium of key performance areas and indicators derived from workshops conducted
by the authors.
2. ACRP Report 19A (Hazel et al. 2011) is a supplement to ACRP Report 19 and provides
additional depth and detail on airport KPIs and SPIs that can be used in benchmarking
and performance measurement. The performance indicators are categorized and sorted by
functional type and their criticality to an airport strategic plan. More than 800 performance
indicators are presented in the three main categories of (1) Core, (2) Key, and (3) Other
(Figure 11). ACRP Report 19A adds to the compendium of indicators found in ACRP Report
19 by listing indicators and categorizing them by functional area and type.
3. ACRP Report 131: A Guidebook for Safety Risk Management for Airports (Neubauer et al. 2015)
provides information on conducting safety assessments and tailors the information so that it
can be scaled for smaller airports with fewer resources. Safety assessment tools and templates are
provided in the appendices, including typical accident and incident rates (Appendix C). ACRP Report 131
also contains an extensive preliminary hazard list. For those developing an incident report-
ing system, the list includes all situations that could warrant an incident report, if properly
discovered and found to be deficient. Lastly, ACRP Report 131 contains two lists (Part 139 and
non–Part 139) of KPIs or potential KPIs airport organizations can consider for inclusion in
an incident reporting system (Neubauer 2015, pp. 199–200).
4. Another resource containing a list of KPIs is the Airports Council International’s Guide to
Airport Performance Measures (Oliver Wyman 2012). The ACI guide identifies measures for
core activity, safety and security, service quality, productivity/cost-effectiveness, financial/
commercial, and environmental areas.

Figure 11.   Main categories of airport performance indicators described
in ACRP Report 19A. Source: Hazel et al. 2011. Used with permission.
Note: API = airport performance indicator.


Safety Performance Indicators


5. The chemical industry has developed guidelines and best practices for safety performance
indicators. The American Petroleum Institute has issued Recommended Practice (RP) 754
Process Safety Indicators for the Refining and Petrochemical Industries (American Petroleum
Institute 2016). The recommendations contained in the standards are most applicable to fuel
storage areas on an airport and to OSHA reporting requirements.
6. The Organisation for Economic Co-operation and Development published a document
on selecting metrics and SPIs for the chemical industry (OECD 2008). The document has
application to other industries as well, including airports. The document assists organizations
that wish to implement and/or review SPI programs. It is designed to measure the performance
of the public authorities, including emergency response personnel, as well as organizations
representing communities and/or the public.

Leading/Lagging Indicators
7. Underwriters Laboratories (UL) developed a white paper that discusses the importance of
leading and lagging indicators in effectively managing workplace health and safety issues,
and provides a reporting framework for evaluating critical safety elements (UL White Paper
2013). The paper defines leading indicators and identifies the characteristics of good leading
indicators. It then discusses the value of using leading and lagging indicators together to
evaluate safety performance, and presents results from a UL survey of organizations that
manage workplace safety using such indicators. The paper concludes with details about the
UL Safety Scorecard, a template for tracking safety activities and performance results. A list of
leading and lagging indicators for several common workplace safety elements can be found at
https://library.ul.com/wp-content/uploads/sites/40/2015/02/UL_WP_Final_Using-Leading-
and-Lagging-Safety-Indicators-to-Manage-Workplace-Health-and-Safety-Risk_V7-LR1.pdf.

Culture
8. A good resource for helping establish a leading indicator program and for self-assessing an
organization’s culture is the American Bureau of Shipping’s Guidance Notes on Safety Culture
and Leading Indicators of Safety (American Bureau of Shipping 2014). While targeted for the
commercial marine environment, it is a useful tool for adaptation to the airport environ-
ment. It provides an outline and guidance on how to establish a safety system that includes
surveys, self-assessment, leading indicators, and safety culture administration. The resource
provides full details of methods, metrics tables, safety performance datasheets, normalization
criteria, safety culture questionnaires, safety factors, tips on administering the survey,
step-by-step guidance on statistical analysis, worked examples, and a list of desired activities,
attitudes, and behaviors, together with a list of possible activities for improvement. Examples
of leading indicators from the guidance document can be found at https://ww2.eagle.org/
en/innovation-and-technology/safety-human-factors-in-design/management-organization/
safety-culture-leading-indicators.html.

Safety Management System and Safety Management Manual


9. The Safety Management International Collaboration Group (SMICG) is a joint cooperation
between 18 global aviation regulatory authorities for the purpose of promoting a common
understanding of safety management and SMS/SSP (state safety program required by ICAO)
principles and requirements, facilitating their implementation across the international
aviation community. SMICG has published guidelines to assist service providers in the


definition and implementation of a set of SPIs (Safety Management International Collaboration
Group 2013). Other beneficial publications exist under the categories of standards,
promotion, guidance/tools, and resources.

Environmental
10. Indicators used in the environmental management and reporting area can be found in a
paper published by Hrebicek et al. (2011). It discusses the key performance indicators for
environmental management systems certified by standard ISO 14001:2005. The areas
covered include efficiency of material consumption, energy efficiency, water management,
waste management, biological diversity, emissions into the air, and other relevant indica-
tors of the influence of the organization’s activity on the environment.
11. For environmental metrics, a study from the Global Environmental Management Initiative
(1998) surveyed members on their environmental performance measurement systems. The
results of the study were published as a primer discussing the considerations for designing
a metrics program and providing a compilation of indicators used in the industry. Key con-
cepts are also defined and explained, and the advantages and limitations of various metrics
are discussed.

Automated People Mover


12. An overview and specifics of performance measures for an automated people mover (APM)
system is provided in ACRP Report 37A: Guidebook for Measuring Performance of Automated
People Mover Systems at Airports (Lea+Elliott, Inc. 2012). ACRP Report 37A provides
summaries of performance metrics for the airlines, transit, and highway indicators and
their applicability to airport APM. The report also summarizes and suggests data collection
methods that airports can use to measure and track incidents.

Health and Safety


13. The National Safety Council (NSC) has created several guides on the use of leading indica-
tors related to environmental health and safety. One in particular, titled Practical Guide to
Leading Indicators: Metrics, Case Studies & Strategies, includes a menu list of metrics and
leading indicators managers can choose from for their airport (Inouye n.d.). The list includes
a large number of specific metrics for each of the following categories:
–– Risk assessment
–– Hazard identification/recognition
–– Preventive and corrective action
–– Management of change processes
–– Learning systems
–– Environmental health and safety management systems
–– Leadership engagement
–– Leading indicator component evaluation
–– Communication of safety
–– Safety perception survey
–– Training
–– Risk profiling
–– Compliance
–– Employee engagement and participation
–– Area observations/walk-arounds
–– Prevention through design


–– Equipment and preventive maintenance


–– Off-the-job safety
–– Permit-to-work systems
–– Recognition, disciplinary, and reinforcement program

Security
14. In a white paper on metrics and analysis in security management, McIlravey and Ohlhausen
(2012) synthesize literature on metrics and analyses in the security management field and
describe the process of developing specific metrics, collecting and managing data, and
performing useful analyses with incident management software. The white paper cites
Campbell’s (2014) book on measures and metrics in corporate security and then provides
examples of the several hundred possible security metrics that may be relevant to a com-
pany’s cost, risk, return on investment, legal, policy, and life safety issues.


CHAPTER 8

Practices in Incident Reporting

Managers often use incident data to illustrate a level of safety that may exist at the airport, to
evaluate risk, and/or to assess whether certain organizational goals are being met. The risk and
insurance industries use the data as barometers of exposure and probability.

Corroborating Incident Data


Discussions with survey respondents identified that a primary reason for collecting incident
data is to be able to benchmark those data against other airports [Question 3.b.]. As noted in
Chapter 6, because few airports are exactly alike in structure, organization, operation, or goals and
objectives, it is difficult to make comparisons about incident rates or other indicators.
ACRP Report 62: Airport Apron Management and Control Programs (Ricondo & Associates,
Inc. 2012) indicates that airports have a vested interest in corroborating incident data from their
tenants, as safety practices on the ramp can impact the airport enterprise, operation, and risk
profile in several ways, including the following:
• Injuries to airline and airport personnel.
• Injuries to airline passengers and crew.
• Cost of damage to equipment, such as aircraft and ground support equipment.
• Operational impacts due to accidents and incidents, ranging from operational delays to the
costs of removing equipment from service for repair.
• Insurance considerations.
• Operational efficiency in and around the apron environment (i.e., improving aircraft
turnaround time at the gate).
The air carrier and charter industries have been collecting and refining incident data for
many years. In addition, they were the first subject to FAA requirements to adopt SMS
(80 Fed. Reg. 1584, January 13, 2015). For these reasons, they require employees and
contractors throughout their organizations to report incidents and accidents in an effort to
mitigate future consequences. OSHA reporting requirements also apply to air carrier, cargo,
and charter operators, as well as to other private airport businesses. Sharing those data with
others, including airports, is not the norm.
Obtaining incident data from private third-party entities can be difficult, as they have
expressed concerns about privacy and legal exposure for their business, their employees, and any
other parties involved, and about negative public relations. As private entities, the air carriers
are concerned with the adverse financial and operational impacts that can result from injuries,
equipment outages, and facility damage.


Overcoming Barriers to Nondisclosure


To overcome nondisclosure barriers, several of the airports in this synthesis identified
lease agreement clauses, ordinances, and SMS involvement as means to be considered.
Recognizing the mutual importance of sharing data, two airport operators in this study
specifically address this issue by cultivating relationships of trust with their airline and air
cargo tenants. They use meetings and personal relationships to exchange risk data and
"sell" the need to the air carriers, pointing to the operating and financial efficiencies to be
gained.
The survey asked which airport stakeholders or tenants collect incident-hazard-SPI-KPI data
at their airport [Question 3.g.]. The airlines, cargo operators, and fixed-base operators (FBOs)/
ground handlers were the prominent collectors of data. When asked whether other organizations,
tenants, or business entities share incident-hazard-SPI-KPI data with the airport, the majority
said no [Question 6.c.]. Eight airports in the survey indicated they share data with other
organizations, with federal regulatory agencies such as the FAA, OSHA, and EPA being the most
prominent [Questions 6.a. and 6.b.]. Insurers, risk managers, internal organizational
departments, and employees were the next most common.

Tracking Incident Data


To make decisions affecting safety and risk in an organization, data are required. Collected
data need to be supplied to management or other responsible parties in a timely manner for
effective decisions or actions to occur. Decisions about how to collect, track, and report the
data involve discussion of what kind of information management system will be used, who
will collect what kind of data, how the data will be investigated and verified, and how the
data will be made available to others.
The synthesis survey found that the collection of incident data occurs within any number of
departments [Question 3.a.]. Much of the collection depends on the type of data being collected.
Risk management, airfield operations, aircraft rescue and firefighting, and police and security
departments were the most prevalent collectors of data. To a lesser extent, engineering and
construction, ground transportation, terminal operations, parking operations, and finance
were the next most common. These data align with where responsibility for oversight of the
incident reporting processes resided: operations, risk management, and safety departments
had prime responsibility [Question 3.d.].
ACRP Report 62 (Ricondo & Associates, Inc. 2012) found that at U.S. airports, there is no
comprehensive system-wide database to track accident and incident statistics to quantitatively
assess the safety of operations on the aprons or airports. This is because apron areas are typically
managed by the air, cargo, or charter carriers that use them. This is in contrast to European
airports, where the airport often controls and has responsibility for apron areas. In the United
States, unless an accident or incident on the aprons and ramps involves airport operations,
public safety, or emergency response, the air operator does not normally report such
information to the airport.
ACRP Report 62 further found that while 45% of survey respondents would notify the
airport by telephone of an incident, only 11% had a standard practice of submitting the reports
to it. No distinction was made as to whether the reports involved an incident or an accident.
Discussions with airport operators for this synthesis support the findings in ACRP Report 62:
even OSHA-reportable accidents and incidents are not normally conveyed to the airport by
tenants or third-party operators [Question 6.c.].


Generally, federal OSHA requirements do not apply to airports because, as governmental
entities, they are exempt from the regulations. However, the survey and interviews found that
airports often do collect OSHA information, either because of state or city requirements or
because they believe it is their mission to provide a safe workplace [Question 3.a.]. OSHA
regulations contain standards that help airports meet that safety goal. As a national standard,
OSHA data allow for comparison among airports.
ACRP Report 62 provides a comprehensive literature review of accident/incident data
systems related to airport terminal apron areas. The documents identified in ACRP Report 62
describe the same challenges and diversity of data collection among aviation regulatory and
industry organizations as does this report. The following are several of the databases referenced
in the report:
• FAA Aviation Safety Information Analysis and Sharing System.
• Integrated Management Information System, maintained by OSHA, which contains records
of OSHA investigations.
• NASA’s ASRS.
• ACI Survey on Apron Incidents and Accidents (Airports Council International 2009).

Tracking Methods
Incident reporting systems have evolved from a paper-based format, to computer-
based formats, and to web-based systems. Each can be found at today’s airports. The more
extensive the reporting of incidents, the more important it is to have a database that can
manage it well.
With incident reporting systems ranging from simple to complex, GA and non-hub category
airports are more likely to use manual incident reporting systems. However, as incident
reporting takes on greater importance, and should SMS requirements become regulatory,
airports with complex facilities and operations will be more inclined to use incident
reporting software programs to manage their systems. Reporting programs can range
from a handwritten recording system, to an in-house developed spreadsheet program, to a
sophisticated integrated system purchased from a vendor and based on a separate computer or
cloud-based server. The programs can produce and present trending and statistical reports,
performance metrics, risk mitigation solutions, and other types of data in numerous ways.
They can also be integrated and shared among selected locations, departments, employees, and
stakeholders.
In the synthesis survey, a simple computer spreadsheet was the most prevalent means for
consolidating and tracking incident data [Question 4.b.]. Five of the airports were using a custom
software program, while two used a commercially available program. Three airports used a
manual system in combination with other means. One airport did not use any system.
Given the large amount of data collected in a well-developed incident reporting and
safety management system, especially at a medium or large hub airport, an integrated
technology solution, such as a web-based reporting system, is almost essential. At smaller
airports with fewer financial resources, a simple electronic spreadsheet is more the norm,
unless the reporting system is tied into a larger municipal reporting system. One airport in
the survey identified the ability to customize software to meet the needs of the organization
as important. Another airport identified the importance of owning the data and not having
the data be cloud-based, primarily because of security concerns and the need to maintain
the integrity of the data.


Data Protection
Another challenge airports face when implementing an incident reporting program is
maintaining the security and privacy of the information. Interviews with survey respondents
conveyed concern that incident reports could be used to damage the reputation or public
image of the airport, when the intent behind incident reporting is to increase safety and
reputation. Frequently cited as impediments to data sharing are the federal Freedom of
Information Act (FOIA) and state sunshine law requirements applicable to most public
governmental entities.
For purposes of FOIA and state sunshine law requirements, incident reporting needs to be
simple, factual, straightforward, and non-speculative. The resultant investigation or analysis of
the report is what provides the level and quality of detail necessary from which an organization
can learn and make improvements in its processes and procedures. For a discussion on the
legal exposure to an airport subject to FOIA or sunshine laws, ACRP Legal Research Digest 19
(Bannard 2013) provides a good overview.
When the survey asked what precautions are taken to protect the data collected or to protect
the person(s) reporting a hazard or incident, the few responses indicated redaction of data
where permissible and review by the legal or human resources department as the only measures
[Question 6.d.]. Most incident reporting data stored at airports were found to be on secure
intranets or individual departmental computers.
One survey respondent suggested the following:
Because airports can’t protect data, FAA, AAAE, or ACI should develop and sponsor a national safety
hazard/incident database similar to the wildlife hazard database. Doing so would provide another layer of
protection, allow for data sharing and enhanced identification of trends and issues nationwide instead of
simply at one airport.


CHAPTER 9

Case Examples

This chapter presents a more detailed look at the incident reporting practices of several of
the airports that responded to the survey. The respondents hold various job positions because
the invitation to participate was sent to the airport chief executive or manager, who selected
the person to respond; for this reason, the information conveyed reflects the department
assigned the task. Of particular value are the appendices, which illustrate how those airports
incorporate incident reporting, indicators, metrics, and safety culture into their organizations.

Large Hub: Hartsfield-Jackson Atlanta International Airport, GA
Hartsfield-Jackson Atlanta International Airport (ATL) is operated by the city of Atlanta.
The interview was conducted with the safety management system manager.
ATL was one of the original 2007 SMS pilot study airports. Since then, it has progressively
worked toward full implementation. The last component missing is the full integration and
deployment of its incident management software program. As found at other airports where
SMS has progressed, it has a champion spearheading the effort.
Typical of other airports, incident data are collected and synthesized in several different
areas. The airport is a department of the city. The city does have an ERM in place at the
executive level, which includes the airport. The Human Resource Department of the city
has responsibility for worker safety and workers compensation, for which it has purchased
an industry standard software program. The city follows OSHA guidelines in collecting and
reporting incident data.
Airfield operations uses a separate customizable industry standard software program for
incident reporting on the airfield, terminal, and landside functions. The Airport Safety
Operations Compliance System allows multiple persons to document all airport inspections
and incidents at one time, manage the Part 139 compliance process, document calls for
service, issue Notices to Airmen (NOTAMs), and store operational and activity data for the
facilities.
A third industry standard software program is used by the city's maintenance departments,
including the airport, for work order, inventory management, and building maintenance
functions. Because of its years of use and valuable historical data, this software program will
be the basis for the integration of SMS data. ATL's IT department, which owns the data, will
further customize the integration. For a detailed description of ATL's incident reporting and
SMS integration, see the Journal of Airport Management (Ayers 2018).


ATL has a confidential reporting system available to anyone on the airport and to the general
public. It is available by using a dedicated telephone line or on the web. The web location is three
clicks from the main page under passenger information and safety buttons (http://www.atl.com/
passenger-information/safety/#1458853848034-8688430e-51d5).
Neither the city nor the airport has a specific definition for incident. The intent is to encompass
all activity or events that may affect or pertain to operational safety. ATL meets annually with its
insurance carrier to review risk exposures, claims, and accident investigations. A three-member
panel of senior managers receives weekly incident data for review.
A scorecard is maintained for a number of indicators. For ATL, the SPIs used are (1) SMS
training; (2) confidential hazard reporting; (3) safety recognition; (4) notice of violation;
(5) landside incidents; and (6) Hazmat spills. At the city level, which includes the airport, the
SPIs used as leading indicators are (1) training; (2) safety promotion; (3) audit inspections;
(4) safety briefs/talks; (5) safety communication; and (6) type of causality. The scorecard is
updated monthly, quarterly, and annually.
As part of its culture, ATL’s SMS slogan is “Safety Always!” It is expected that every employee
is responsible for communicating any information that may affect airport operations and for
using the confidential reporting program to ensure that potential safety issues are addressed and
corrected. The expectation is conveyed through initial badge training for new employees and
then continuously through employee and public events that celebrate safety. In particular, ATL
holds a safety expo, gives employee recognition awards, and routinely sponsors and conducts
various safety training sessions. Posters are distributed to all tenants and placed throughout the
airport (see Appendix E).
Employee recognition awards are issued monthly in four categories: General Safety—Airside;
General Safety—Landside; Fire and Life Safety (Airside and Landside operations); and FOD
Removal and Management. The awards are made to an employee or team who displays
exceptional safety awareness in the day-to-day work environment. The types of safety behaviors
that merit award consideration are those that clearly demonstrate the identification of hazards
and any actions taken to mitigate risks of injury or accident, including, but not limited to, the
following:
• Acting prudently to report and/or prevent a safety hazard or incident.
• Promoting a positive safety culture to increase employee awareness of hazards and workplace
safety.
• Adhering to good housekeeping standards to enhance fire safety by avoiding blocked or
obstructed aisles, exits, or fire extinguishers.
• Preventing or removing hazards such as FOD in or around the Aircraft Operation Area.

Large Hub: Dallas-Fort Worth International Airport, TX


Information on the incident reporting system and ERM was obtained from the risk
management department at Dallas-Fort Worth International Airport (DFW). DFW is in the
middle stages of ERM development. Its efforts toward becoming proficient at incident
reporting stem in part from its being self-insured. Incident reporting will help provide a
better picture of the organizational risks that need to be addressed to reduce insurance costs.
The implementation of the incident reporting system is viewed as less about risk itself than
about discovering what is happening around the airport so that risk and insurance costs can
be better managed.
DFW had participated in the initial 2007 FAA pilot study related to SMS. It has progressed
since then to a point where, if the final rule on SMS becomes regulatory in the future, it would
be ready. Most of the processes are in place. The one last major component, a computerized
incident reporting system, is to be implemented in the near future. The risk management
department has created an internal web portal to accept incident data, replacing the current
system of individual departmental collections. The interface is a commercial program that
DFW can customize as necessary to meet its needs. As part of its customization, DFW has
developed the following as part of its drop-down menus for selecting incidents: Airfield Safety
Event; Injury; Maintenance Request/Concern; Property or Vehicle Damage; Lost Property; and
Other. The data are retained on DFW’s own password-protected servers. The incident reporting
practices are supported by written policies from the governing board (see Appendix F).
Data collected by the system will provide information required under airport certification,
environmental protection, transportation security, and workers’ compensation regulations.
Voluntary data collection will include hazard and near-miss reporting, and any data that have
the potential for incurring insurable losses. Figure 12 describes sources of information intended
to feed into the incident reporting program.

Figure 12.   Sources of input to incident reporting process at DFW Airport. Source: Courtesy of DFW Airport.
Used with permission.


DFW adopts OSHA reporting requirements in its stewardship of the airport. It has over
1,800 employees, with more than 60,000 tenant employees subject to federal or state OSHA
requirements. DFW does provide OSHA training for its employees and contractors.
The risk management department at DFW approaches implementation of the incident
reporting system as a consultant to the other departments. It coordinates data through the
web portal and helps those departments evolve their own reporting items. Risk management
then generates KPI data used exclusively at the executive level, while offering to help each
department meet its KPIs. Data entered into the reporting system generate an e-mail to risk
management.
Effort is being made to change the culture of the organization not only to become safer
but also to help individual departments achieve the overall goals of the organization,
thereby enhancing the safety, efficiency, and effectiveness of operations. For instance, aircraft
rescue tracks response times, and human resources tracks lost time, days off work, attendance,
and work time.
A dashboard of incident data is provided to a cross-functional risk council that reviews the
data. Most incidents occur in the vehicle maintenance area, where incident reporting is
mandatory through the work order system. The work order system is tied to the incident
reporting system to improve incident reporting, even down to the level of recording scratches
incurred by vehicles. A scorecard is maintained for the facility.
The airport’s airfield operations department uses a separate commercial incident reporting
system for compliance with 14 CFR Part 139. The intent is to later integrate the airfield reporting
into the new portal.
DFW currently has a Ramp Operation Safety Team, a Runway Safety Action Team, and a Vehicle
Operations Safety Team that review incident data from their respective areas. Incident data from
the airlines, air cargo, and other tenants on the airport are not shared directly with the airport.
However, DFW strongly believes in partnering with its business tenants. Through collaborative
engagement with its tenants, means are found to mutually advance safety and performance
through the establishment of meetings, discussion, and confidential disclosures. DFW does not
ask tenants to provide internal documents on incidents, as tenants are not inclined to provide
them and it could affect the collaborative environment.

Large Hub: Houston Airport System, TX


The Houston Airport System, which is part of a municipal government, operates three
airports. Information for the synthesis was obtained from the Safety & Emergency Division
at George Bush Intercontinental Airport (IAH). The Safety & Emergency Division houses the
SMS and wildlife hazard reporting system.
On its dashboard, the Houston Airport System maintains the following:
• Passenger injuries by airport.
• Occupational incident costs.
• Fleet incidents.
• Wildlife strikes per operation.
• Preventable versus unpreventable accidents.
• Severity of incidents.
• Year-to-date safety (total cases and 5-year incident rate for each airport plus administration).
• Industry benchmarks (air transportation services, janitorial services, office administration).


The Houston Airport System has an ERM that is part of a larger City of Houston ERM,
which is centered in the Office of the City Controller. KPIs are city-wide rather than airport
specific, though the data can be broken down to individual departments if desired. Key business
risk areas determined for the airport are categorized as compliance, facility management,
information technology, security, communication, financial management, inventory
management, procurement, project management, and revenue management. Within those
areas, airport management is keen to collect data that indicate performance.

Recently installed touchpad screens outside the terminal restrooms provide an example of
newer incident reporting technology and capabilities. The pads allow travelers to send
immediate feedback on restroom conditions by rating their experience with a happy or sad
face. Drop-down menus allow selection from a variety of conditions for quick notification
to terminal maintenance.

The Houston Airport System provides a number of different reporting forms as part of its
SMS. Appendix G is a sample safety policy requiring incident reporting. Appendix H is a sample
employee incident report form. Appendix I is an example of an incident reporting data entry
screen on the internal IAH computer reporting system.

Large Hub: Port of Portland, OR


The Port of Portland has oversight of the Portland International Airport (PDX), two GA air-
ports, a marine navigation and cargo system, and various transportation-related property. The
safety management functions of the Port extend across all its areas of responsibility.
Members of the risk management department, which manages general liability, claims, and
worker safety, were interviewed for this report. The department uses a commercial risk
management information system to track necessary data. The safety department manages the
SMS and safety risk management (SRM) processes, which are in their infancy.

An example of the silo effect is evident at PDX, as the airport has two separate call centers.
The tracking of incident data occurs separately within seven different departments:
(1) Environmental, (2) Police, (3) Fire, (4) Communications Center, (5) Airfield Operations,
(6) Property Management, and (7) Wildlife Management. The challenge for risk management
is to integrate all seven. There is currently no ERM in place.

PDX does have a definition for the term “incident”—an event with an adverse effect on an
asset of the organization. However, the definition is currently under review, as the use of the
words “adverse effect” is thought to convey a negative impact on the organization. As the risk
department is seeking to collect both positive and negative data, it intends to revise the definition
to include all aspects and impacts.

An assigned individual in the safety department culls incident reporting data from the
communications center. As at Seattle, that person reviews the incident data, categorizes
the data, disseminates the data to responsible parties, follows through on resolutions, and
compiles various reports.

The Port of Portland does conduct an organizational safety culture survey. The public effort to
promote safety and incident reporting stems from the organizational motto of “see something,
say something.” The Port uses a number of means to promote hazard and incident reporting,
such as a bumper sticker placed on airport vehicles, posters, and a wallet-sized reminder card as
part of an employee’s call list.


Large Hub: Seattle-Tacoma International Airport, WA


The Port of Seattle operates both a seaport and the airport. The Seattle-Tacoma International
Airport (SEA) has an incident reporting system that has been in place for approximately five
years. An original 2007 SMS pilot study airport, the organization recently formed the Aviation
Safety Management Division, reporting to the director of aviation operations. As noted with
other airports, a champion leads the internal SMS effort and is making strides to advance the
hazard awareness and incident reporting culture.
An established Communications Center, typical of most major airports, receives incident
data of all types. Similar to the Port of Portland, one safety management specialist is
assigned to review all incidents, categorize and disseminate the information to responsible
parties, reach out to and engage tenants, and follow up on resolution. The process is referred
to as "incident forensics and triage" and follows a philosophy of necessary collaboration.
Collaboration is necessary because the airport is at risk from tenant vehicle activity in the
non-movement area and from environmental spills related to fueling. The Port counts
customer satisfaction and safety among its high-performance strategies for meeting
overall goals.
Similar to ATL, SEA uses a commercial management software program for risk tracking and
assessment; another industry standard software program for its maintenance and inventory
needs; and a third internally developed Airfield Incident Reporting System (AIRS) for its
operations. The AIRS started as an Excel spreadsheet and evolved into an IT database system.
The near-miss reporting program is part of the risk management program, but discussion is
continuing on where best to house the hazard and incident reporting system.
The airport seeks to improve its incident reporting capabilities through collaborative training
and relationship efforts with its tenants. It subscribes to a commercial program that provides
training and resources to improve an organization’s overall safety culture.
SEA has developed a scorecard and publishes a composite safety score that includes (1) runway
incursions (separate for vehicle and pedestrian/operational); (2) surface incidents; (3) ground
incidents; (4) wildlife strikes; and (5) Part 139 discrepancies. The components are weighted
30%, 30%, 20%, 15%, and 5%, respectively.
For incidents, SEA uses Ishikawa fishbone analysis. It has found that, as the hazard and
incident reporting culture matures, the number of reported hazards and incidents rises.
This is to be expected as people become more comfortable and confident that a just culture
is in place. The airport also has an extensive closed-circuit television (CCTV) system for
capturing incident data, though it is used primarily after the fact in the review and
investigation process.

Medium Hub: Columbus Regional Airport Authority, OH


The Columbus Regional Airport Authority (CRAA) operates and manages the John Glenn
Columbus International, Rickenbacker International, and Bolton Field airports. Information
for the survey was obtained from the Airport Operations Department. The interview was
conducted with the chief operating officer.
Approximately 10 years ago, the airport authority recognized the need for an ERM, based
on suggestions from its insurance carrier. The ERM was initially developed within the
insurance division, which is under the legal department. Most recently, in 2017, the authority
hired a new director of administration in the Finance Department who had private-sector
risk management experience. The ERM is now being further developed under the Risk
Management Office and is the responsibility of the chief financial officer.
While not part of the SMS pilot studies, the CRAA has pursued SMS implementation in
conjunction with the ERM. The CRAA intends to have an integrated SMS for all of its
enterprises. Currently, incident reports are entered into several separate electronic and manual
databases. These include a commercial electronic logging system for airfield conditions and
incidents, and an online safety hazard reporting system developed in conjunction with SMS
implementation. Incident reports are reviewed by one individual, who forwards any actionable
item to the department responsible for resolution. The gatekeeper monitors and closes out
reports as necessary and compiles the weekly and monthly summary reports.
Mandatory reporting is required for OSHA, 14 CFR Part 139, police and security response,
medical and emergency response, and airfield maintenance. Incident reporting is currently
housed in various departments, each with its own system. The units gathering data are
asset management, public safety, human resources, and airport operations. The goal of CRAA
is to develop a central clearinghouse for incident data. The gatekeeper will be the manager of
workplace safety, who reports to the senior manager of the Emergency Preparedness and Worker
Safety Department. Voluntary incident reports are provided through a workplace suggestion
box, e-mail, website (online safety hazard reporting), written correspondence, telephone, and
social media.
The maintenance-reporting component of the airfield commercial logging system ties the
data to CRAA’s enterprise work order system. The maintenance department’s manual incident
reporting system requires reports of near misses, as well as injuries and damages. A strong safety
culture is evident in employees reporting damage as minor as scrapes and dents to vehicles
and equipment.
For OSHA reporting, CRAA is required to file the Ohio 300P form, which is equivalent to
the OSHA 300 form. The reporting is made to the Public Employer Risk Reduction Program
administered by the Bureau of Workers' Compensation. A dashboard exists for the OSHA data
only; it is developed by the manager of workplace safety and forwarded to senior leadership.
A management-labor safety committee also exists to review incident data. A culture of
safety is evidenced by the existence of a balanced scorecard developed and emphasized by
senior leadership, the work of the safety committee and associated metrics (see Appendix J),
regular weekly group meetings that review and reinforce incident reports, and toolbox talks.

Small Hub: Sarasota-Bradenton Airport Authority, FL


The Sarasota Manatee Airport Authority operates and manages the Sarasota-Bradenton
International Airport (SRQ). The interview was conducted with the operations department
manager.
An ERM is in progress at SRQ. It is managed through the Human Resources Department,
as part of loss prevention and control. There currently is no enterprise-wide incident reporting
system other than the normal requirements of 14 CFR Part 139, police, and emergency response.
However, the culture at SRQ is to provide means for reporting incidents through the use of
a white courtesy phone, volunteer ambassadors, a web portal, police presence, and tenant
employees trained to direct inquiries to the proper personnel. An example of its public access to
reporting incidents was shown in Figure 7 in Chapter 5. The courtesy telephone rings into the
air communications (AIRCOM) center.
The AIRCOM center uses commercial SMS reporting software that allows for customization
of its output. SRQ has adapted the form to include laser incident and drone incident reporting.


Also on the home page are the general AIRCOM daily log, an operations daily log, a wildlife
depredation and observation log, an aircraft incident/accident report, a medical incident report,
a supplemental incident/violation report, and several 14 CFR Part 139 reports. From these forms,
a dashboard is generated that includes the following:
• Active NOTAMs.
• Operations daily log – number of entries/activities.
• Open work orders.
• Wildlife reports and depredation log.
• Form access for medical, aircraft, violation, drone, and laser incidents.
• Weather.
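Each dashboard item above is a simple aggregation over the underlying logs and forms. As an illustrative sketch only (the record layout and field names here are hypothetical, not the data model of SRQ's commercial SMS software), such a summary could be computed from raw log records:

```python
# Hypothetical raw log entries; "kind" and "status" are assumed field names.
records = [
    {"kind": "notam", "status": "active"},
    {"kind": "work_order", "status": "open"},
    {"kind": "work_order", "status": "closed"},
    {"kind": "wildlife", "status": "logged"},
    {"kind": "ops_log", "status": "logged"},
    {"kind": "ops_log", "status": "logged"},
]

def dashboard_summary(records):
    """Aggregate raw log entries into the counts a dashboard panel would display."""
    return {
        "active_notams": sum(1 for r in records
                             if r["kind"] == "notam" and r["status"] == "active"),
        "open_work_orders": sum(1 for r in records
                                if r["kind"] == "work_order" and r["status"] == "open"),
        "ops_log_entries": sum(1 for r in records if r["kind"] == "ops_log"),
        "wildlife_reports": sum(1 for r in records if r["kind"] == "wildlife"),
    }
```

A commercial system would query its database rather than a list in memory, but the dashboard counts reduce to filters and tallies of this kind.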


CHAPTER 10

Findings and Conclusions

Findings
Key findings from the study follow.
• Few airports have formal incident reporting systems that capture incident data not related to
regulatory compliance.
• Incident reporting, data collection, and analysis processes are not fully developed at airports,
especially voluntary reporting capabilities.
• Airports with better-developed incident reporting practices generally had a champion within
the organization supporting and shepherding the processes.
• The breadth and depth of an incident reporting system are determined by each airport, based
on the resources available and the commitment of management to establish a culture of safety.
• The person or department responsible for collecting, reviewing, and analyzing incident
reports varied from airport to airport.
• Safety and key performance indicators, dashboards, and scorecards are being used by only
a few airports.
• Safety and key performance indicators, leading indicators, and hazard identification are not
well understood in the majority of airports, indicating a need for training and education in
those areas.
• It is difficult to benchmark or compare safety and key performance indicators among airports,
as they reflect the individual goals of an airport, and those goals can vary widely given the type,
size, and nature of an airport.

Challenges
Key challenges found in the study and the literature for implementing an incident reporting
system follow.
• Formalizing and finding the resources to implement an incident reporting system.
• Determining what data to collect and monitor at the right time to improve safety and
performance.
• Determining the value of a safety or key performance metric, given the general lack of
adequate data.
• Developing and implementing policies and processes that result in a just and reporting culture.
• Educating and training the airport organization on understanding the value and importance
of incident reporting systems.
• Gaining worker trust and confidence that will result in increased and timely reporting of
incidents.
• Balancing the need for confidential reporting versus freedom of information disclosure.


Conclusions
This synthesis examines current practices for defining, collecting, aggregating, analyzing,
protecting, and reporting airport organizational incident information.
Based on responses from the surveyed airports and from the literature search, there is dis-
parity as to what constitutes an incident, what kind of data are collected, how incident data are
collected, how the data are analyzed, and how the data are used and communicated to others.
Incident reporting was shown to be at different stages of development within the airports
surveyed. The airports that had progressed furthest in developing incident reporting
systems had also benefited from an individual champion who worked to promote and
advance the concepts within the organization.
The extent to which an airport’s incident reporting system had breadth and depth appears to
be related to where in the organization the program resides. Biases were found in the types of
incidents being reported at airports. The biases stem from who has responsibility for data collec-
tion; the degree of knowledge and exposure one has about the different kinds of incidents that
are considered important; and where data reporting and collection are centered in the airport
(i.e., operations, risk, finance, emergency, security, or other departments of an airport). Incident
reporting systems housed in an airport’s operations, emergency, human resources, security, or
maintenance departments tended to reflect a safety-centric approach and were often compart-
mentalized or siloed within those departments. Systems housed in the finance or risk management
departments tended to view incident reporting more holistically, with both a safety-centric and
an enterprise-centric focus.
Incidents subject to regulatory and mandatory reporting requirements are the most common
types collected at airports. While provisions exist for voluntary incident reporting at airports,
few of them have a formalized process to encourage, record, analyze, and respond to near-miss
incidents. Even fewer processes exist for enterprise-wide and strategic goal delineation.
Training, education, and technology gaps exist in the reporting and documentation of
incidents. A gap exists in the knowledge that airport organizations have about how incident
reporting and enterprise risk management can be used at airports. Little training is conducted to
prepare employees to fully participate in incident reporting outside of mandatory or regulatory
reporting. The knowledge gap is reinforced by airports not having well-developed incident
reporting policies, procedures, and practices in place.
Incident reporting will take on greater importance within airport organizations as future
emphasis is placed on SMS and ERM programs. The former will come about as a result of
proposed rulemaking requiring certificated airports to implement an SMS; the latter, as the
application of OMB Circular A-123 is incorporated into grant acceptance and assurance processes.
Those airports that have an ERM program in place generally have a champion within the
organization who is versed in the requirements and metrics of ERM. The same appears to hold
true for an SMS.
The use of a culture survey is not a common practice at airports, but the success of an incident
reporting system relies on a just and reporting culture within an organization. Management can
help minimize the barriers perceived by employees to reporting incidents by communicating
and supporting a just and reporting culture.

Suggestions
1. Airport organizations would benefit from clearly defining any incident reporting terms to be
used, and training employees and stakeholders as to the meaning and purpose of those terms.
The development of airport-specific terms related to incident reporting would also allow for
better comparison and reference among airports.


2. Consideration needs to be given to establishing a national safety hazard/incident database,
similar to the national wildlife hazard database. Doing so would provide another layer
of protection, allowing for data sharing and enhanced identification of trends and issues
nationwide instead of simply at one airport.
3. Incorporate incident data reporting into the orientation, badging, and recurrent training
processes of employees, tenants, users, contractors, and other stakeholders.
4. Incorporate ERM practices within the airport organization.

Further Research
1. The development of a guidebook to assist airports in setting up and implementing an
incident reporting system would be beneficial to the airport industry, especially for GA and
non-hub airports.
2. The development of a standardized culture survey that can be used across airports of all
sizes for benchmarking or other comparison purposes would be beneficial to the airport
industry.

Copyright National Academy of Sciences. All rights reserved.



References

14 CFR 139 Code of Federal Regulations, Title 14-Aeronautics and Space, Part 139 Certification of Airports.
https://www.ecfr.gov/cgi-bin/text-idx?tpl=/ecfrbrowse/Title14/14cfr139_main_02.tpl. Accessed Dec. 9, 2017.
Airports Council International. ACI Survey on Apron Incidents/Accidents. ACI World Headquarters, Switzerland,
2009.
Advisory Circular 150/5200-37A. Introduction to Safety Management Systems (SMS) for Airports. Federal Aviation
Administration, Washington, D.C., July 14, 2016. https://www.faa.gov/regulations_policies/advisory_circulars/
index.cfm/go/document.information/documentID/1020146. Accessed Dec. 9, 2017.
American Bureau of Shipping. Guidance Notes on Safety Culture and Leading Indicators of Safety. Houston, TX.
2014.
American Petroleum Institute. Recommended Practice (RP) 754: Process Safety Indicators for the Refining and
Petrochemical Industries, 2nd ed., 2016. http://www.api.org/oil-and-natural-gas/health-and-safety/process-
safety/process-safety-standards/rp-754. Accessed Dec. 9, 2017.
Associated Builders and Contractors. ABC 2017 Safety Performance Report: Understanding the Impact of Step
Participation on Overall Safety Performance. Washington, D.C., June 10, 2017.
Ayers, S. Safety Assurance: Making Your Safety Data Work for You. Henry Stewart Publications, Journal of Airport
Management, 2018. Available at https://www.henrystewartpublications.com.
Ayres Jr., M., H. Shirazi, S. Cardoso, J. Brown, R. Speir, O. Selezneva, J. Hall, T. Puzin, J. Lafortune, F. Caparroz,
R. Ryan, and E. McCall. ACRP Report 1: Safety Management Systems for Airports, Volume 2: Guidebook.
Transportation Research Board of the National Academies, Washington, D.C., 2009. http://dx.doi.org/
10.17226/14316.
Bannard, D. Y. ACRP Legal Research Digest 19: Legal Issues Related to Developing Safety Management Systems
and Safety Risk Management at U.S. Airports. Transportation Research Board of the National Academies,
Washington, D.C., 2013. http://dx.doi.org/10.17226/22658.
Bellamy, L. J., and V. M. Sol. A Literature Review on Safety Performance Indicators Supporting the Control of Major
Hazards. Report 620089001/2012. National Institute for Public Health and the Environment. Dutch Ministry
of Health, Welfare, and Sport, Netherlands, 2012. http://www.rivm.nl/bibliotheek/rapporten/620089001.pdf.
Bridges, W. G. Get Near Misses Reported, Process Industry Incidents: Investigations Protocols, Case Histories,
Lessons Learned. In Center for Chemical Process Safety International Conference and Workshop. American
Institute of Chemical Engineers, New York, 2000, pp. 379–400.
Blackmore, G. A. Leading Performance Indicators. Presented at the International Association of Drilling
Contractors Seminar, Aberdeen, United Kingdom. June 18, 1997. Cited in Step Change to Safety 2003.
Campbell, G. Measures and Metrics in Corporate Security, 2nd ed. Elsevier Science, Waltham, MA, 2014.
CCPS Process Safety Metrics. Process Safety Leading and Lagging Indicators: You Don’t Improve What You Don’t
Measure. Center for Chemical Process Safety, American Institute of Chemical Engineers, 2011. https://www.
aiche.org/ccps/resources/tools/process-safety-metrics. Accessed Dec. 9, 2017.
Civil Aviation Authority. Airside Safety Management, CAP 642 Is. 2. The Stationery Office, Norwich, United
Kingdom. Sept. 5, 2006.
DHS. Risk Lexicon. U.S. Department of Homeland Security. Washington, D.C., 2010.
Dyreborg, J. The Causal Relation Between Lead and Lag Indicators. Safety Science, Vol. 47, 2009, pp. 474–475.
FAA-H-8083-2. Risk Management Handbook. U.S. Department of Transportation, Washington, D.C. 2009.
FAA Order 8040.4B. Safety Risk Management Policy. Federal Aviation Administration, Washington, D.C.,
May 2017. https://www.faa.gov/regulations_policies/orders_notices/index.cfm/go/document.information/
documentID/1031187. Accessed Dec. 9, 2017.
Freibott, B. Sustainable Safety Management: Incident Management as a Cornerstone for a Successful Safety
Culture. Transactions on the Built Environment, Vol. 134, 2013, p. 26.


GAO-17-63. Enterprise Risk Management: Selected Agencies’ Experiences Illustrate Good Practices in Managing
Risk. U.S. Government Accountability Office, Washington, D.C., 2016.
Global Environmental Management Initiative (GEMI). Measuring Environmental Performance: A Primer
and Survey of Metrics in Use. Global Environmental Management Initiative, Washington, D.C. 1998.
http://gemi.org/Resources/MET_101.pdf. Accessed Dec. 9, 2017.
Hart, C. Stuck on a Plateau: A Common Problem. In Accident Precursor Analysis and Management: Reducing
Technological Risk Through Diligence (J. R. Phimister, V. M. Bier, and H. C. Kunreuther, eds.), National
Academies Press, Washington, D.C., 2004, pp. 147–154.
Hale, A. Why safety performance indicators? Safety Science, Vol. 47, 2009, pp. 479−480.
Hazel, R. A., J. D. Blais, T. J. Browne, and D. M. Benzon. ACRP Report 19A: Resource Guide to Airport
Performance Indicators. Transportation Research Board of the National Academies, Washington, D.C.,
2011. http://dx.doi.org/10.17226/17645.
Herrera, I. A. Proactive Safety Performance Indicators: Resilience Engineering Perspective on Safety Management.
PhD dissertation. Norwegian University of Science and Technology, Trondheim, Norway, 2012.
Hewitt, T. Incident Reporting Systems—The Hidden Story. PhD dissertation, University of Ottawa, Ontario,
Canada, 2011. https://www.researchgate.net/publication/307818671_Incident_Reporting_Systems-
The_Hidden_Story. Accessed Dec. 9, 2017.
Hinze J., S. Thurman, and A. Wehle. Leading Indicators of Construction Safety Performance. Safety Science,
Vol. 51, 2013, pp. 23–28.
Hohnen, P., and P. Hasle. Making Work Environment Auditable—A Critical Case Study of Certified Occupational
Health and Safety Management Systems in Denmark. Safety Science, Vol. 49, No. 7, 2011, pp. 1022–1029.
Hollnagel E. Safety Management—Looking Back or Looking Forward. In Resilience Engineering Perspectives,
Volume 1: Remaining Sensitive to the Possibility of Failure (E. Hollnagel, C. P. Nemeth, and S. Dekker, eds.),
Aldershot, Ashgate, 2008, pp. 63–78.
Hollnagel, E., J. Paries, D. Woods, and J. Wreathall, (ed.). Resilience Engineering in Practice; A Guidebook. Ashgate
Publishing Company, Surrey, England, 2011.
Hrebicek, J., J. Soukopova, M. Stencl, and O. Trenz. Corporate Key Performance Indicators for Environmen-
tal Management and Reporting. Acta Universitatis Agriculturae et Silviculturae Mendelianae Brunensis,
Vol. 59, 2011. pp. 99–108.
ICAO. Safety Management Manual (SMM), 3rd ed. ICAO Doc 9859, International Civil Aviation Organization,
Montreal, Canada, 2013.
ICAO. Findings of a Safety Culture Survey, APRAST/7–WP/12, September 2015.
Infrastructure Management Group, Inc. ACRP Report 19: Developing an Airport Performance-Measurement
System. Transportation Research Board of the National Academies, Washington, D.C., 2010. http://dx.doi.org/
10.17226/14428.
Inouye, J. Practical Guide to Leading Indicators: Metrics, Case Studies & Strategies. Campbell Institute/National
Safety Council, n.d. http://www.nsc.org/CambpellInstituteandAwardDocuments/WP-PracticalGuidetoLI.pdf.
Accessed Dec. 9, 2017.
Kaspers, S., N. Karanikas, A. Roelen, S. Piric, and R. de Boer. Measuring Safety in Aviation. Review of Existing
Aviation Safety Metrics. RAAK PRO Project S10931, Amsterdam University of Applied Sciences, the
Netherlands, May 2017.
Kjellén, U. The Safety Measurement Problem Revisited. Safety Science, Vol. 47, 2009, pp. 486–489.
Landry, J. ACRP Synthesis 58: Safety Reporting Systems at Airports. Transportation Research Board of the National
Academies, Washington, D.C., 2014. http://dx.doi.org/10.17226/22353.
Lea+Elliott, Inc. ACRP Report 37A: Guidebook for Measuring Performance of Automated People Mover Systems at
Airports. Transportation Research Board of the National Academies, Washington, D.C., 2012. http://dx.doi.org/
10.17226/14606.
Lercel, D., R. Steckel, S. Mondello, E. Carr, and M. Patankar. Aviation Safety Management System Return on
Investment Study. Center for Aviation Safety Research, Parks College of Engineering, Aviation, and Technology,
St. Louis University, St. Louis, MO, 2011.
Marsh Risk Consulting. ACRP Report 74: Application of Enterprise Risk Management at Airports. Transportation
Research Board of the National Academies, Washington, D.C., 2012. http://dx.doi.org/10.17226/22744.
Maslen, S., and J. Hayes. Preventing Black Swans: Incident Reporting Systems as Collective Knowledge
Management. Journal of Risk Research, Vol. 19, No. 10, 2016, pp. 1246–1260.
Macrae, C. The Problem with Incident Reporting. BMJ Quality & Safety, Vol. 25, No. 2, 2016, pp. 71–75.
Mathis, T. L. 5 New Metrics to Transform Safety. ProAct Safety, 2009. https://proactsafety.com/articles/5-new-
metrics-to-transform-safety. Accessed Dec. 9, 2017.
McIlravey, B., and P. Ohlhausen, Metrics and Analysis in Security Management. PPM 2000 Inc., Edmonton,
Alberta Canada, 2012. http://www.ohlhausen.com/wp-content/ . . . /Metrics-and-Analysis-in-Security-
Management.pdf. Accessed Dec. 9, 2017.
Muermann, A., and U. Oktem. The Near-Miss Management of Operational Risk. The Journal of Risk Finance,
Vol. 4, No. 1, 2002, pp. 25–36.


Neubauer, K., D. Fleet, and M. Ayres, Jr. ACRP Report 131: A Guidebook for Safety Risk Management for Airports.
Transportation Research Board of the National Academies, Washington, D.C., 2015. http://dx.doi.org/
10.17226/22138.
OECD. Guidance on Developing Safety Performance Indicators Related to Chemical Accident Prevention,
Preparedness and Response: Guidance for Public Authorities and Communities/Public, 2nd ed. Organisation
for Economic Co-operation and Development, Environment, Health and Safety Publications, Series on
Chemical Accidents, No. 19, Paris, France, 2008.
Oktem, U. G., R. Wong, and C. Oktem. Near-Miss Management: Managing the Bottom of the Risk Pyramid.
Risk & Regulation, July 2010.
Okuyama, A., M. Sasaki, and K. Kanda. The Relationship Between Incident Reporting by Nurses and Safety
Management in Hospitals. Quality Management in Health Care, Vol. 19, No. 2, 2010.
Oliver Wyman. Guide to Airport Performance Measures. Airports Council International. Montreal, Canada.
February 2012.
OMB Circular No. A-123. Management’s Responsibility for Enterprise Risk Management and Internal Control.
Association for Federal Enterprise Risk Managers, November 7, 2016.
OSHA. Summary of Work-Related Injuries and Illnesses. Form 300A. Occupational Safety and Health
Administration. Washington, D.C., 2004. https://www.osha.gov/recordkeeping/new-osha300form1-1-04-
FormsOnly.pdf. Accessed Dec. 9, 2017.
Probst, T., and A. Estrada. Accident Under-reporting Among Employees: Testing the Moderating Influence
of Psychological Safety Climate and Supervisor Enforcement of Safety Practices. Accident Analysis and
Prevention, Vol. 42, 2010, pp. 1438–1444.
Rakich, R., C. Wells, and D. Wood. ACRP Synthesis 30: Airport Insurance Coverage and Risk Management Practices.
Transportation Research Board of the National Academies, Washington, D.C., 2011. http://dx.doi.org/
10.17226/14611.
Reason, J. Managing the Risks of Organizational Accidents. Ashgate Publishing Ltd., Hants, England, 1997.
Reiman, T., and C. Rollenhagen. Human and Organizational Biases Affecting the Management of Safety.
Reliability Engineering and System Safety, Vol. 96, 2011, pp. 1263–1274.
Reiman, T., and E. Pietikäinen. Indicators of Safety Culture—Selection and Utilization of Leading Safety
Performance Indicators. Report number: 2010:07 VTT, Technical Research Centre of Finland, March 2010.
Reiman, T., and E. Pietikäinen. Leading Indicators of System Safety—Monitoring and Driving the Organiza-
tional Safety Potential. Safety Science, Vol. 50, 2012, pp. 1993–2000.
Ricondo & Associates, Inc. ACRP Report 62: Airport Apron Management and Control Programs. Transportation
Research Board of the National Academies, Washington, D.C., 2012. http://dx.doi.org/10.17226/22794.
Sabet P. G. P., H. Aadal, M. H. M. Jamshidi, and M. H. K. G. Rad. Application of Domino Theory to Justify
and Prevent Accident Occurrence in Construction Sites. IOSR Journal of Mechanical and Civil Engineering
(IOSR-JMCE), Vol. 6, No. 2, 2013, pp. 72–76.
Safety and Standards Board. Press Release. Safety and Standards Board, United Kingdom, January 25, 2011.
http://www.rssb.co.uk/SiteCollectionDocuments/press/2011/RSSB%20RIDDOR%20Review%20Press%20
Release.pdf. Accessed Dec. 9, 2017.
Safety Management International Collaboration Group. Measuring Safety Performance Guidelines for Service
Providers. July 2013. https://www.skybrary.aero/index.php/Safety_Management_International_Collaboration_
Group_(SM_ICG). Accessed Dec. 9, 2017.
Simmons, B. Moving Your Organisation Beyond Compliance to Safety Management Performance. Baines Simmons
Limited, Surrey, United Kingdom, Nov. 2015.
Step Change in Safety. Leading Performance Indicators: Guidance for Effective Use. Step Change in Safety, 2003.
Stowell, R. All Eyes on the Horizon. The Safety & Health Practitioner, Vol. 31, No. 9, 2013, pp. 44–47.
Thatcher, T. P., ACRP Report 44: A Guidebook for the Preservation of Public-Use Airports. Transportation Research
Board of the National Academies, Washington, D.C., 2011. http://dx.doi.org/10.17226/14547.
Toellner, J. Improving Safety and Health Performance: Identifying and Measuring Leading Indicators.
Professional Safety, Vol. 46, No. 9, 2001, pp. 42–47.
Transit Rail Advisory Committee for Safety. Implementing Safety Management System Principles in Rail Transit
Agencies. 2011. https://www.transit.dot.gov/oversight-policy-areas/transit-rail-advisory-committee-safety.
Accessed Dec. 9, 2017.
UL White Paper. Using Leading and Lagging Safety Indicators to Manage Workplace Health and Safety Risk. 2013.
https://library.ul.com/?document=using-leading-and-lagging-safety-indicators-to-manage-workplace-
health-and-safety-risk&industry=ehs-sustainability. Accessed Dec. 9, 2017.
Volz, E., P. E. Gabriel, H. W. Bergendahl, A. Maity, and S. M. Hahn. Improving Safety Culture Through Incident
Reporting. International Journal of Radiation Oncology, Biology, Physics, Vol. 84, No. 3, 2012, pp. S100–S101.
http://www.redjournal.org/article/S0360-3016(12)01207-2/abstract. Accessed Dec. 9, 2017.
Yip, M., and N. Essary. Enterprise Risk Management "In Action." Presented at the ACI 2010 Insurance and Risk
Management Conference, San Diego, CA, January 2010.


Acronyms and Abbreviations

AC Advisory Circular
ACI Airports Council International
APM Automated People Mover
ARFF Aircraft Rescue and Fire Fighting
ASRS Aviation Safety Reporting System
CFR Code of Federal Regulations
ERM Enterprise Risk Management
FBO Fixed-Base Operator
FOD Foreign Object Debris
FOIA Freedom of Information Act
GA General Aviation
HIRM Hazard Incident & Risk Mitigation
HF Human Factors
ICAO International Civil Aviation Organization
ICS Incident Command System
IMS Incident Management System
IT Information Technology
KPI Key Performance Indicator
NOTAM Notice to Airmen
OECD Organisation for Economic Co-operation and Development
OMB Office of Management and Budget
OSHA Occupational Safety and Health Administration
PDCA Plan-Do-Check-Act
SA Safety Assessment
SPI Safety Performance Indicator
SRA Safety Risk Assessment
SRM Safety Risk Management
SMS Safety Management System
TEM Threat and Error Management


APPENDIX A

Survey Responses

A survey was distributed to the 11 airports participating in the study. The intent of the survey was to gather preliminary data,
after which interviews were conducted with a select few of the respondents. Data contained in brackets [ ] are the number of answers for
the question or the information responding to the question. Some questions allowed multiple responses.

1. ANYWHERE WITHIN YOUR ORGANIZATION …


a. Is there a formal enterprise risk management (ERM) program? Yes [ 5 ] No [ 6 ]
b. Is there a Safety Management System (SMS) oversight program? Yes [ 7 ] No [ 4 ]
c. Is there a formal Hazard Incident & Risk Mitigation (HIRM),
Incident Management System (IMS), or similar program
that collects SPI, KPI, hazard or incident data? Yes [ 7 ] No [ 4 ]
If there is a formal reporting program, is it … Mandatory [ 2 ]
Both mandatory and voluntary [ 6 ]
d. Are OSHA 300/301 injury and illness reports or similar recorded and reported at either the federal, state, or local level?
Yes [ 9 ] No [ 2 ]
2. NO HARM INCIDENTS
a. Are near-misses, close calls, or similar circumstances
that result in no harm reported, recorded, or investigated? Yes [ 11 ] No [ 0 ]
b. Is the general public encouraged or able to report
incidents, hazards, or near-miss situations that do not
result in harm? Yes [ 10 ] Unsure [ 1 ]
c. Are incidents of positive observation, behavior, actions,
or occurrences collected and recorded? Yes [ 6 ] No [ 5 ]


3. COLLECTION
a. Which of the following functional areas and/or departments in your organization (or governing body) collect SPI, KPI,
OSHA, or other hazard-incident data:
Check any that apply DATA COLLECTED
SPI & KPI OSHA & OTHER
Executive/Manager [5] [2]
Finance/Accounting [5] [2]
Risk Manager [7] [8]
Legal Department [5] [4]
Information Technology [3] [3]
Marketing Department [2] [1]
Planning Department [6] [2]
Purchasing/Contract/Vendor Management [3] [2]
Engineering/Construction [6] [6]
Public Works Department [6] [4]
Utility Department [3] [3]
Human Resources/Personnel [4] [5]
Environmental [4] [6]
Communications/Emergency Center [4] [4]
Airfield Operations [8] [6]
Terminal Operations [6] [5]
Ground Transportation (APM/shuttle/train/bus) [6] [6]
Parking Operations [6] [4]
Facility/Building/Equipment/Vehicle/Maintenance [5] [7]
Emergency Response/ARFF [9] [7]
Police/Security [7] [7]
Industrial/Commercial Park [1] [1]

b. For what reasons/purposes does the organization collect SPI or KPI leading/lagging indications or otherwise monitor
organizational incidents?
• For compliance with regulatory requirements, including airport certification, OSHA, and state/local codes [ 3 ].
• To identify trends in accidents and incidents, whether personnel, procedural, or mechanically based [ 4 ].
• To measure performance and organizational sustainability and to benchmark against itself and other organizations [2].
• For reports and insurance purposes [ 2 ].
• To be more proactive in the identification and mitigation of hazardous trends.
• To target mitigation efforts, including enforcement of safety regulations.
• As part of ERM best practices and to identify opportunities for improvement.
• As part of our SMS implementation and to improve safety awareness & culture.
• For general safety and security.
c. Are your SPIs or KPIs (leading and lagging) common across Yes, common [ 4 ]
the whole organization, or different between functional units? No, different [ 2 ]
Both [ 4 ]
Unsure [ 1 ]
d. What different functional units have responsibility for oversight of your incident reporting processes?
Operations [ 6 ] Legal [ 2 ] Engineering & Maintenance [ 1]
Risk [ 5 ] Finance [ 2 ] Energy & Transport [ 1 ]
Safety [ 4 ] Human Resources [ 2 ] Public Safety [ 1 ]
Health & Safety [ 2 ] Executive Board [ 1 ] ARFF [ 1 ]
e. Who is able to submit hazard and incident reports?
Anyone [ 4 ] Risk [ 2 ] Energy & Transport [ 1 ] Public Safety [ 1 ]
Operations [4] Finance [ 2 ] Engineering & Maintenance [ 2 ]
f. What methods are used to receive reports on hazard, incident, or performance data?
Telephone [ 10 ] Verbal [ 8 ] Suggestion box [ 5 ]
Website [ 10 ] Written notice [ 8 ] Web or Phone App [ 2 ]
E-mail [ 9 ] Social media [ 6 ]


g. Which airport stakeholders or tenants collect incident-hazard-SPI-KPI data on your airport?
Check all that apply DATA COLLECTED
SPI & KPI OSHA & OTHER
Airlines [ 10 ] [ 11 ]
Cargo Operator [6] [8]
Military [2] [1]
Charter Aircraft Operator [3] [4]
Corporate Aircraft Operator [3] [4]
FBO/Ground Handler/Fuel Suppliers [7] [ 10 ]
Flight School or other Specialized Operation [2] [3]
Aircraft Maintenance/Repair Organization [3] [4]
Concessionaires [6] [9]
Caterers and other Vendors [3] [6]
Ground transportation operators [4] [7]
Terminal operators [2] [3]

4. AGGREGATION, EVALUATION, AND ANALYSIS


a. Are leading indicators used for assessing the attainment of safety or performance objectives?
Yes [ 7 ] No [ 2 ] Unsure [ 2 ]
If Yes, please list what indicators or metrics your organization uses to measure the safety of your organization.

• Safety Observations and Compliance adherence.


• Safety Training Attendance.
• Completion rate of Hazard Identification Corrective Action forms.
• Wellness Participation.
• Amount of safety training.
• Amount of near-miss reporting.
• Number of runway incursions.
• Number of surface incidents.
• Incident rates.
• Number of safety audits.
• Safety committee attendance.
• Implementation rate of safety recommendations/suggestions.
• Number of safety training activities.
• 14 CFR Part 139 and other safety-related data.
b. How does the organization collect, record, and track hazards and incidents?
Check all that apply Excel spreadsheet or similar [ 8 ]
Custom software program [ 5 ]
Manual written system [ 3 ]
Off-the-shelf software program [ 2 ]
Other – [ MS Access ]
c. In reviewing incident data or reports, how is the data reviewed/assessed/investigated?
Check all that apply By one designated individual [ 7 ]
By a small group (2–4 people) [ 8 ]
By a large group (>5 people) [ 4 ]
By outside consultant/professional [ 0 ]
d. Who is/are the individuals involved in the review or assessment of incidents?
Safety Manager [ 3 ] OPS [Operations] Certification Manager [ 2 ]
Risk Manager [ 3 ] Emergency Preparedness Manager [ 1 ]
Department Manager [ 3 ] Human Resource Manager [ 1 ]
SMS Manager [ 1 ] Safety Health Manager [ 1 ]
Vice Presidents Ramp Manager [1]
COO [ 1 ] Vehicle Accident Review Board [ 1 ]
e. How quickly are incident reports reviewed? Within 1 hour [1] Within 3 days [ 3 ]
Within 6 hours [ 2 ] Within 1 month [ 1 ]
Within 24 hours [ 7 ]


f. Are analyses used to investigate reported hazards or incidents, such as root cause, failure mode and effect, bowtie, fault
tree, or similar analyses? Yes [ 9 ] No [ 2 ]
g. Is a dashboard or other benchmark method used to display incident data? Yes [ 7 ] No [ 4 ]
h. Are safety or risk audits regularly performed? Yes [ 9 ] No [ 1 ] Unsure [ 1 ]

5. MANAGEMENT/ORGANIZATION CULTURE & CLIMATE


a. How is the term “incident” defined within the airport organization?
• Any mishap, behavior, error, deviation, or action that has caused or could cause a hazard, injury, or accident.
• An event with an adverse effect on an asset of the organization (person, property, environment, financial).
• No definition available.
• Anything occurring outside of standard.
• An occurrence which affects safety which may or may not involve damage or injury.
• An event of importance that involves uncertainty pertaining to the safety and security of the airport, that caused, or
could have caused, injury or property damage, and that provides an opportunity for improvement in the way of
preventative measures and/or exploiting opportunities.
• An event requiring response.
• No formal definition, generally is any unusual occurrence with a potentially significant adverse impact to safety,
finances, or reputation of the Authority.
• An occurrence which may lead to a hazard and/or loss.
b. Do other functional areas and/or departments define “incident”
differently than described in the previous question? Yes [ 3 ] No [ 5 ] Unsure [ 3 ]
c. How do managers, department heads, or your governing body evaluate or know where they currently stand in terms of
controlling hazards and risks?
Please describe [ see OPEN RESPONSE below ]
d. To what extent do you believe KPI-SPI-Incident-Hazard Reporting is well understood in your organization (check the one
best answer)?
Understood by most throughout the organization [3]
Understood by a few functional areas/departments [ 6 ]
Not well understood at all [2]
e. Does your organization train employees on SPI-KPI-Incident-
Hazard Reporting or similar metrics? Yes [ 5 ] No [ 5 ] Unsure [ 1 ]
f. What method of training does the airport provide for familiarization with incident reporting and SPI, KPI, lead/lag, or other
indicators [check all that apply]
On-the-job or Mentoring [8]
In-house computer-based training [3]
Seminar/Workshops [2]
Self-review of manual or similar [2]
Classroom instruction [1]
Online or web-based computer-based training [ 1 ]
OTHER - Management staff meetings, new employee orientation
g. Does your organization train employees of vendors/concessions/
contractors or require them to be trained on KPI-SPI-Incident-Hazard
reporting and similar metrics? Yes [ 3 ] No [ 8 ]
h. Does your organization perform or assess organizational surveys
on safety climate or culture? Yes [ 5 ] No [ 5 ] Unsure [ 1 ]
i. Can you give examples of management support and commitment in the development and use of a hazard and incident
management reporting system and resultant safety culture?
Please describe


6. DISSEMINATION
a. Do you share your incident reporting efforts or metric results
with other stakeholders? Yes [ 8 ] No [ 3 ]
b. With whom do you report or share incident data?
[7] Internal organizational departments
[6] Federal regulatory agency (FAA, OSHA, EPA, etc.)
[6] Employee
[5] Insurers/Risk managers
[4] State regulatory agency or similar (DOT, OSH, EPA, etc.)
[3] Safety professionals
[2] Local government or policymakers
[2] Trade or specialty organizations (ACI, AAAE, RIMS, etc.)
[1] Local emergency organizations
[0] Vendors/Contractors/Concessionaires
[0] General public
[0] News media
c. Do other organizations, tenants, or business entities share
incident-hazard-SPI-KPI data with you? Yes [ 4 ] No [ 7 ]
If Yes, who are they? [ Tenants, Business stakeholders, Airlines, Business partners ]
d. What precautions do you take, if any, to protect the data collected or to protect the person(s) reporting a hazard or incident?
Redacting [ 2 ] HIPAA not shared [ 2 ] Need to know [1]
Legal & HR review [ 1 ] Secure network [ 1 ]

APPENDIX B

Terms and Definitions

The following terms and definitions are presented to illustrate the variability within the industry as to what a particular
term may mean. They are also presented as alternative selections for an airport organization to choose from that would
best meet the application at its airport.

Accident: An unplanned event or series of events that results in death, injury, or damage to, or loss of, equipment or property.
(FAA Order 8040.4B. https://www.faa.gov/documentLibrary/media/Order/FAA_Order_8040.4B.pdf).
At-risk behaviors: Any activity that is not consistent with safety programs or training. (WIT Transactions on The Built
Environment, Vol 134, p. 261. https://www.witpress.com/elibrary/wit-transactions-on-the-built-environment).
Audit: A systematic and documented review of the effectiveness of implementation of processes, programs, and procedures based on
general process criteria set by the organization. (The Anglo American Safety Way. http://www.angloamerican.com/~/media/Files/A/
Anglo-American-PLC-V2/documents/approach-and-policies/safety-and-health/the-anglo-american-safety-way-final.pdf).
Business continuity (BC): The capability of the organization to continue delivery of products or services at acceptable predefined
levels following a disruptive incident. (ISO 22301:2012. http://www.thebci.org/index.php/resources/what-is-business-continuity).
Business continuity management (BCM): A holistic management process that identifies potential threats to an organization and
the impacts to business operations those threats, if realized, might cause, and which provides a framework for building organizational
resilience with the capability of an effective response that safeguards the interests of its key stakeholders, reputation, brand, and
value-creating activities. (ISO 22301:2012. http://www.thebci.org/index.php/resources/what-is-business-continuity).
Climate: Employees’ perceptions of workplace events and the expectations that the organization has of workplace behaviors,
attitudes, and norms. [Ostroff, C., Kinicki, A. J., & Tamkins, M. M. 2003. Organizational culture and climate. In W. C. Borman, D.
R. Ilgen, & R. J. Klimoski (Eds.), Handbook of Psychology: Industrial and Organizational Psychology, Vol. 12 (pp. 565–593).
Hoboken: Wiley].
Climate: The measurable components of the safety culture such as the management behaviors, the safety systems, and the employee
perceptions of safety. (Guldenmund F., The nature of safety culture: a review of theory and research. Saf Sci. 2000;34:215–257.)
Climate: Describes employees’ perceptions (as opposed to attitudes and beliefs) about risk and safety, providing a “snapshot” of the
current state of safety. (Mearns, K., Yule, S., 2009. The role of national culture in determining safety performance. Challenges for the
global oil and gas industry. Safety Science 47, 777–785.)
Climate: An “artifact” of the deeper cultural level and the visible behavior of its members. Artifacts include organizational
processes which render certain behaviors routine. The term “safety climate” is often heard alongside that of culture. (Bellamy, L.J.
and V.M. Sol. A Literature Review on Safety Performance Indicators Supporting the Control of Major Hazards. National Institute
for Public Health and the Environment (RIVM), Dutch Ministry of Health, Welfare, and Sport, Netherlands. RIVM Report 620089001/2012).
Consequence: The negative effect of an event, incident, or occurrence. (GAO-16-632 Aviation Security.
https://www.gao.gov/assets/680/677586.pdf).
Culture: The ability and willingness of the organization to understand safety, hazards and means of preventing them, as well as ability
and willingness to act safely, prevent hazards from actualizing, and promote safety. (Reiman, T., E. Pietikäinen. “Leading indicators
of system safety – Monitoring and driving the organizational safety potential.” Safety Science, Vol. 50, 2012. pp. 1993–2000).
Culture: The product of individual and group values, attitudes, perceptions, competencies, and patterns of behavior that can
determine the commitment to and the style and proficiency of an organization’s safety management system. [The Advisory Committee
on the Safety of Nuclear Installations (ACSNI). www.hse.gov.uk/humanfactors/topics/common4.pdf].


Culture: The enduring value and priority placed on worker and public safety by everyone in every group at every level of an
organization. It refers to the extent to which individuals and groups will commit to personal responsibility for safety; act to preserve,
enhance, and communicate safety concerns; strive to actively learn, adapt, and modify (both individual and organizational) behavior
based on lessons learned from mistakes; and be rewarded in a manner consistent with these values. [Wiegmann, D.A.; Zhang, H.;
von Thaden, T.; Sharma, G.; Mitchell, A. Safety culture: A review. (Technical Report no. ARL-02-3/FAA-02-2). Atlantic City,
New Jersey: FAA, 2002.]
Culture: The values, beliefs, and norms that govern how people act and behave with respect to safety. (Stolzer, A.J.; Halford, C.D.;
Goglia, J.J. Implementing safety management systems in aviation. Burlington, Vermont: Ashgate, 2011).
Culture: The ability and willingness of the organization to understand safety, hazards and means of preventing them, as well as
ability and willingness to act safely, prevent hazards from actualizing, and promote safety. Safety culture refers to a dynamic and
adaptive state. It can be viewed as a multilevel phenomenon of organizational dimensions, social processes, and psychological states
of the personnel. [Reiman, T. & Oedewald, P. (2009). Evaluating safety critical organizations. Focus on the nuclear industry.
Swedish Radiation Safety Authority, Research Report 2009:12].
Culture: The set of enduring values and attitudes regarding safety issues, shared among the members of the group. It refers to the
extent to which the members of the group are positively committed to safety; consistently evaluate safety-related behavior; are
willing to communicate safety issues; are aware of the known risks and unknown hazards induced by their activities; are willing and
able to adapt themselves when facing safety issues; and are continuously behaving so as to preserve and enhance safety. [Montijn, C.
and Balk, A.D. (2010), ASC-IT Aviation Safety Culture Inquiry Tool: Development from theory to practical tool, NLR Technical
Report 2009-241, January 2010].
Enterprise risk management (ERM): A process, effected by an entity’s board of directors, management, and other personnel,
applied in a strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage
risks to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives. [Committee of
Sponsoring Organizations of the Treadway Commission (COSO), 2004. https://erm.ncsu.edu/library/article/coso-erm-framework].
Enterprise risk management: A discipline that addresses the full spectrum of an organization’s risks, including challenges and
opportunities, and integrates them into an enterprise-wide, strategically aligned portfolio view. ERM contributes to improved
decision-making and supports the achievement of an organization’s mission, goals, and objectives. (OMB Circular No. A-123:
Management’s Responsibility for Enterprise Risk Management and Internal Control. Association for Federal Enterprise Risk
Managers, November 7, 2016. https://www.whitehouse.gov/sites/whitehouse.gov/files/omb/.../2016/m-16-17.pdf).
Enterprise risk management: A holistic approach and process to identify, prioritize, mitigate, manage, and monitor current and
emerging risks in an integrated way across the breadth of the enterprise (ACRP Report 74: Application of Enterprise Risk
Management at Airports, 2012. URL: http://www.trb.org/Publications/Blurbs/167515.aspx).
Enterprise risk management: A systematic approach to risk management across the entire organization for identifying, assessing,
deciding on responses to, and reporting on opportunities and threats that affect the achievement of its objectives. (Institute of Internal
Auditors, 2009. http://www.uvm.edu/~erm/RiskAssessmentGuide.pdf).
Enterprise risk management framework: A series of key components that collectively provide the ERM principles, concepts,
processes, terminology, and direction for the delivery of effective ERM to enable the achievement of key strategic/operational
objectives. (ACRP Report 74: Application of Enterprise Risk Management at Airports, 2012. URL:
http://www.trb.org/Publications/Blurbs/167515.aspx).
Error: A generic term to encompass all those occasions in which a planned sequence of mental or physical activities fails to achieve
its intended outcome, and when these failures cannot be attributed to the intervention of some chance agency. (Reason, J. Human
Error. Cambridge University Press, 1990.)
Error: Includes two types of failures. Either the plan developed by the operator is adequate, but the actions deviate from the plan; or
the actions may follow the plan, but the plan is not appropriate for achieving its desired ends. The first type of failure is considered a
slip or a lapse and is a failure in executing a plan, while the second type of failure is considered a mistake and is a failure in
formulating a plan. (Maurino, et al. Beyond Aviation Human Factors. Avebury Aviation, Hants, UK 1995).
Event: The occurrence or change of a particular set of circumstances. (ANSI-ASSE Z690.1 2011, http://www.asse.org/ansi/
asse-z690-1-2011-vocabulary-for-risk-management-national-adoption-of-iso-guide-73-2009-/)
Event: Occurrence of a particular set of circumstances. The event can be certain or uncertain. The event can be a single occurrence
or a series of occurrences. (URL: https://www.enisa.europa.eu/topics/threat-risk-management/risk-management/current-risk/risk-
management-inventory/glossary).
Fault tree analysis (FTA): Used to examine an extremely complex system involving various targets such as skills, quality,
equipment, facility, operators, finance, management, reputation, or property within the domain of operation. [Malasky, S. W. (1982).
System safety: Technology and application (2nd ed.). New York: Garland STPM Press].
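The gate logic behind a fault tree can be sketched briefly: an AND gate requires every input event to occur (probabilities multiply, assuming independence), while an OR gate fires if any input occurs. The sketch below is illustrative only; the event names and probabilities are hypothetical and not drawn from this synthesis.

```python
# Minimal fault-tree evaluation with independent basic events (illustrative
# sketch; event names and probability values below are hypothetical).
from math import prod

def and_gate(*probs: float) -> float:
    """AND gate: every input event must occur, so probabilities multiply."""
    return prod(probs)

def or_gate(*probs: float) -> float:
    """OR gate: any input event suffices; P = 1 - product of (1 - p_i)."""
    return 1 - prod(1 - p for p in probs)

# Hypothetical top event "runway lighting outage": primary power fails
# AND the backup generator fails, OR the control circuit fails.
p_top = or_gate(and_gate(0.01, 0.05), 0.002)
print(round(p_top, 6))  # ≈ 0.002499
```

In practice, fault tree software also handles shared (non-independent) basic events and minimal cut sets; this sketch only shows the basic gate arithmetic the definition refers to.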
Hazard: A condition, object or activity with the potential for causing damage, loss, or injury. (FAA AC 150/5200-37, 2007.
https://www.faa.gov/documentLibrary/media/.../150-5200-37/150_5200_37.pdf).


Hazard: The potential for harm (physical or mental). In practical terms, a hazard often is associated with a condition or activity that,
if left uncontrolled, can result in an injury or illness. (OSHA. https://www.osha.gov/dte/grant_materials/fy10/sh...10/
hazard_id_facilitatorguide.pdf).
Hazard: Any existing or potential condition that can lead to injury, illness, or death to people; damage to or loss of a system,
equipment, or property; or damage to the environment. A hazard is a condition that is a prerequisite to an accident or incident.
(FAA AC 150/5200-37, 2007. https://www.faa.gov/documentLibrary/media/.../150-5200-37/150_5200_37.pdf).
Hazard: A source of potential harm or a situation with a potential to cause loss. (ACRP Report 74: Application of Enterprise Risk
Management at Airports, 2012. URL: http://www.trb.org/Publications/Blurbs/167515.aspx).
Hazard: A present condition, event, object, or circumstance that could lead to or contribute to an unplanned or undesired event such
as an accident. [Risk Management Handbook (FAA-H-8083-2) 2009.
https://www.faa.gov/regulations_policies/handbooks_manuals/.../faa-h-8083-2.pdf].
Hazard: An undesirable condition or situation that may lead to unsafe event(s) or occurrence(s). Sometimes the term “threat” (e.g.,
TEM) is used instead of “hazard.” [ICAO. Doc 9859 - Safety Management Manual (SMM). International Civil Aviation
Organization. 3rd ed., pp. 1–284. Montreal, Canada. 2013.]
Hazard: An inherent property of a substance, agent, source of energy or situation having the potential of causing undesirable
consequences. (OECD. 2008. Guidance on Developing Safety Performance Indicators Related to Chemical Accident Prevention,
Preparedness and Response: Guidance for Industry. https://www.oecd.org/env/ehs/chemical-accidents/48356891.pdf).
Hazard: A hazard is any existing or potential condition that can lead to injury, illness, or death to people; damage to or loss of a
system, equipment, or property; or damage to the environment. A hazard is a condition that might cause (is a prerequisite to) an
accident or incident. (FAA, 2015.
https://www.faa.gov/regulations_policies/advisory_circulars/index.cfm/go/document.information/documentID/1026670).
Incident: An occurrence other than an accident that affects or could affect the safety of operations. (FAA Order 8040.4B.
https://www.faa.gov/documentLibrary/media/Order/FAA_Order_8040.4B.pdf).
Incident: An event of sufficient severity to be reported; there is no general agreement on what counts as “sufficient.” (Reason, J.T., The
Human Contribution: Unsafe Acts, Accidents and Heroic Recoveries. Ashgate. 2008).
Incident: An event that could lead to loss of, or disruption to, an organization’s operations, services, or functions. (Glossary of
Terms, The Business Continuity Institute Good Practice Guidelines 2010 Global Edition. URL: http://www.thebci.org/glossary.pdf).
Incident: Work-related events or emergencies (including accidents which give rise to injury, ill health or fatality) that have resulted
in, or have the potential to result in (i.e., a near hit), adverse consequences to people, the environment, property, reputation or a
combination of these. Significant deviations from standard operating procedures are also classed as incidents. Ongoing conditions
that have the potential to result in adverse consequences are considered to be incidents. (The Anglo American Safety Way.
http://www.angloamerican.com/~/media/Files/A/Anglo-American-PLC-V2/documents/approach-and-policies/safety-and-health/
the-anglo-american-safety-way-final.pdf).
Incident (serious): An incident involving circumstances indicating that there was a high probability of an accident. (ICAO, Annex
13. 2010, pp. 21–22. https://www.icao.int/safety/airnavigation/AIG/Pages/Documents.aspx).
Incident management (IcM): A term describing the activities of an organization to identify, analyze, and correct hazards to prevent
a future re-occurrence. These incidents within a structured organization are normally dealt with by either an incident response team
(IRT), or an incident management team (IMT). Similar to an IRT or IMT is an Incident Command System (ICS). (URL:
https://en.wikipedia.org/wiki/Incident_management).
Indicator: A statistic that is used to quantify a current or past observable condition or to provide future insight into changes affecting
the condition. [Øien, K, Massaiu, S, Tinmannsvik, R.K., Størseth, F. (2010). Proceedings of the 10th International Probabilistic
Safety Assessment and Management Conference (PSAM), Seattle, USA. http://www.proceedings.com/16555.html].
Indicator: Observable measures that provide insights into a concept—safety—that is difficult to measure directly. This Guidance
includes two types of safety performance indicators: “outcome indicators” and “activities indicators.” (OECD. 2008. Guidance on
Developing Safety Performance Indicators Related to Chemical Accident Prevention, Preparedness and Response: Guidance for
Industry. https://www.oecd.org/env/ehs/chemical-accidents/48356891.pdf).
Indicator: An algorithm or formula that expresses the qualitative or quantitative relationship between two or more variables and that
serves to measure to what extent the target has been achieved. (ICAO Acceptable Level of Safety Performance ALoSP.
https://www.icao.int/SAM/Documents/2017.../Módulo%209%20-%20ALoSP_en.pdf).
Just culture: A culture that balances the need for discipline when warranted, with rewards when earned. People clearly understand
acceptable and unacceptable behaviors. There’s a sense of fairness in how business is conducted for everyone. In a Just Culture,
those in authority do not “shoot the messenger” for bringing up safety concerns. (NASA. https://sma.nasa.gov/sma-disciplines/
safety-culture).


Just culture: A culture in which front-line operators or other persons are not punished for actions, omissions or decisions taken by
them that are commensurate with their experience and training, but in which gross negligence, willful violations and destructive acts
are not tolerated, and in which personnel are encouraged to report such information for improvements of the organization’s safety
(performance). (European Commission Regulation No. 376/2014. http://www.eurocontrol.int/articles/just-culture).
Key performance indicators (KPIs): A set of measures focusing on those aspects of organizational performance that are the most
critical for the current and future success of the organization. [Parmenter, David (2007) “Key performance indicators: developing,
implementing, and using winning KPIs,” New Jersey: John Wiley & Sons, Inc.].
Key performance indicators (KPIs): Measures that monitor the performance of key result areas of business activities. KPIs
represent a set of measures focusing on those aspects of organizational performance that are the most critical for the success of an
organization. (Bellamy, L.J. and V.M. Sol. A Literature Review on Safety Performance Indicators Supporting the Control of
Major Hazards. National Institute for Public Health and the Environment (RIVM), Dutch Ministry of Health, Welfare, and Sport,
Netherlands. RIVM Report 620089001/2012).
Key performance indicators (KPIs): A metric that embeds performance targets so organizations can chart progress toward goals.
[The Data Warehousing Institute (TDWI) Deploying Dashboards and Scorecards, July 2006 Wayne W. Eckerson ©2006 1105
Media, Inc., based in Chatsworth, CA].
Lagging indicator: Metrics that measure safety events that have already occurred including those unwanted safety events you are
trying to prevent. (Safety Management International Collaboration Group SMICG. https://www.skybrary.aero/bookshelf/
books/2395.pdf).
Lagging indicators: Also known as outcome, trailing, downstream, and after-the-fact indicators. (Broadbent, D. and Arnold, I.
2011. Leading the Way Towards Optimal Safety and Health Performance: Lagging and Leading Indicator Characteristics. London:
ICMM).
Lagging indicators: Metrics that measure the extent of harm that has occurred—past performance. Reactive, tells you whether you
have achieved a desired result (or when a desired safety result has failed) and provide historical information about health and safety
performance (OECD. 2008. Guidance on Developing Safety Performance Indicators Related to Chemical Accident Prevention,
Preparedness and Response: Guidance for Industry. https://www.oecd.org/env/ehs/chemical-accidents/48356891.pdf).
Lagging indicators: Measures of a system taken after events to assess outcomes and occurrences, such as accident and injury rates,
operational incidents, and dollar costs. (Guidance Notes on Safety Culture and Leading Indicators of Safety American Bureau of
Shipping, February 2014. https://ww2.eagle.org/content/dam/...safety/leading_indicators_gn_e-feb14.pdf).
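One widely used lagging indicator of the "accident and injury rates" kind cited above is the OSHA Total Recordable Incident Rate, which normalizes recordable incidents to 200,000 hours worked (100 full-time employees for a year). The figures in this sketch are hypothetical, offered only to illustrate the computation.

```python
# Illustrative sketch of a common lagging indicator: the OSHA Total
# Recordable Incident Rate (TRIR). The example numbers are hypothetical.
def trir(recordable_incidents: int, hours_worked: float) -> float:
    """Return incidents per 200,000 hours worked (100 FTE-years)."""
    if hours_worked <= 0:
        raise ValueError("hours_worked must be positive")
    return recordable_incidents * 200_000 / hours_worked

# Example: 3 recordable incidents over 400,000 hours worked.
print(trir(3, 400_000))  # 1.5
```

Because TRIR measures harm that has already occurred, it is reactive in exactly the sense the definitions above describe; leading indicators (next entries) try to measure conditions before the harm.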
Leading indicators: Metrics that provide information on the current situation that may affect future performance (Safety
Management International Collaboration Group SMICG. https://www.skybrary.aero/bookshelf/books/2395.pdf).
Leading indicators: Conditions, events or measures that precede an undesirable event and that have some value in predicting the
arrival of the event, whether it is an accident, incident, near miss or undesirable safety state. Leading indicators are associated with
proactive activities that identify hazards and assess, eliminate, minimize and control risk. [Grabowski, M., Ayyalasomayajula, P.,
Merrick, J., Harrald, J. R., & Roberts, K. (2007). Leading indicators of safety in virtual organizations. Safety Science, 45(10),
1013-1043].
Leading indicators: Provide information about developing or changing conditions and factors that tend to influence future human
performance. Leading indicators are viewed as measures or signs of changing vulnerabilities. Effective leading indicators provide a
basis for predicting or forecasting situations in which the potential exists for a change in human performance, either for better or
worse. (Electric Power Research Institute EPRI. https://www.epri.com/#/pages/product/1003033/).
Leading indicators: Conditions, events, and sequences that precede and lead up to accidents. (National Academy of Engineering
(NAE) (2004). Accident Precursor Analysis and Management: Reducing Technological Risk Through Diligence. Washington, D.C.:
The National Academies Press.)
Leading indicators: Conditions, events or measures that precede an undesirable event, and have some value in predicting the arrival
of the event, whether it is an accident, incident, near miss, or undesirable safety state. (Toellner, J., “Improving safety and health
performance. Identifying and measuring leading indicators.” Professional Safety Vol. 46, No. 9, 2001. pp. 42–47).
Leading indicators: Safety metrics that are associated with, and precede, an undesirable/unexpected consequence such as an
operational incident, near miss or personal injury. (Human Factors in Ship Design and Operation, 16–17 November 2011, London.
http://www.rina.org.uk/hres/human%20factors%20web1.pdf).
Leading indicators: The factors that provide measures of the performance of key work processes, culture and behaviors before an
unwanted outcome occurs. (Dyreborg, J. 2009. The causal relation between lead and lag indicators. Safety Science, 47, 474-475).
Metric: A system of measurement used to quantify safety performance for outcome and/or activities indicators. (OECD. 2008.
Guidance on Developing Safety Performance Indicators Related to Chemical Accident Prevention, Preparedness and Response:
Guidance for Industry. https://www.oecd.org/env/ehs/chemical-accidents/48356891.pdf).


Model: A representation of something else, of a phenomenon or event such as an accident, or of a system such as an organization.
(Revisiting the “Swiss Cheese” Model of Accidents. EEC Note No. 13/06. 2006. European Organisation for the Safety of Air
Navigation. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.80.5369&rep=rep1&type=pdf).
Model: Two types: (1) Retrospective model is the basis for explaining or understanding something, (2) Prospective model is the
basis for predicting something, including measurements of present states as an indicator of possible future states. (Herrera, I.A. 2012.
Proactive safety performance indicators. Doctoral thesis Norwegian University of Science and Technology 2012:151. Trondheim,
Norway. https://brage.bibsys.no/xmlui/handle/11250/240805).
Near hit: Any occurrence or situation which had the potential for adverse consequences to people, the environment,
property or reputation, or a combination of these. (The Anglo American Safety Way. http://www.angloamerican.com/~/media/Files/A/
Anglo-American-PLC-V2/documents/approach-and-policies/safety-and-health/the-anglo-american-safety-way-final.pdf).
Near miss: An incident in which no property was damaged and no personal injury was sustained, but where, given a slight shift in
time or position, damage or injury easily could have occurred. (URL: http://www.coopertsmith.com/nearmiss-explained).
Near miss: An opportunity to improve reliability, safety, security, health, and the environment of an operation based on an abnormal
event having the potential for a more serious consequence. [Phimister JR, Oktem U, Kleindorfer PR, Kunreuther H. Near-miss
incident management in the chemical process industry. Risk Anal. 23 (2003); 445-459.]
Near miss: An event, a sequence of events, or an observation of unusual occurrences that possesses the potential of improving a
system’s operability by reducing the risk of upsets, some of which could eventually cause serious damage. (Mürmann, A. and
U. Oktem. The Near-Miss Management of Operational Risk, July 23, 2002. https://riskcenter.wharton.upenn.edu/wp-content/uploads/.../02-02-
MO-published.pdf).
Near miss: An undesired event that under slightly different circumstances could have resulted in harm to people; damage to
property, equipment or environment; or loss of process. (CCPS Process Safety Metrics “You don’t improve what you don’t measure”
Center for Chemical Process Safety (CCPS) New York, p. 35. https://www.aiche.org/ccps/resources/tools/process-safety-metrics).
Near miss: A sequence of events and/or conditions that could have resulted in loss. This loss was prevented only by a fortuitous
break in the chain of events and/or conditions. The potential loss could be human injury, environmental damage, or negative business
impact (e.g., repair or replacement costs, scheduling delays, contract violations, loss of reputation). [International Maritime
Organization (IMO). http://www.vta.ee/public/MSC-MEPC.7-Circ.7_-_Guidance_On_Near-Miss_Reporting.pdf].
Near miss: An incident in which no property was damaged and no personal injury was sustained, but where, given a slight shift in time or
position, damage or injury easily could have occurred. [OSHA National Safety Council (NSC).
www.nsc.org/WorkplaceTrainingDocuments/Near-Miss-Reporting-Systems.pdf].
Occurrence: The term used to embrace all events which have, or could have, significance in the context of aviation safety, ranging
from accidents and serious incidents, through incidents or events that must be reported, to occurrences of lesser severity which, in the
opinion of the reporter, could have safety significance. (URL: http://www.skybrary.aero/index.php/Safety_Occurrence_Reporting).
Operational drifts: Deviations from the correct state when everything is running as it should, all procedures are followed and the
system behaves under the proposed overall condition. (The Role of Taxonomies for the Safety Indicators definition, Plos, V., V.
NEMEC, S. SZABO, Proceedings of the 20th World Multi-Conference on Systemics, Cybernetics and Informatics. WMSCI 2016.
http://www.iiis.org/CDs2016/CD2016Summer/papers/RA563CO.pdf).
Operational risk: The risk of loss resulting from inadequate or failed internal processes, people and systems, or from external
events. The definition includes legal risk, which is the risk of loss resulting from failure to comply with laws as well as prudent
ethical standards and contractual obligations. It also includes the exposure to litigation from all aspects of an institution’s activities.
The definition does not include strategic or reputational risks. (Supervisory Guidance on Operational Risk Advanced Measurement
Approaches for Regulatory Capital, July 2, 2003. https://www.fdic.gov/regulations/laws/publiccomments/basel/oprisk.pdf).
Operational risk: The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events
[Basel Committee on Banking Supervision (BCBS), “The New Basel Capital Accord” 2001. The Near-Miss Management of
Operational Risk. https://www.bis.org/publ/bcbs196.pdf].
Precursor: Any event or group of events that must occur for an accident to occur in a given scenario. They are conditions, events,
and sequences that precede and lead up to accidents. (The National Academy of Engineering.
https://www.nap.edu/read/11061/chapter/12).
Precursor: An event or situation that, if a small set of behaviors or conditions had been slightly different, would have led to a
consequential adverse event. (National Academies of Sciences. https://www.nap.edu/read/11061/chapter/6).
Precursor: An anomaly that signals the potential for more severe consequences that may occur in the future, due to causes that are
discernible from its occurrence today. (NASA Accident Precursor Analysis Handbook. http://www.islinc.com/wp-
content/uploads/2016/03/NASA_SP-2011-3423.pdf)

Copyright National Academy of Sciences. All rights reserved.


Airport Incident Reporting Practices

Terms and Definitions   77  

Precursors: Events, conditions, circumstances or factors that precede a desired or undesired outcome, and to which it is linked through
a causal chain. Also known as antecedents. (Overview of leading indicators for occupational health and safety in mining, International
Council on Mining & Metals, November 2012. https://www.icmm.com/website/publications/pdfs/health-and-safety/4800.pdf).
Resilience: The ability of individuals, teams and organizations to continually recognize, adapt to and absorb variations, disturbances,
disruptions and surprises in order to maintain safe functioning. [Hollnagel E, Woods DD and Leveson N (eds.) Resilience
engineering. Concepts and precepts. Aldershot, Hants: Ashgate Publishing; 2006].
Resilience: The intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it
can sustain required operations under both expected and unexpected conditions. (Hollnagel et al., 2011.)
Resilience: The capacity of an organization to accommodate failures and disturbances without producing serious accidents.
(R. Rosness, G.Guttormsen, T. Steiro, R.K. Tinmannsvik and I.A. Herrera, Organisational accidents and resilient organisations:
Five perspectives, SINTEF Industrial Management, Trondheim, 2004.)
Resilience engineering: Addresses socio-technical systems' ability and capability to adjust and to continue operations in the presence of
continuous disturbances. (Herrera, I.A. 2012. Proactive safety performance indicators. Doctoral thesis Norwegian University of
Science and Technology 2012:151. Trondheim, Norway. https://brage.bibsys.no/xmlui/handle/11250/240805).
Risk: The chance of loss or injury measured in terms of severity and probability. (FAA AC 150/5200-37, 2007.
https://www.faa.gov/documentLibrary/media/.../150-5200-37/150_5200_37.pdf)
Risk: The effect of uncertainty on objectives with the potential for either a negative outcome or a positive outcome or opportunity.
(GAO-17-63, Enterprise Risk Management: Selected Agencies’ Experiences Illustrate Good Practices in Managing Risk.
https://www.gao.gov/products/GAO-17-63).
Risk: The effect of uncertainty on objectives. It is typically addressed within functional, programmatic, or organizational silos.
(OMB Circular No. A-123: Management’s Responsibility for Enterprise Risk Management and Internal Control Association for
Federal Enterprise Risk Managers November 7, 2016. https://www.whitehouse.gov/sites/whitehouse.gov/files/omb/.../2016/
m-16-17.pdf).
Risk: Risks are uncertain future events that may influence an organization’s ability to achieve its objectives. The term “risk” can be
used in three distinct applications:
• Risk as exposure: The most common definition of the term. Most people refer to potential negative events such as financial loss,
fraud, lawsuits, or threats to meeting objectives as “risks.” In this context, risk management means reducing the probability of a
negative event without incurring excessive costs.
• Risk as uncertainty: The distribution of all possible outcomes, both positive and negative. In this context, risk management seeks
to reduce the variance between anticipated outcomes and actual results.
• Risk as opportunity: This is implicit in the concept that a relationship exists between risk and return. The greater the risk, the
greater the potential return, and, necessarily, the greater the potential for loss. In this context, managing risk means using
techniques to maximize the upside of uncertainty within the constraints of a current operating environment. (ACRP Report 74:
Application of Enterprise Risk Management at Airports, 2012. URL: http://www.trb.org/Publications/Blurbs/167515.aspx).
Risk: The potential for an unwanted outcome resulting from an incident, event, or occurrence, as determined by its likelihood and
the associated consequences. Extended Definition: Potential for an adverse outcome assessed as a function of threats, vulnerabilities
and consequences associated with an incident, event or occurrence. (U.S. Department of Homeland Security.
https://www.dhs.gov/xlibrary/assets/dhs_risk_lexicon.pdf).
Risk management: A continuing process to identify, analyze, evaluate, and treat loss exposures and monitor risk control and
financial resources to mitigate the adverse effects of loss. (URL: http://www.marquette.edu/riskunit/riskmanagement/whatis.shtml).
Risk Management: The identification, analysis, assessment, control, and avoidance, minimization, or elimination of unacceptable
risks. An organization may use risk assumption, risk avoidance, risk retention, risk transfer, or any other strategy (or combination of
strategies) in proper management of future events. (URL: www.businessdictionary.com/definition/risk-management.html).
Safety: Freedom from those conditions that can cause death, injury, occupational illness, damage to or loss of equipment or property,
or damage to the environment. In a risk-informed context, safety is an overall mission and program condition that provides sufficient
assurance that accidents will not result from the mission execution or program implementation, or, if they occur, their consequences
will be mitigated. This assurance is established by means of the satisfaction of a combination of deterministic criteria and risk
criteria. The term “safety” broadly includes human safety (public and workforce), environmental safety, and asset safety. (NASA.
https://nodis3.gsfc.nasa.gov/npg_img/N..._/N_PR_8715_0007__AppendixA.pdf).

Safety: The state in which the risk of harm to persons or property damage is acceptable. (FAA Order 8040.4B.
https://www.faa.gov/documentLibrary/media/Order/FAA_Order_8040.4B.pdf).
Safety: The system property or quality that is necessary and sufficient to ensure that the number of events that could be harmful to
workers, the public, or the environment is acceptably low. [Besnard, D. and Hollnagel. E., I want to believe: some myths about the
management of industrial safety. Cognition, Technology and Work, Springer Verlag, 2014, 16 (1), pp.13 –23.
https://www.researchgate.net/publication/257480355_I_want_to_believe_Some_myths_about_the_management_of_industrial_safety].
Safety: A learning and adjustment process whereby the process safety indicator metrics provide feedback for controlling actions that
ensure the technical system remains within the safe envelope of the design. (Bellamy, L.J., Sol, V.M., A literature review on safety
performance indicators supporting the control of major hazards, RIVM Report 620089001/2012, National Institute for Public Health
and the Environment Dutch Ministry of Health, Welfare, and Sport, Netherlands. URL:
www.rivm.nl/bibliotheek/rapporten/620089001.pdf).
Safety: The state in which the possibility of damage is reduced and maintained below an acceptable level through a continuous
process of hazard identification and safety risk management. [ICAO. “Doc 9859 - Safety Management Manual (SMM).”
International Civil Aviation Organization. 3rd ed., pp. 1–284. Montreal, Canada. 2013.]
Safety: Freedom from unacceptable risk, where risk is a combination of the probability of occurrence of harm and the severity of the
harm. (ISO, 1999. Safety aspects – guidelines for their inclusion in standards, ISO/IEC guide 51:2014, International Organisation for
Standardisation, Geneva, Switzerland. https://www.iso.org/standard/53940.html).
Safety assessment: A multidiscipline review and documentation, often conducted by a panel of experts, of a preliminary safety
analysis of a system or proposed system change. [FAA Order 5200.11 CHG 3, 2014. FAA Airports (ARP) Safety Management
System, Appendix A. https://www.faa.gov/documentLibrary/media/Order/order5200_11Chg3.pdf].
Safety assurance: Processes within the SMS that function systematically to ensure the performance and effectiveness of safety risk
controls and that the organization meets or exceeds its safety objectives through the collection, analysis, and assessment of
information. (FAA Order 8040.4B. https://www.faa.gov/documentLibrary/media/Order/FAA_Order_8040.4B.pdf).
Safety barrier: An obstacle, an obstruction, or a hindrance that may either (1) prevent an event from taking place, or (2) thwart or
lessen the impact of the consequences if it happens nonetheless. (Hollnagel E., Barriers and accident prevention. Ashgate, Aldershot,
England, 2004. p. 68)
Safety barrier: An administrative or technical constraint at operator level which will prevent an inappropriate human action, or
absorb the effect of such an action, thus making the system “error tolerant” or forgiving. We use the terms “weak” and “strong” for
safety barriers, where we claim that administrative barriers generally are weaker than technical barriers. (International Journal for
Quality in Health Care, Volume 17, Number 1, pp. 1–9, 2005.)
Safety indicators: The precursors for hazards and risks based on routine monitoring of operational processes according to the latest
trends in building the SMS. [ICAO. “Doc 9859 - Safety Management Manual (SMM).” International Civil Aviation Organization.
3rd ed., pp. 1–284. Montreal, Canada. 2013.]
Safety indicators: An observable characteristic of an operational unit, presumed to bear a positive correlation with the safety of the
system. (Herrera, I.A. 2012. Proactive safety performance indicators. Doctoral thesis Norwegian University of Science and
Technology 2012:151. Trondheim, Norway. https://brage.bibsys.no/xmlui/handle/11250/240805).
Safety indicators: The measurable process variables that can be used to describe the larger phenomenon or part of reality. (Plos, V.,
Methodology for Risk-based Indicators Implementation, 2016. https://ojs.cvut.cz/ojs/index.php/mad/article/download/3586/3513).
Safety performance: A state or a service provider’s safety achievement as defined by its safety performance targets and safety
performance indicators. (ICAO Annex 19, 2013. https://www.casa.gov.au/file/157236/download?token=uZzL-kPo ).
Safety performance indicator (SPI): Any measurable parameter used to point out how well any activity related to safety is
performing over time, and to assess the overall SMS health indirectly. (ACRP Report 1, Vol. 2. 2007.
http://www.trb.org/Publications/Blurbs/162491.aspx).
Safety performance indicator (SPI): A data-based safety parameter used for monitoring and assessing performance. (ICAO Annex
19, 2013. https://www.casa.gov.au/file/157236/download?token=uZzL-kPo).
Safety performance measurement: The process of measuring and monitoring safety-related outcomes associated with a given
operational system or organization. (ICAO: https://www.icao.int/APAC/Meetings/.../07%20-%20SIN_SPM%20Presentation.pdf).
Safety performance measures: Indicators that focus on the differences between actual safety performance and what has been
defined as acceptable, i.e., measuring the gap. (Janicak, C. 2003. Safety Metrics: Tools and Techniques for Measuring Safety
Performance. Lanham: Government Institutes. In Overview of leading indicators for occupational health and safety in mining,
International Council on Mining & Metals, November 2012).
Safety performance target: A measurable goal used to verify the predicted residual safety risk of a hazard’s effect. (FAA Order
8040.4B. https://www.faa.gov/documentLibrary/media/Order/FAA_Order_8040.4B.pdf).

Safety performance target: The planned or intended objective for safety performance indicator(s) over a given period. [Doc 9859
- Safety Management Manual (SMM). International Civil Aviation Organization. 3rd ed., pp. 1–284. Montreal, Canada. 2013].
Safety risk assessment (SRA): Assessment of a system or component, often by a panel of system subject matter experts (SMEs)
and stakeholders, to compare an achieved risk level with the tolerable risk level. (ACRP Synthesis 71: Airport Safety Risk
Management Panel Activities and Outcomes. http://www.trb.org/Publications/Blurbs/174359.aspx).
Safety reporting: The filing of reports and collection of information on actual or potential safety deficiencies.
(http://www.skybrary.aero/index.php/Safety_Occurrence_Reporting).
System: An integrated set of constituent elements that are combined in an operational or support environment to accomplish a
defined objective. These elements include people, hardware, software, firmware, information, procedures, facilities, services, and
other support facets. (FAA Order 8040.4B. https://www.faa.gov/documentLibrary/media/Order/FAA_Order_8040.4B.pdf).
System safety: Mechanisms aiming to control the risks that may affect the safety of the stakeholders while ensuring compliance
with relevant legislation. (J. Santos-Reyes and A.N. Beard. “Assessing safety management systems.” Journal of Loss Prevention in
the Process Industries. Vol. 15, Issue 2, pp. 77–95. 2002.)
System safety: The state or objective of striving to sustainably ensure accident prevention through actions on multiple safety levers,
be they technical, organizational, or regulatory. (Saleh, J.H., et al., System safety principles: A multidisciplinary engineering
perspective, Journal of Loss Prevention in the Process Industries 29 (2014) 283–294).
Triggers: The requirements, precursors, or organizational plans that lead to initiation of the SRA process. (ACRP Synthesis 71:
Airport Safety Risk Management Panel Activities and Outcomes. http://www.trb.org/Publications/Blurbs/174359.aspx).

APPENDIX C

Typical Accident and Incident Statistical Rates

(Source: Neubauer, K., D. Fleet, and M. Ayres, Jr. ACRP Report 131: A Guidebook for Safety Risk Management for Airports, Transportation Research Board of the
National Academies, Washington, D.C., 2015. http://dx.doi.org/10.17226/22138. Used with permission.)



APPENDIX D

Example of Safety Indicators and Metrics for 14 CFR Part 139

PLAN FOR GAP ANALYSES AND IDENTIFYING SAFETY INDICATORS AND METRICS

A GAP analysis is a means by which an airport can assess its level of performance in implementing a Safety Management System
(SMS). A GAP evaluation attempts to determine the variation that exists between expected performance and actual performance.
GAP analyses primarily address the SMS pillar of “Safety Risk Management.” In the model of human error developed by James
Reason, GAPs are the holes that exist in the layering of safety actions that are intended to prevent accidents or injury from occurring.
Holes or GAPs allow for an alignment of circumstances, factors, or events that result in accidents or injuries or otherwise
compromise safety actions.
To perform a GAP analysis, one must first know what kind of incidents, accidents, or errors can exist or be made in an organization
and what risks and hazards exist in an airport’s everyday operation. Knowing the kind of events, circumstances, factors, accidents,
and injuries allows for the implementation of safety measures to close the holes (or GAPs) in the system. However, most airport
organizations do not fully understand what holes or GAPs exist because they lack safety performance criteria other than “no
accidents or injuries,” and they are not fully aware of the risks and hazards associated with their operations.
How safe an airport organization is depends in part on measuring the organization’s safety performance and on identifying and
managing the risks and hazards involved. The airport organization needs to identify what level of risk and safety performance it is
going to establish for its airport. From assessment or determination of expected organizational safety performance, a GAP analysis
will help identify where additional safety measures may need to be implemented or where existing safety measures may be reduced
without affecting overall safety. To that extent, an airport organization must ask: “How do we know we are conducting a safe
operation?”
To help answer that question, one can ask: “What are some INDICATORS of whether we operate in a safe manner?” By choosing
proper safety indicators, one can better determine what standards of performance the organization will attempt to attain and how to
assess that performance. Safety indicators can be either quantitative (i.e., number, amount) or qualitative (i.e., follows procedures,
demonstrates competency or knowledge) in nature. The more they are described in measurable terms, the better to determine if safety
goals are being met.
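For quantitative indicators, the variation between expected and actual performance described above can be computed directly. A minimal sketch, in which the indicator names and target values are illustrative assumptions rather than figures from Part 139 or this synthesis:

```python
# Sketch of a quantitative GAP computation: compare actual indicator
# values against targets and report the variance. Positive differences
# are gaps; zero or negative differences meet the target.

targets = {"runway_closures_per_year": 4, "inspection_deviations": 10}
actuals = {"runway_closures_per_year": 7, "inspection_deviations": 6}

def gap_report(targets: dict, actuals: dict) -> dict:
    """Return {indicator: actual - target} for each targeted indicator."""
    return {name: actuals[name] - target for name, target in targets.items()}

for name, gap in gap_report(targets, actuals).items():
    status = "GAP" if gap > 0 else "meets target"
    print(f"{name}: {status} ({gap:+d})")
```

Qualitative indicators (e.g., "follows procedures") still require a defined scale, such as an auditor rating, before a gap can be measured this way.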
The following are suggested safety indicators that may apply to each section of 14 CFR Part 139, Certification of Airports.
Please review the indicators and metrics and determine if they are a good indicator or a good measure of safety for your organization.
Cross out those that you do not think are good indicators. Add indicators or refine those listed as you go through the list.
Continuously ask yourself: How can I demonstrate or prove to someone else that our ___ (insert title of each Part 139 section
heading) ___ is safe, the risk is acceptable, or the operation is in compliance with the regulations? Then choose the appropriate
indicators.
SAFETY INDICATORS and METRICS:
(1) Pavement:
Number of closures per year
(by time? by % available? break down into maint/snow/accident)
Number of inspection deviations
(# of times edges exceed 1-2-3 inches?)
(# of holes discovered within centerline?)
(# of aircraft deviations or pilot/air traffic control tower (ATCT) reports)
(# of times sweep runway for loose aggregate)
Number of runway cleanings
Length of time equipment is used on the pavement

Amount of money spent
Number of work hours spent
Square footage of ponded water
Linear footage of cracks greater than 1/2 inch wide
Amount of crack sealer used
Percent of grooves filled with dirt/rubber
Percentage of vegetation growth
Length of time between discrepancy report and corrected
Friction and braking action measurements
Runway Surface Condition Sensing instrument operation
Pavement maintenance management procedures followed
Amount of hot/cold patch or other repair material used
Amount of FOD picked up
Qualifications of inspection personnel
Degree to which inspection personnel feel trained and qualified
Extent to which inspection personnel do not recognize a problem
Qualifications of maintenance personnel
Degree to which maintenance personnel feel trained and qualified
Extent to which maintenance is repeated on a problem area
Extent to which maintenance personnel do not recognize a problem
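Many of the indicators listed above, such as "length of time between discrepancy report and corrected," reduce to simple arithmetic over inspection or work-order records. A hypothetical sketch, in which the record structure, field names, and dates are all assumptions for illustration:

```python
# Sketch: computing mean days between a discrepancy report and its
# correction from a hypothetical work-order log. Field names and
# dates are illustrative assumptions.
from datetime import date

work_orders = [
    {"reported": date(2019, 3, 1), "corrected": date(2019, 3, 4)},
    {"reported": date(2019, 3, 2), "corrected": date(2019, 3, 3)},
    {"reported": date(2019, 3, 5), "corrected": date(2019, 3, 10)},
]

def mean_days_to_correct(orders: list) -> float:
    """Average elapsed days from report to correction."""
    days = [(o["corrected"] - o["reported"]).days for o in orders]
    return sum(days) / len(days)

print(mean_days_to_correct(work_orders))  # (3 + 1 + 5) / 3 = 3.0
```

Tracked over time, a rising mean would flag a widening gap between reporting and corrective action.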
(2) Safety Areas
Number of aircraft that have gone off into the safety areas
(# of aircraft deviations)
(# or amount of damage incurred? break down into RESA/Rwy/Txy)
Number of inspection deviations or write ups
(# of times grading, filling or rut repair?)
(% of time water makes the safety area unusable?)
(# of times ARFF or Snow vehicles use safety area?)
(# of broken signs, lights or couplings)
(# of objects in the safety areas?)
Number of NOTAMs issued
Percent of time not available for use (construction or other)
Numbers of person hours spent maintaining
Number or amount of time spent inspecting
Amount of FOD collected
Length of time between discrepancy report and corrected
Number or amount of wildlife attracted to safety area
Degree to which frangibility exists
Qualifications of inspection personnel
Degree to which inspection personnel feel trained and qualified
Extent to which inspection personnel do not recognize a problem
Qualifications of maintenance personnel
Degree to which maintenance personnel feel trained and qualified
Extent to which maintenance is repeated on a problem area
Extent to which maintenance personnel do not recognize a problem
(3) Marking
Number or amount of time spent marking/painting
Degree to which markings meet dimensional criteria
Number of NOTAMs issued
Length of time between discrepancy report and corrected
Level of reflectivity in glass beads (readings/coverage)
Thickness of paint
Percentage or amount of peeling/lifting of paint
Amount of low visibility conditions
Number of gallons of paint used
Number of pounds of glass bead used
Degree of eradication of old markings
Length of time between painting
Amount/percentage of marking obscuration (water ponding, rubber buildup, etc.)
Number of runway incursions
Number of critical area penetrations
Number of pilot/user complaints and/or inquiries

Number of ramp/apron incidents related to markings
Amount of hazardous areas that are marked
Qualifications of inspection personnel
Degree to which inspection personnel feel trained and qualified
Qualifications of maintenance personnel
Degree to which maintenance personnel feel trained and qualified
Extent to which maintenance is repeated on a problem area
Extent to which maintenance personnel do not recognize a problem
(4) Signage
Number or amount of time spent correcting signs
Degree to which signs meet standards
Number of NOTAMs issued
Length of time between discrepancy report and corrected
Degree of reflectivity (readings)
Number of delaminated, faded or crooked signs
Amount/percentage of signage obscuration (bird guano, fading, weeds)
Number of runway incursions
Number of critical area penetrations
Number of pilot/user complaints and/or inquiries
Percentage of time lighted
Number of bulbs burned out
Amount of wildlife nesting/infestation
Qualifications of inspection personnel
Degree to which inspection personnel feel trained and qualified
Qualifications of maintenance personnel
Degree to which maintenance personnel feel trained and qualified
Extent to which maintenance is repeated on a problem area
Extent to which maintenance personnel do not recognize a problem
(5) Lighting
Number or amount of time spent replacing bulbs
Number or amount of time spent aligning lights (alignment)
Degree to which lights meet luminosity criteria (readings)
Number of NOTAMs issued
Length of time between discrepancy report and corrected
Amount of low visibility conditions
Number of person-hours spent cleaning/repairing
Amount/percentage of light obscuration (dirt, grass)
Number of runway incursions
Number of critical area penetrations
Number of pilot/user complaints and/or inquiries
Number of ramp/apron incidents related to lights
Qualifications of inspection personnel
Degree to which inspection personnel feel trained and qualified
Qualifications of maintenance personnel
Degree to which maintenance personnel feel trained and qualified
Extent to which maintenance is repeated on a problem area
Extent to which maintenance personnel do not recognize a problem
(6) Snow and Ice Control
Length of time spent removing snow (promptness)
Length of time to clear Priority 1 areas (clearance time)
Length of time to respond to snow call (trigger response)
Degree (percentage) to which pavement surfaces are cleared and dry
Runway
Taxiway
Apron
Number and type of NOTAMs issued
Length of time between discrepancy report and corrected or response
Number and condition of braking action
Number of aircraft complaints on braking action
Number of pilot/user complaints and/or inquiries on snow conditions
Amount of low visibility conditions
Length of time not able to conduct winter operations due to the weather

Degree to which ice adheres to pavement
Number of gallons/pounds of deicing fluid/pellets
Square footage area covered
Number of passes or applications
Amount of time snow removal equipment inoperative or not available
Number of equipment breakages or breakdown
Amount of blades or broom material used
Degree to which signs/lights markings are visible
Amount/percentage of obscuration
Number of lights broken during removal operations
Number of signs broken during removal operations
Percent of signs/lights obscured by ice/snow or drifts
Amount of time NAVAIDs available/not available
Number of ramp/apron accidents or incidents related to winter operations
Number of aircraft/vehicle accidents and/or incidents
Amount of time taken to notify air carriers/tenants of conditions
Degree to which employees are fatigued
Amount of rest time or continuous duty
Amount of delay incurred by aircraft (terminal delay, taxi delay, gate delay)
Compliance with the snow plan
Sand and/or deice material available
Validation of material to specifications
Availability of Material Safety Data Sheet (MSDS)
Equipment in ready condition
Age of equipment
Validation of friction measurement devices
Adequate staff
Pre- and post-season review of plan
Snow plan procedures followed
Amount of storage space available
Amount of snow disposal area available
Height of snow banks
Distance pilots/vehicle operators can see intersections
Presence of snow drifts/snow fences
Contractor availability and requirements
Number of personnel available (not available) for snow removal
Qualifications of personnel to operate certain type of equipment
Qualifications of inspection personnel
Qualifications of maintenance personnel
How well did Snow Control Center function?
Number of ATCT miscommunications/radio breakdown
Degree of satisfaction from tenants/FAA
Number of runway incursions
Number of critical area penetrations
Availability and accuracy of weather forecasting
Weather reporting equipment available
Compliance with NPDES [National Pollutant Discharge Elimination System] permit or other environmental requirements
Runway/taxiway
Maintenance area
Ramp/apron area
Runoff/recycled amount
Percentage of different snow and ice conditions (dry/heavy/ice/sleet/etc.)
Amount of FOD picked up during/after winter operations
Amount of frost heave, pavement scaling or cracking, flaking of markings
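Time-based snow and ice indicators such as "length of time to clear Priority 1 areas" are typically assessed against a target drawn from the airport's snow and ice control plan. A minimal sketch; the 30-minute target and the per-event times are illustrative assumptions:

```python
# Sketch: percentage of snow events in which Priority 1 areas were
# cleared within a target time. Target and event data are illustrative.

TARGET_MINUTES = 30
clearance_minutes = [22, 28, 41, 19, 35]  # one entry per snow event

over_target = [m for m in clearance_minutes if m > TARGET_MINUTES]
pct_within = 100 * (len(clearance_minutes) - len(over_target)) / len(clearance_minutes)
print(f"{pct_within:.0f}% of events cleared within {TARGET_MINUTES} min")
```

The same pattern applies to trigger-response times and NOTAM issuance delays: define the target, log each event, and report the share of events meeting it.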

(7) Aircraft Rescue and Firefighting: Equipment and Agents
Equipment in ready condition
Age of equipment
Amount of service time the equipment is available
Amount of maintenance man-hours performed on vehicles/equipment
Number of equipment breakdowns
Number of repair-hours spent due to breakdowns
Length of time equipment out of service for preventive and/or breakdown

Capability of communications equipment
Amount of repair on communications equipment
Ability to transmit/monitor various frequencies
Condition of personal protective equipment
Condition of ARFF accessories (hoses, nozzles, breathing apparatus)
Vehicle rotating beacon and lights operational
Amount of fuel in the tanks
Tires properly inflated and in good condition
Amount of wiper washer fluid
Proper levels of oil
Amount of agent available
Validation of agents
Validation of discharge rates
Usage rates
Amount stockpiled
Amount of time to refill
(8) Aircraft Rescue and Firefighting: Operational
Number and type of emergencies responded to
Number of alerts responded to (successfully)
Average time of response to alert
Number of practice alerts conducted
Number of times index reduced
Procedures for recall of personnel and equipment
Adequate number of ARFF personnel
Qualifications of ARFF personnel
Degree to which ARFF personnel feel trained and qualified
Records available
Physical condition of firefighter/rescuer/technician
Knowledge and ability of airfield, equipment, firefighting operations, etc.
Ability to or personnel to find way on airfield at day/night
Knowledge of type of aircraft using airport
Knowledge of personnel safety requirements and operations
Ability to use equipment (fire hoses, nozzles, turrets, other appliances)
Ability to use breathing apparatus
Ability to apply agent
Ability to use structural equipment and appliances
Ability to perform emergency plan duties
Ability to perform emergency evacuation
Understanding of HAZMAT and cargo hazards
Experience with live fire
Ability to work as a team
Training and experience in basic emergency medical services
Monitoring of frequencies when ATCT closed
Mean time between failure of alarm system
Mean time between failure of backup alarm system
Number of times ARFF responds to alarms
Number of hours emergency access road or gates not available
Number of times mutual aid partners notified of changes to airport procedures

(9) Handling and Storage of Hazardous Material: Cargo
Number of times North American Emergency Response Guidebook accessed
Ability of personnel to look up hazardous materials/identify dangerous goods
Certificates on file of those authorized to handle HAZMAT
Observation of those certified to handle HAZMAT
Procedures in place for special handling
Number of responses to spills or releases
Amount of spills and/or releases
Number of placards/identification of storage areas of HAZMAT
Reports by fire marshal that meet standards
Number of observations made of HAZMAT handling
Number of accidents/incidents resulting from HAZMAT
Degree to which access to HAZMAT limited
Number of discrepancies found during inspection or surveillance
Length of time to correct discrepancy
Documentation of notification of discrepancy or noncompliance

Example of Safety Indicators and Metrics for 14 CFR Part 139

(10) Handling and Storage of Hazardous Material: Fuel
Certificates on file of those authorized to receive/dispense fuel/oils/glycol
Number of observations of those certified to handle fuel or hazardous substances
Currency of training records
Degree to which procedures followed for special handling
Number of responses to spills or releases
Amount of spills and/or releases
Number of times contamination found in fuel
Number of times monitoring alarm in fuel storage goes off
Number of placards/identification of storage areas of fuel vehicles
Reports by fire marshal that meet standards
Degree of compliance with NFPA [National Fire Protection Association] or local fire standards
Number of observations of fueling operations
Number of accidents/incidents resulting from fueling exposure
Degree to which access to fueling areas limited
Number of discrepancies found during inspection or surveillance
Length of time to correct discrepancy
Documentation of notification of discrepancy or noncompliance
Number of notifications made to FAA due to non-compliance
(11) Traffic and Wind Direction Indicators
Number of times lights found inoperative
Amount of maintenance performed
Number of times indicator checked for freedom of travel
Number of complaints received from pilots
Amount of time indicators blocked by snow
Number of times segmented circle viewed from the air
Frequency that segmented circle painted or condition inspected
(12) Airport Emergency Plan
Safety indicators have not yet been determined for this section
It is recommended that airport personnel brainstorm indicators for this section
(13) Self-Inspection
Number of discrepancies found
Length of time for a discrepancy to be corrected
Number of special inspections
Number of incidents/accidents resulting from missed discrepancy
Number of times inspection vehicle not available
Length of time to report discrepancy to tenants or to issue NOTAM
Frequency that notification equipment is not available or inoperative
Quality and thoroughness of inspection
Amount of training an individual has acquired
Length of time to complete a full inspection
Currency of training records
Number of inspections performed by each qualified individual
Quality of training program
Number of different types of NOTAMs (local, distant, FDC, military)
Extent to which NOTAM procedures are followed
Quality of interaction with ATCT
Quality of radio communication
Missed or misunderstood communication
Repeated instructions

(14) Pedestrian and Ground Vehicles
Extent to which ground vehicle operations procedures are followed
Number of runway incursions
Quality of radio communications
Number of times an escort is necessary
Quality of training program
Number of times gates left open
Number of times gates inoperative
Incidents of individuals gaining access to the movement/non-movement areas
Number of vehicle-aircraft incidents or deviations
Quality of signage, marking, and lighting
Currency of training records
Knowledge of those who have been trained or authorized


(15) Obstructions
Number of obstructions that have been reviewed or had aeronautical studies performed
Number of obstruction lights inoperative
Length of time to correct obstruction light outage
Number of inquiries made to airport or local permitting office for information
(16) Protection of NAVAIDs
Number of inquiries made to airport or local permitting office for information
Length of time a NAVAID outage has occurred
Length of time PAPI/VASI [Precision Approach Path Indicator/Visual Approach Slope Indicator] or beacon inoperative
Integrity of fencing or access controls
Number of vandalism, theft, or other reports
Number of visits of FAA personnel for NAVAID maintenance/correction
Number of flight checks
(17) Public Protection
Number of persons injured
Number of complaints
Condition of signs or warnings
Provision and condition of fences, gates, or objects preventing inadvertent entry
(18) Wildlife Hazard
Number of strikes reported
Number of sightings
Extent to which the plan is followed
Safety indicators have not been fully determined for this section
It is recommended that airport personnel brainstorm additional indicators for this section
(19) Airport Condition Reporting
Number of different types of NOTAMs (local, distant, FDC, military)
Extent to which NOTAM procedures are followed
Degree to which NOTAM is understood by others
Extent to which NOTAMs are corrected by FSS
Availability of NOTAMs on ATIS or AWOS
Currency of NOTAM records
Length of time NOTAMs are in effect
Number of complaints
(20) Construction and Unserviceable Areas
Number of safety or coordination meetings held
Number and quality of training sessions held
Currency of training records
Knowledge of those who have been trained or authorized
Quality of radio communications
Complaints from tenants/users/ATCT/aircraft
Extent to which the safety plan is followed covering:
ARFF
Amount of time clear routes exist from firefighting and rescue stations
Notifications when working on water lines or utilities affecting station
Security
Identification of construction personnel and equipment
Security control on temporary gates and relocated fencing
Additional security measures required if 49 CFR Part 1542 (TSA) is involved
Marking, Lighting, and Signage
Quality of signage/marking and lighting
Correct threshold treatment & appropriate temporary lighting & marking
Installation and maintenance of temporary lighting & marking
Marking/lighting of construction equipment
Marking/lighting of construction areas
Marking and lighting of closed airfield pavement areas
Type of barricades, height, and/or location
Utilities and NAVAIDs
Shutdown and/or protection of airport electronic/visual navigational aids
Location of power & control lines for electronic/visual NAVAIDs
Location of utilities (may reference another sheet)
Provision for temporary utilities and/or immediate repairs


Aircraft Operations
Suspension or restriction of aircraft activity on airport operations areas
Airport Operations
Additional equipment, vehicles, or personnel needed to maintain safety
Traffic directors/wing walkers, as needed to ensure clearance
Incidents of individuals gaining access to movement/non-movement areas
Number of vehicle-aircraft incidents or deviations
Extent to which ground vehicle operations procedures are followed
Number of runway incursions
Number of times an escort is necessary
Number of times gates left open
Responsibilities
Initiation, currency, and cancellation of NOTAMs
Procedures for ensuring chain of notification and authority to change safety-oriented aspects of the construction plan
Designation of responsible reps of all involved parties and their availability
Construction Activities
Location of construction personnel parking and transportation to/from site
Location of construction offices
Location of construction plants
Designation of waste areas and disposal
Debris cleanup responsibilities and schedule
Location of haul road(s)
Review phasing for minimizing disruption of aeronautical activity
Coordination of construction activities during winter operations snow plan
Phasing of work
Storage of construction equipment & materials when not in use
Smoke, steam, and vapor controls
Blasting regulation and control
Dust control
(21) Noncomplying Conditions
Number of notices to the FAA
Number of deviations
Number of NOTAMs
Number of closures of runway/taxiway/apron
After metrics and indicators have been identified for the selected areas of Part 139, performance standards are established for each area. Against those standards, the measures currently used to track performance are then identified. The tracking process considers the procedures, policies, documents, and actions that the airport currently uses or needs to implement as part of its SMS. Identifying where tracking capability exists (or is lacking) constitutes the gap analysis to be evaluated. In developing the SMS, identifying tracking measures is just one component of risk and hazard evaluation and of the implementation of safety defenses and actions.
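The gap-analysis step described above can be sketched in code: for each indicator, compare the performance standard against whether a tracking measure currently exists. This is a minimal illustration only; the indicator names, standards, and data structure below are hypothetical examples, not drawn from the synthesis report.

```python
# Hypothetical sketch of the gap analysis: each Part 139 indicator is
# paired with its performance standard and a flag recording whether a
# tracking measure (procedure, policy, document, or action) currently
# exists at the airport. Names and values are illustrative only.
indicators = {
    "Length of time to correct discrepancy": {
        "standard": "corrected within 24 hours",
        "tracked": True,
    },
    "Number of runway incursions": {
        "standard": "zero per quarter",
        "tracked": True,
    },
    "Currency of training records": {
        "standard": "reviewed annually",
        "tracked": False,  # no current tracking procedure -> a gap
    },
}

def gap_analysis(indicators):
    """Return the indicators that lack a current tracking measure."""
    return [name for name, info in indicators.items() if not info["tracked"]]

gaps = gap_analysis(indicators)
print(gaps)  # -> ['Currency of training records']
```

In this sketch, the returned list is the gap: the set of indicators for which the SMS still needs a tracking measure before risk and hazard evaluation can rely on them.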

APPENDIX E

Sample Poster and Web Display for Confidential Incident Reporting

(Courtesy of ATL airport)

APPENDIX F

Sample Policy Requiring Employee Incident Reporting

(Courtesy of DFW airport)


APPENDIX G

Sample Safety Policy Requiring Incident Reporting

(Courtesy of Houston Airport System)


APPENDIX H

Sample Employee Incident Report Form

(Courtesy of Houston Airport System)


APPENDIX I

Example of Computer Incident Reporting Data Entry Screen

(Courtesy of Houston Airport System)


APPENDIX J

Sample Safety Metrics Dashboard

(Courtesy of Columbus Regional Airport Authority)



Abbreviations and acronyms used without definitions in TRB publications:


A4A Airlines for America
AAAE American Association of Airport Executives
AASHO American Association of State Highway Officials
AASHTO American Association of State Highway and Transportation Officials
ACI–NA Airports Council International–North America
ACRP Airport Cooperative Research Program
ADA Americans with Disabilities Act
APTA American Public Transportation Association
ASCE American Society of Civil Engineers
ASME American Society of Mechanical Engineers
ASTM American Society for Testing and Materials
ATA American Trucking Associations
CTAA Community Transportation Association of America
CTBSSP Commercial Truck and Bus Safety Synthesis Program
DHS Department of Homeland Security
DOE Department of Energy
EPA Environmental Protection Agency
FAA Federal Aviation Administration
FAST Fixing America’s Surface Transportation Act (2015)
FHWA Federal Highway Administration
FMCSA Federal Motor Carrier Safety Administration
FRA Federal Railroad Administration
FTA Federal Transit Administration
HMCRP Hazardous Materials Cooperative Research Program
IEEE Institute of Electrical and Electronics Engineers
ISTEA Intermodal Surface Transportation Efficiency Act of 1991
ITE Institute of Transportation Engineers
MAP-21 Moving Ahead for Progress in the 21st Century Act (2012)
NASA National Aeronautics and Space Administration
NASAO National Association of State Aviation Officials
NCFRP National Cooperative Freight Research Program
NCHRP National Cooperative Highway Research Program
NHTSA National Highway Traffic Safety Administration
NTSB National Transportation Safety Board
PHMSA Pipeline and Hazardous Materials Safety Administration
RITA Research and Innovative Technology Administration
SAE Society of Automotive Engineers
SAFETEA-LU Safe, Accountable, Flexible, Efficient Transportation Equity Act:
A Legacy for Users (2005)
TCRP Transit Cooperative Research Program
TDC Transit Development Corporation
TEA-21 Transportation Equity Act for the 21st Century (1998)
TRB Transportation Research Board
TSA Transportation Security Administration
U.S. DOT United States Department of Transportation

TRANSPORTATION RESEARCH BOARD
500 Fifth Street, NW
Washington, DC 20001


ISBN 978-0-309-48032-1