609" Steven J. Landry and Amy R. Pritchett
HYPERLINK \l "_Toc13384610" Predicting Pilot Error: Assessing the Performance of SHERPA PAGEREF _Toc13384610 \h 48
HYPERLINK \l "_Toc13384611" Neville A. Stanton, Mark S. Young, Paul Salmon, Andrew Marshall, Thomas Waldman, Sidney Dekker
HYPERLINK \l "_Toc13384612" Coordination within work teams in high risk environment PAGEREF _Toc13384612 \h 53
HYPERLINK \l "_Toc13384613" Gudela Grote and Enikö Zala-Mezö
HYPERLINK \l "_Toc13384614" Assessing Negative and Positive Dimensions of Safety: A Case Study of a New Air Traffic Controller-Pilot Task Allocation PAGEREF _Toc13384614 \h 63
HYPERLINK \l "_Toc13384615" Laurence Rognin, Isabelle Grimaud, Eric Hoffman, Karim Zeghal
HYPERLINK \l "_Toc13384616" Head-Mounted Video Cued Recall: A Methodology for Detecting, Understanding, and Minimising Error in the Control of Complex Systems PAGEREF _Toc13384616 \h 72
HYPERLINK \l "_Toc13384617" Mary Omodei, Jim McLennan, Alexander Wearing
HYPERLINK \l "_Toc13384618" Tool Support for Scenario-based Functional Allocation PAGEREF _Toc13384618 \h 81
HYPERLINK \l "_Toc13384619" Alistair Sutcliffe, Jae-Eun Shin, Andreas Gregoriades
HYPERLINK \l "_Toc13384620" Time-Related Trade-Offs in Dynamic Function Scheduling PAGEREF _Toc13384620 \h 89
HYPERLINK \l "_Toc13384621" Michael Hildebrandt and Michael Harrison
HYPERLINK \l "_Toc13384622" An Examination of Risk Managers Perceptions of Medical Incidents PAGEREF _Toc13384622 \h 96
HYPERLINK \l "_Toc13384623" Michele Jeffcott and Chris Johnson
HYPERLINK \l "_Toc13384624" User Adaptation of Medical Devices PAGEREF _Toc13384624 \h 105
HYPERLINK \l "_Toc13384625" Rebecca Randell and Chris Johnson
HYPERLINK \l "_Toc13384626" Introducing Intelligent Systems into the Intensive Care Unit: a Human-Centred Approach PAGEREF _Toc13384626 \h 110
HYPERLINK \l "_Toc13384627" M. Melles, A. Freudenthal, C.A.H.M. Bouwman
HYPERLINK \l "_Toc13384628" Evaluation of the Surgical Process during Joint Replacements. PAGEREF _Toc13384628 \h 118
HYPERLINK \l "_Toc13384629" Joanne JP Minekus, Jenny Dankelman
HYPERLINK \l "_Toc13384630" Human Machine Issues in Automotive Safety: Preliminary Assessment of the Interface of an Anti-collision Support System PAGEREF _Toc13384630 \h 125
HYPERLINK \l "_Toc13384631" P.C. Cacciabue, E. Donato, S. Rossano
HYPERLINK \l "_Toc13384632" Designing Transgenerational Usability in an Intelligent Thermostat by following an Empirical Model of Domestic Appliance Usage PAGEREF _Toc13384632 \h 134
HYPERLINK \l "_Toc13384633" Adinda Freudenthal
HYPERLINK \l "_Toc13384634" An Introduction in the Ecology of Spatio-Temporal Affordances in Airspace PAGEREF _Toc13384634 \h 143
HYPERLINK \l "_Toc13384635" An L.M. Abeloos, Max Mulder, René (M.M.) van Paassen
HYPERLINK \l "_Toc13384636" Modelling Control Situations for the Design of Context Sensitive Human-Machine Systems PAGEREF _Toc13384636 \h 153
HYPERLINK \l "_Toc13384637" Johannes Petersen
HYPERLINK \l "_Toc13384638" A Formative Approach to Designing Teams for First-of-a-Kind, Complex Systems PAGEREF _Toc13384638 \h 162
HYPERLINK \l "_Toc13384639" Neelam Naikar, Brett Pearce, Dominic Drumm and Penelope M. Sanderson
HYPERLINK \l "_Toc13384640" Qualitative Analysis of Visualisation Requirements for Improved Campaign Assessment and Decision Making in Command and Control PAGEREF _Toc13384640 \h 169
HYPERLINK \l "_Toc13384641" Claire Macklin, Malcolm J. Cook, Carol S. Angus, Corrine S.G. Adams, Shan Cook and Robbie Cooper
HYPERLINK \l "_Toc13384642" Model-based Principles for Human-Centred Alarm Systems from Theory and Practice PAGEREF _Toc13384642 \h 178
HYPERLINK \l "_Toc13384643" Steven T. Shorrock, Richard Scaife and Alan Cousins
HYPERLINK \l "_Toc13384644" Toward a Decision Making Support of Barrier Removal PAGEREF _Toc13384644 \h 190
HYPERLINK \l "_Toc13384645" Zhicheng Zhang, Philippe Polet, Frédéric Vanderhaegen
HYPERLINK \l "_Toc13384646" The Control Of Unpredictable Systems PAGEREF _Toc13384646 \h 198
HYPERLINK \l "_Toc13384647" Björn Johansson, Erik Hollnagel & Åsa Granlund
HYPERLINK \l "_Toc13384648" Finding Order in the Machine PAGEREF _Toc13384648 \h 205
HYPERLINK \l "_Toc13384649" Mark Hartswood, Rob Procter, Roger Slack, Mark Rouncefield
HYPERLINK \l "_Toc13384650" Accomplishing Just-in-Time Production PAGEREF _Toc13384650 \h 209
HYPERLINK \l "_Toc13384651" Alexander Voß, Rob Procter, Roger Slack, Mark Hartswood, Robin Williams, Mark Rouncefield
HYPERLINK \l "_Toc13384652" Modelling Collaborative Work in UML PAGEREF _Toc13384652 \h 212
HYPERLINK \l "_Toc13384653" Rachid Hourizi, Peter Johnson, Anne Bruseberg, Iya Solodilova
HYPERLINK \l "_Toc13384654" Organisational Improvisation: A Field Study At a Swedish NPP during a Productive-Outage PAGEREF _Toc13384654 \h 215
HYPERLINK \l "_Toc13384655" Vincent Gauthereau & Erik Hollnagel
HYPERLINK \l "_Toc13384656" Centralised vs. Distributed Alarm Handling PAGEREF _Toc13384656 \h 219
HYPERLINK \l "_Toc13384657" Kenneth Gulbrandsøy and Magnus Reistad
HYPERLINK \l "_Toc13384658" Is Overcoming of Fixation Possible? PAGEREF _Toc13384658 \h 222
HYPERLINK \l "_Toc13384659" Machteld Van der Vlugt, Peter A. Wieringa,
HYPERLINK \l "_Toc13384660" Supporting Distributed Planning in a Dynamic Environment: HYPERLINK \l "_Toc13384661" An Observational Study in Operating Room Management PAGEREF _Toc13384661 \h 225
HYPERLINK \l "_Toc13384662" Jos de Visser, Peter A. Wieringa, Jacqueline Moss, Yan Xiao
HYPERLINK \l "_Toc13384663" Virtual Reality as Enabling Technology for Data Collection of Second-Generation Human Reliability Methods PAGEREF _Toc13384663 \h 228
HYPERLINK \l "_Toc13384664" S. Colombo
Learning and Failure in Human Organisations
.
...232
Darren DalcherWorkshop Timetable
Monday 15th June

09.30-10.00  C. Johnson. Welcome and Introduction.

10.00-11.00  Chair: A. Pritchett, Georgia Institute of Technology
Error Detection in Aviation
Crossing the Boundaries of Safe Operation. N. Naikar & A. Saunders, Defence Science and Technology Organisation, Australia.
Activity Tracking for Pilot Error Detection from Flight Data. Todd J. Callantine, San Jose State University/NASA Ames Research Center, USA.

11.00-11.30  Coffee

11.30-13.00  Chair: A. Sutcliffe, UMIST
Pilot Cognition
Development and Preliminary Validation of a Cognitive Model of Commercial Airline Pilot Threat Management Behaviour. S. Banbury, Cardiff Univ., H. Dudfield, QinetiQ, M. Lodge, British Airways, UK.
Pilot Control Behaviour in Paired Approaches. Steven Landry and Amy Pritchett, Georgia Institute of Technology, USA.
Predicting Pilot Error: Assessing the Performance of SHERPA. N. Stanton, M. S. Young, P. Salmon, D. Harris, J. Demagalski, A. Marshall, T. Waldman, S. Dekker.

13.00-14.30  Lunch

14.30-15.30  Chair: T. J. Callantine, San Jose State Univ./NASA Ames Research Center
Crew and Team-based Interaction in Aviation and Fire Fighting
Coordination within Work Teams in High-Risk Environment: Effects of Standardisation. G. Grote and E. Zala-Mezö, Swiss Federal Institute of Technology, Zurich (ETH).
Assessing Negative and Positive Dimensions of Safety: A Case Study of a New Air Traffic Controller-Pilot Task Allocation. L. Rognin, I. Grimaud, E. Hoffman, K. Zeghal, EUROCONTROL & CRNA, France.
Head-Mounted Video Cued Recall: A Methodology for Detecting, Understanding and Minimising Error in the Control of Complex Systems. M. Omodei, La Trobe Univ., J. McLennan, Swinburne Univ. of Technology, A. Wearing, Univ. of Melbourne, Australia.

15.30-16.00  Tea

16.00-17.30  Chair: E. Hollnagel, Univ. of Linköping, Sweden
Function Allocation and the Perception of Risk
Tool Support for Scenario-Based Function Allocation. A. Sutcliffe, J.-E. Shin, A. Gregoriades, UMIST, UK.
Time-Related Trade-Offs in Dynamic Function Scheduling. M. Hildebrandt, M. Harrison, University of York, UK.
An Examination of Risk Managers' Perceptions of Medical Incidents. M. Jeffcott, C. Johnson, University of Glasgow, UK.
Tuesday 16th June

09.00-09.30  C. Johnson. Poster Summaries.

09.30-11.00  Chair: S. Bogner, Inst. for Study of Medical Error, USA
Intensive Care and Surgery
User Adaptation of Medical Devices: The Reality and the Possibilities. Rebecca Randell and C. Johnson, University of Glasgow.
Intelligent Systems in the Intensive Care Unit: A Human-Centred Approach. M. Melles, A. Freudenthal, Delft Univ. of Technology, C.A.H.M. Bouwman, Groningen University Hospital, Netherlands.
Evaluation of the Surgical Process During Joint Replacements. J.P. Minekus, J. Dankelman, Delft University of Technology, Netherlands.

11.00-11.30  Coffee

11.30-13.00  Chair: P. Sanderson, Univ. of Queensland, Australia
Constraints and Context Sensitivity in Control
Preliminary Assessment of the Interface of an Anti-Collision Support System. P.C. Cacciabue, E. Donato, S. Rossano, EC Joint Research Centre, Italy.
Designing Transgenerational Usability in an Intelligent Thermostat by Following an Empirical Model of Domestic Appliance Usage. A. Freudenthal, Delft University of Technology, Netherlands.
Introduction in the Ecology of Spatio-Temporal Affordances in Airspace. An L.M. Abeloos, M. Mulder, M.M. van Paassen, Delft Univ. of Technology.
Modelling Control Situations for the Design of Context-Sensitive Systems. J. Petersen, Technical University of Denmark, Denmark.

13.00-14.30  Lunch

14.30-15.30  Chair: P. C. Cacciabue, EC Joint Research Centre, Italy
Team Coordination and Competence
A Formative Approach to Designing Teams for First-of-a-Kind, Complex Systems. N. Naikar, B. Pearce, D. Drumm, DSTO, P. M. Sanderson, Univ. of Queensland.
Crew Competence in Bulk Carriers. Steve Harding, UK Maritime and Coastguard Agency, UK.
Qualitative Analysis of Visualisation Requirements for Improved Campaign Assessment and Decision Making in Command and Control. C. Macklin, S. Cook, QinetiQ, M. Cook, C. Angus, C. Adams, R. Cooper, Univ. of Abertay.

15.30-16.00  Tea

16.00-17.00  Chair: P. Wieringa, Delft University of Technology
Alarms, Barriers and Defences
Model-Based Principles for Human-Centred Alarm Systems. S.T. Shorrock, Det Norske Veritas, UK, R. Scaife, A. Cousins, NATS, UK.
Toward a Decision Making Support of Barrier Removal. Z. Zhang, P. Polet, F. Vanderhaegen, University of Valenciennes, France.
The Control of Unpredictable Systems. B. Johansson, E. Hollnagel & Å. Granlund, University of Linköping, Sweden.

17.00-17.15  Close; hand-over to EAM 2003 (http://www.ida.liu.se/conferences/EAM2003/); presentation of the best presentation award (http://www.dcs.gla.ac.uk/~johnson/eam2002/Bestpaper.htm).
Restaurants in the Local Area
Crossing the Boundaries of Safe Operation: Training for Error Detection and Error Recovery
Neelam Naikar and Alyson Saunders,
Defence Science and Technology Organisation
PO Box 4331, Melbourne, VIC 3001, Australia; neelam.naikar@dsto.defence.gov.au
Abstract: Widespread acceptance that human error is inevitable has led to the recognition that safety interventions should be directed at error management as well as error prevention. In this paper we present a training approach for helping operators manage the consequences of human error. This approach involves giving operators the opportunity to cross the boundaries of safe operation during training and to practise problem solving processes that enable error detection and error recovery. To identify specific requirements for training, we present a technique for analysing accidents/incidents that examines the boundaries that operators have crossed in the past and the problem solving difficulties they have experienced. This information can then be used to specify the boundaries that operators should be given the opportunity to cross during training and the problem solving processes they should practise. Initial applications of this approach have been encouraging and provide motivation for continuing further work in this area.
Keywords: human error, training, error management.
Introduction
There is widespread acceptance in the aviation community that human error is inevitable (Hollnagel, 1993; Maurino, 2001; Reason, 2000; 2001; Sarter & Alexander, 2000; Shappell & Wiegmann, 2000; Woods, Johannesen, Cook, & Sarter, 1994). An examination of any incident database will reveal a proliferation of errors involving, for example, incorrect switch selections, inadequate scanning of instruments, and inadequate cross-checking and monitoring. These kinds of errors are difficult to reduce or eliminate completely. The consensus therefore is that we must move beyond error prevention to helping aircrew manage the consequences of human error. Currently, safety interventions directed at error management include the design of error-tolerant systems (Noyes, 1998; Rasmussen, Pejtersen & Goodstein, 1994) and Crew Resource Management (Helmreich, Merritt & Wilhelm, 1999).
In this paper, we introduce a new approach for training aircrew to manage human error. This approach recognises, first, that although errors are inevitable, accidents are not. Second, although humans often make errors that threaten system safety, their ability to adapt to dynamic situations also makes them one of the most important lines of defence in averting an accident or incident once an error has occurred (Reason, 2000).
Interestingly, our observations of training in the Australian military indicate that it is rather good at training aircrew to manage equipment errors or failures (Lintern & Naikar, 2001). Many simulator-based training sessions involve the presentation of equipment failures to aircrew so that they can practise dealing with the malfunctions. Hence, aircrew develop well-rehearsed processes for dealing with equipment failures if they occur on real missions. In contrast, little effort is directed at training aircrew to manage the consequences of human error. Instead, as is probably the case in many other organisations, the emphasis to date has been on training aircrew not to make errors in the first place. However, almost 80% of aircraft accidents are said to be caused by human error, and many of these errors are difficult to eliminate completely.
The training approach that we have been developing to help aircrew manage the consequences of human error is based on Jens Rasmussen's conceptualisation of work systems as having boundaries of safe operation (Amalberti, 2001; Hollnagel, 1993; Rasmussen et al., 1994). Accidents or incidents can occur when operators cross these boundaries by making errors. However, crossing the boundaries is inevitable; that is, errors will occur on occasion. The emphasis in training therefore must be on error detection and error recovery. In particular, in the case of simulator-based training, operators must be given the opportunity to cross the boundaries of safe operation in the training simulator and to practise detecting and recovering from crossing these boundaries (Figure 1). Then, if the operators cross these boundaries during real operations, they are more likely to detect and recover from the error, and consequently avert an accident or incident.
Figure 1: Illustration of a work system as having boundaries of safe operation.
Applying this training approach can lead to novel ways of training. For example, consider the training of procedures in both commercial and military aviation. The most common practice is for aircrew to be drilled in executing the steps of a procedure until, it is hoped, they remember it and get it right every time. But a slip or lapse in executing some part of a procedure is inevitable, as an examination of any accident or incident database will show. Applying the training approach that we have developed implies that rather than simply drilling aircrew in executing procedures to minimise the chances of error, aircrew must also be given training in dealing with the situation that evolves if they make an error in executing a procedure. Thus, at least in a training simulator, aircrew should be given the opportunity to not follow a procedure or parts of a procedure, and to practise dealing with and recovering from the situation that evolves.
Some evidence of this training approach may be found in the Australian military, which indicates that it may have validity in operational settings. For example, aircrew are sometimes asked to place the aircraft in an unusual attitude and to practise recovering from this position. While this is very worthwhile, one of the problems in real operations is detecting when the aircraft is in an unusual attitude in the first place. So, our training approach would require that aircrew are also given practice at detecting unusual attitudes. In other cases, aircrew are given practice at detecting errors but not at recovering from errors. For example, a flying instructor acting as a navigator in a two-person strike aircraft may deliberately enter a wrong weapons delivery mode and check to see if the pilot detects the error. If the pilot does not detect the error, the flying instructor will alert the pilot to the situation and the weapons delivery mode will usually be corrected prior to the attack on the target. In contrast, the training approach we present here would require that the flying instructor leave the error uncorrected to give the pilot further opportunity to learn to recognise the cues that an error has occurred, and to practise dealing with the evolving situation.
In order to determine whether this training approach can be usefully applied to military aviation, we have started developing a systematic approach for identifying training requirements for managing human error. This approach relies on an analysis of accidents and incidents to examine the boundaries of a workspace that aircrew have crossed in the past, and the problem solving difficulties that aircrew have experienced in crossing these boundaries. Subsequently, this analysis can be used to identify requirements for training scenarios in terms of the boundaries that aircrew should be given the opportunity to cross in a training simulator, and the problem solving processes they should practise to enable error detection and error recovery.
Identifying Training Requirements for Managing Human Error
In this section, we present a technique for identifying training requirements for managing human error from an analysis of aircraft accidents. This approach involves three main steps. The first step is to identify the critical points in an accident. The second step is to use Rasmussen's decision ladder formalism to examine aircrew problem solving at each of the critical points. The third step is to generate training requirements to manage human error from the preceding analysis. We illustrate each of these steps by example below.
Identifying Critical Points: A critical point in an accident may be described as a crew action/non-action or a crew decision/non-decision, usually in response to an external or internal event, that threatens system safety. To illustrate, consider a hypothetical accident involving an F-111 aircraft in which the pilot executes a manoeuvre manually without first disengaging the autopilot. The pilot experiences difficulty executing the manoeuvre because the autopilot is fighting him for control of the aircraft. However, the pilot does not perform any corrective action. The autopilot disengages and produces an autopilot fail tone but the pilot fails to respond to the tone. As the autopilot disengages while the pilot is exerting high stick forces in order to gain control of the aircraft, the aircraft is thrown into a hazardous attitude and then hits the ground.
In this accident, the first critical point involves the pilot executing a manoeuvre manually without disengaging the autopilot and then failing to perform any corrective action. The second critical point occurs when the pilot does not respond to the autopilot fail tone. The third critical point occurs when the pilot is unable to recover from the hazardous aircraft attitude.
Examining Aircrew Problem Solving: This step involves using Rasmussen's decision ladder formalism to examine aircrew problem solving at each critical point. We chose the decision ladder over more traditional models of information processing because: all of the steps in the decision ladder need not be followed in a linear sequence; the decision ladder accommodates many starting points; and the decision ladder accommodates shortcuts, or shunts and leaps, from one part of the model to another. Thus, the decision ladder is a suitable template for modelling expert behaviour in complex work systems (see Vicente, 1999, for an extended discussion of these arguments). The decision ladder has also previously been used in accident analysis for classifying errors (O'Hare, Wiggins, Batt & Morrison, 1994; Rasmussen, 1982).
Using the decision ladder (Figure 2), the problem solving processes of aircrew at each critical point are analysed in terms of observation of information, situation analysis, goal evaluation, and planning and execution. The aim is to understand why the aircrew may have responded as they did. To do this, we have found it useful to prompt ourselves with the following questions about the aircrew's behaviour:
Is it possible that the crew did not observe critical information?
Is it possible that the crew had difficulty diagnosing the situation?
Is it possible that the crew gave precedence to alternative goals?
Is it possible that the crew had difficulty defining the tasks and resources required for dealing with the situation?
Is it possible that the crew had difficulty selecting or formulating procedures for dealing with the situation?
Is it possible that the crew did not execute the procedure as intended?
Figure 2: A decision ladder for a hypothetical accident.
Figure 2 shows a decision ladder for the first critical point in the hypothetical accident we presented earlier. The annotations in the figure are presented as answers to the questions posed above, with the numbers representing the order in which the decision ladder should be followed. From this representation, we begin to understand that the pilot had difficulty in detecting the error he had made (i.e. that he had tried to execute a manoeuvre manually without disengaging the autopilot). More specifically, we can see that the difficulty he had in detecting the error was not because he did not observe the critical information (he was aware that he was finding it hard to execute the manoeuvre manually) but rather because he was unable to diagnose why he was finding it hard to execute the manoeuvre manually. If the pilot had made the correct diagnosis, he would probably have detected the error (that is, realised that he had forgotten to disengage the autopilot), and consequently corrected the error (that is, disengaged the autopilot).
Generating Training Requirements: The final step involves generating requirements for training on the basis of the preceding analysis. In particular, the elements of the preceding analysis that are relevant for informing training requirements include the boundaries that were crossed by aircrew, and the problem solving difficulties that they experienced. The requirements can then be structured in terms of the boundaries that aircrew should be given the opportunity to cross during training, and the problem solving processes that they should practise to enable error detection and error recovery.
From the analysis of the hypothetical accident presented above, the training requirement would be to give aircrew the opportunity to fly the aircraft manually with the autopilot engaged so that they can experience how the aircraft would respond. In addition, aircrew should practise disengaging the autopilot while they are controlling the aircraft manually. Then, if the aircrew forget to disengage the autopilot before executing a manoeuvre manually on a real mission, they are more likely to diagnose the error and recover from it successfully. This kind of training intervention is consistent with theories of naturalistic decision making which recognise that under time-critical, high-workload conditions, experts can make quick and effective decisions by matching situations to pre-existing templates of diagnoses and solutions that have worked in the past (Klein, 1993).
Application of Technique
Aircraft Accidents: So far we have applied the technique that we have developed to three F-111 accidents in the Royal Australian Air Force (the F-111 is a two-person strike aircraft). The accident data that was necessary for conducting the analysis was readily available in reports of the Accident Investigation Teams and Boards of Inquiry. Recordings of cockpit activity in the accident aircraft were particularly valuable for constructing the decision-ladder models of aircrew problem solving.
Examining the accident data was the most time-consuming component of the analysis. It took between three and five days to examine the data for each accident (depending on the amount of information that was available about each accident). Once the accident data had been examined, it took approximately a day to complete the first step of the technique, two days to complete the second step, and a day to complete the third step.
Our analyses of the three aircraft accidents resulted in 6 training requirements. To assess the usefulness of the technique we interviewed 7 F-111 aircrew and 7 F-111 training instructors. Some of the questions we asked them included: (1) whether they already conducted the training suggested; (2) whether the training suggested was useful; and (3) whether they had been in an unsafe situation that was similar to the one that had resulted in the training requirement. We are still in the process of analysing the interview transcripts in detail, but from a preliminary examination of the transcripts it appears that they do not conduct the training suggested, that they thought the training suggestions were useful, and that they had previously been in similar unsafe situations.
Aircraft Incidents: We are also in the process of applying the technique we have developed to F-111 incidents. We have found that the data necessary for analysing the incidents is generally not available in the incident reports that have been filed by aircrew. To resolve this problem, we will interview aircrew about the incidents they have reported using a technique called Critical Decision Method (Klein, Calderwood & MacGregor, 1989). This technique allows interviewers to gradually shift aircrew from an operational description of the incident, which is the language that aircrew are most accustomed to speaking in, to a description of the problem solving processes that were behind the incident.
Our initial attempt at using the Critical Decision Method involved very little adaptation of the technique as it is described in Klein et al. (1989) and Hoffman, Crandall & Shadbolt (1998). Briefly, aircrew were asked to provide a general description of the incident followed by a more detailed account of the sequence of events in the incident. The interviewer and the aircrew then established a timeline for the incident and identified the critical points in the incident. Following that, the interviewer used a number of probes to elicit more detailed information from aircrew about the problem solving processes at each of the critical points in the incident. The probes were much the same as those described in Klein et al. (1989) and Hoffman et al. (1998).
On reviewing the interview transcripts we discovered that we had not fully captured the information we needed to develop decision-ladder models of the incidents, and consequently to identify training requirements. In addition, a significant difference in analysing incidents, as opposed to accidents, is that the aircrew who were involved can provide valuable information about how they actually detected and recovered from the error. Thus, we needed to interview aircrew not only about the problem-solving difficulties that led them to cross the boundaries of safe operation but also about the problem solving processes that enabled error detection and error recovery.
In other words, for each incident, the interviewer should focus on at least three critical points: (1) the error; (2) error detection; and (3) error recovery. The interviewer should use general probes to prompt free recall of the aircrew's experiences at each critical point, followed by specific probes where necessary to elicit the information required for constructing decision-ladder models. Table 1 illustrates some of the general and specific probes that may be useful at each critical point. The specific probes are organised according to the parts of the decision ladder that the information elicited is relevant to. In the last two columns, the cells that are filled indicate that error detection processes are represented on the left side of the decision ladder whereas error recovery processes are represented at the top part and on the right side of the decision ladder.
Table 1: General and specific probes for interviewing aircrew about errors, error detection, and error recovery.
General probes
  Error: What went wrong?
  Error detection: How did you detect the error?
  Error recovery: How did you react to or recover from the error?

Observation of information
  Error: What information did you have about the situation?
  Error detection: What cues alerted you that something was wrong?

Diagnosis
  Error: What was your assessment of the situation at this point?
  Error detection: What was your assessment of what had gone wrong?

Goal evaluation
  Error: What were your specific goals at this time? What other options did you consider? Why did you select this option/reject other options?
  Error recovery: What were your specific goals at this time? What other options did you consider? Why did you select this option/reject other options?

Definition of tasks and resources
  Error: What was your plan for achieving your goals?
  Error recovery: What was your plan for recovering from the situation?

Formulation and selection of procedures
  Error: Were there procedures for dealing with the situation? What were the steps of your plan?
  Error recovery: Were there procedures for recovering from this situation? What were the steps of your recovery plan?

Execution
  Error: What tasks or actions did you carry out?
  Error recovery: What tasks or actions did you carry out?
After the relevant information at each critical point has been obtained, the following probes may be useful for uncovering information for developing error management strategies: (1) In hindsight, could you have detected the error earlier and, if so, how?; (2) In hindsight, could you have recovered more effectively/efficiently from the error and, if so, how?; (3) In hindsight, is there anything you could have done to prevent the error from occurring and, if so, what?; (4) In hindsight, why do you think the error occurred?
We will trial this interview protocol in the near future. Further development and refinement of the probes may be necessary. Later, it may be worth considering how to design reporting templates for an incident database so that the information needed to develop training requirements for error detection and error recovery is captured.
Implementation of Training: Many of our training requirements must be implemented in training simulators rather than in real aircraft. This is not surprising because our training approach requires that aircrew cross the boundaries of safe operation and it would be dangerous to do this in real aircraft. The challenge that we face is that the training simulators that are available may not have the necessary capability for supporting some of our training requirements. One solution is to document these training requirements for future simulation acquisitions.
Another option is to explore alternative techniques for training. Reason (2001) reported that the ability of surgical teams to deal with adverse incidents depended in part on the extent to which they had mentally rehearsed the detection and recovery of their errors. Thus, one possibility we are exploring is the use of mental rehearsal techniques, perhaps with the aid of PC-based visualisation tools, to give aircrew experience in crossing the boundaries and in detecting and recovering from crossing the boundaries.
In addition, in a study examining the role of prior cases in pilot decision making, O'Hare and Wiggins (2002) found that written materials were an important source of remembered cases. They also suggested that PC-based simulations may be effective candidates for case-based training systems. The approach we have presented in this paper may be useful for the preparation of cases for training, in particular, for providing information about the boundaries that were crossed by aircrew, the problem solving difficulties that they experienced, and the problem solving processes that would enable error detection and error recovery.
Conclusion
In this paper, we have described a new approach for training aircrew to manage human error. In addition, we have presented a technique for analysing aircraft accidents and incidents to identify specific requirements for training aircrew in error detection and error recovery. Initial applications of this approach have been encouraging and provide motivation for continuing further work in this area.
Acknowledgements
We thank the Strike Reconnaissance Group of the Royal Australian Air Force for sponsoring this work and the Directorate of Flying Safety of the Royal Australian Air Force for their support. We also thank Lee Horsington, Dominic Drumm, and Anna Moylan from the Defence Science and Technology Organisation for their assistance on this project. In addition, we thank Jim McLennan from Swinburne University of Technology for conducting some of the initial interviews with aircrew; and Gary Klein and Laura Militello of Klein Associates for their advice on the Critical Decision Method.
References
Amalberti, R. (2001). The paradoxes of almost totally safe transportation systems. Safety Science, 37, 109-126.
Helmreich, R.L., Merritt, A.C., & Wilhelm, J.A. (1999). The evolution of crew resource management in commercial aviation. International Journal of Aviation Psychology, 9, 19-32.
Hoffman, R.R., Crandall, B., & Shadbolt, N. (1998). Use of the critical decision method to elicit expert knowledge: A case study in the methodology of cognitive task analysis. Human Factors, 40(2), 254-276.
Hollnagel, E. (1993). The phenotype of erroneous actions. International Journal of Man-Machine Studies, 39, 1-32.
Klein, G. A. (1993). Naturalistic decision making: Implications for design. Report CSERIAC SOAR 93-1. Ohio: Crew Systems Ergonomics Information Analysis Center.
Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method of eliciting knowledge. IEEE Transactions on Systems, Man and Cybernetics, 19, 462-472.
Lintern, G., & Naikar, N. (2001). Analysis of crew coordination in the F-111 mission. DSTO Client Report (DSTO-CR-0184). Aeronautical and Maritime Research Laboratory: Melbourne, Australia.
Maurino, D. (2001). At the end of the parade. Flight Safety Magazine, Jan/Feb 2001, pp.36-39.
Noyes, J.M. (1998). Managing errors. In Proceedings of the UKACC International Conference on Control, pp.578-583. London: Institution of Electrical Engineers.
OHare, D., & Wiggins, M. (2002). Remembrance of cases past: Who remembers what, when confronting critical flight events. Unpublished manuscript.
OHare, D., Wiggins, M., Batt, R., Morrison, D. (1994). Cognitive failure analysis for aircraft accident investigation. Ergonomics, 37(1), 1855-1869.
Rasmussen, J. (1982). Human errors: A taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents, 4, 311-333.
Rasmussen, J., Pejtersen, A.M., & Goodstein, L.P. (1994). Cognitive Systems Engineering. New York: John Wiley & Sons.
Reason, J. (2000). Human error: Models and management. British Medical Journal, 320, 768-770.
Reason, J. (2001). The benign face of the human factor. Flight Safety Magazine, Jan/Feb 2001, pp. 28-31.
Sarter, N.B., & Alexander, H.M. (2000). Error types and related error detection mechanisms in the aviation domain: An analysis of aviation safety reporting system incident reports. International Journal of Aviation Psychology, 10, 189-206.
Shappell, S.A., & Wiegmann, D.A. (2000). The Human Factors Analysis and Classification System HFACS. Report DOT/FAA/AM-00/7. Springfield: National Technical Information Service.
Vicente, K.J. (1999). Cognitive Work Analysis. New Jersey: Lawrence Erlbaum Associates.
Woods, D.D., Johannesen, L.J., Cook, R.I., & Sarter, N.B. (1994). Behind human error: Cognitive systems, computers and hindsight. Report CSERIAC SOAR 94-01. Ohio: Crew Systems Ergonomics Information Analysis Center.
Activity Tracking for Pilot Error Detection from Flight Data
Todd J. Callantine,
San Jose State University/NASA Ames Research Center, MS 262-4, Moffett Field, CA 94035, USA.
tcallantine@mail.arc.nasa.gov
Abstract: This paper presents an application of activity tracking for pilot error detection from flight data. It describes the Crew Activity Tracking System (CATS), in-flight data collected from the NASA Langley Boeing 757 Airborne Research Integrated Experiment System aircraft, and a model of B757 flight crew activities. It then presents an example of CATS detecting actual in-flight crew errors.
Keywords: human error detection, activity tracking, glass cockpit aircraft
Introduction
This paper describes an application of the Crew Activity Tracking System (CATS) that could contribute to future efforts to reduce flight crew errors. It demonstrates how CATS tracks crew activities to detect errors, given flight data and air traffic control (ATC) clearances received via datalink. CATS implements an intent inference technology, called activity tracking, in which it uses a computational engineering model of the operator's task, together with a representation of the current operational context, to predict nominally preferred operator activities and to interpret actual operator actions.
CATS was originally implemented to track the activities of Boeing 757 (B757) glass cockpit pilots, with a focus on automation mode errors (Callantine and Mitchell, 1994). The CATS activity tracking methodology was validated as a source of real-time knowledge about B757 automation usage to support a pilot training/aiding system (Callantine, Mitchell, and Palmer, 1999). CATS has since proven useful as an analysis tool for assessing how operators use procedures developed to support new operational concepts (Callantine, 2000). It also serves as a framework for developing agents to represent human operators in incident analyses and distributed simulations of new operational concepts (Callantine, 2001a).
The research described here draws in large part from these earlier efforts. In particular, the CATS model of B757 flight crew activities has been expanded and refined. The representation of operational context used to reference the model to predict nominally preferred activities has similarly undergone progressive refinement. And, while the idea of using CATS to detect flight crew errors from flight data is not new, this paper presents an example of CATS detecting a genuine, in-flight crew error from actual aircraft flight data.
Using CATS to detect errors from flight data has several potential benefits (Callantine, 2001b). First, CATS provides information about procedural errors that do not necessarily result in deviations, and therefore would not otherwise be reported (cf. Johnson, 2000). Second, CATS enables airline safety managers to automatically incorporate information about a detected error into a CATS-based training curriculum. Other pilots could relive a high-fidelity version of the context in which another crew erred. Increasing the efficiency and fidelity of information transfer about errors to the pilot workforce in this way would likely yield safety benefits. A safety-enhancement program that uses CATS to detect errors would improve training by requiring safety and training managers to explicate policies about how an aircraft should preferably be flown.
The paper is organized as follows. It first describes the CATS activity tracking methodology, and information flow in CATS. The paper then describes a CATS implementation for detecting pilot errors. It first describes flight data obtained for this demonstration from the NASA Langley B757 Airborne Research Integrated Experiment System (ARIES) aircraft. It next describes two key representations. The first is a portion of a CATS model of B757 flight operations. The second is a representation of the constraints conveyed by ATC clearances that plays a key role in representing the current operational context (Callantine, 2002b). An example from the available flight data then illustrates CATS detecting pilot errors. The paper concludes with a discussion of future research challenges. A lengthier report on this research appears in Callantine (2002a).
Activity Tracking
Activity tracking is not merely the detection of operational deviations (e.g., altitude below glidepath). The activity tracking methodology involves first predicting the set of expected nominal operator activities for the current operational context, then comparing actual operator actions to these predictions to ensure operators performed correct activities. In some situations, various methods or techniques may be acceptable; therefore the methodology also includes a mechanism for determining that, although operator actions do not match predictions exactly, the actions are nonetheless correct. In this sense, CATS is designed to track flight crew activities in real time and understand that they are error-free. As the example below illustrates, errors CATS detects include those that operators themselves detect and rapidly correct; such errors may nonetheless be useful to examine.
CATS identifies two types of errors: errors of omission, and errors of commission. It further identifies errors of commission that result when the right action is performed with the wrong value. CATS does not base these determinations on a formulaic representation of how such errors would appear in a trace of operator activities, nor attempt to further classify errors (e.g., reversals). This is because the CATS model does not represent the steps of procedures explicitly as step A follows step B; instead it represents procedures implicitly by explicitly specifying the conditions under which operators should preferably perform each action. CATS predicts concurrent actions whenever the current context satisfies conditions for performing two or more activities. CATS interprets concurrent actions whenever the granularity of action data identifies them as such.
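To make this implicit representation concrete, the following sketch (hypothetical Java; the activity names, context fields, and conditions are illustrative assumptions, not the actual CATS model or code) pairs each activity with the condition under which it is nominally preferred, predicts every activity whose condition the current context satisfies, and checks a detected action against those predictions, holding unmatched actions for later assessment.

import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

public class ActivityTrackingSketch {

    // Each modelled activity carries the context condition under which it is nominally preferred.
    record Activity(String name, Predicate<Map<String, Object>> condition) {}

    public static void main(String[] args) {
        List<Activity> model = List.of(
            new Activity("dial MCP altitude knob",
                ctx -> !ctx.get("mcpAltitude").equals(ctx.get("clearedAltitude"))),
            new Activity("press VNAV switch",
                ctx -> ctx.get("mcpAltitude").equals(ctx.get("clearedAltitude"))
                        && !"VNAV".equals(ctx.get("pitchMode"))));

        // Current operational context: cleared to 16,000 ft, MCP target still at 12,000 ft.
        Map<String, Object> context = Map.of(
            "mcpAltitude", 12000, "clearedAltitude", 16000, "pitchMode", "ALT HOLD");

        // Predict every activity whose condition the current context satisfies;
        // concurrent predictions arise naturally when several conditions hold at once.
        List<String> predicted = model.stream()
            .filter(a -> a.condition().test(context))
            .map(Activity::name)
            .toList();
        System.out.println("Predicted: " + predicted);   // [dial MCP altitude knob]

        // Interpret a detected action: a match is correct; anything else is held until
        // later context either legitimises it or exposes it as an error.
        String detected = "press VNAV switch";
        System.out.println(predicted.contains(detected)
            ? detected + " matches a prediction"
            : detected + " held for later assessment");
    }
}

Because prediction here is simply condition evaluation over the context, no explicit step ordering is needed, which mirrors the implicit representation of procedures described above.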
Like analysis techniques that rely on a reflection of the task specification in a formal model of a system (e.g., Degani and Heymann, 2000), CATS relies on a correctly functioning system to reflect the results of actions (or inaction) in its state. CATS identifies errors by using information in the CATS model that enables it to assess actions (or the lack thereof, in the case of omissions) in light of the current operational context and the future context formed as a result of operator action (or inaction). Thus, one might view the CATS error detection scheme as closing the loop between a representation of correct task performance and the controlled system, and evaluating feedback from the controlled system to ensure it jibes with correct operator activities. Given that the system is operating normally and providing good data, this is a powerful concept.
Crew Activity Tracking System (CATS): Figure 1 generically depicts information flow in CATS, between a controlled system and CATS, and between CATS and applications based on it. CATS uses representations of the current state of the controlled system and constraints imposed by the environment (including performance limits on the controlled system) to derive the current operational context. CATS then uses this representation to generate predictions from its model of operator activities. CATS compares detected operator actions to its predicted activities, and it assesses actions that it cannot immediately interpret as matching a prediction by periodically referencing the activity model until it receives enough new context information to disambiguate possible interpretations.
CATS Implementation for Flight Data Error Detection
The following subsections specifically describe the implementation of CATS for detecting pilot errors from flight data. The first is devoted to the flight data itself. The second illustrates a portion of the CATS model, and the third describes how CATS generates the current operational context using a representation of ATC clearance constraints. The CATS model fragment includes portions relevant to an example of CATS detecting pilot errors presented in the fourth subsection. The following subsections all assume some knowledge of commercial aviation and a B757-style autoflight system. A detailed description of the Boeing 757 autoflight system mode usage is provided in Callantine, Mitchell, and Palmer (1999); see Sarter and Woods (1995), and Wiener (1989) for discussions of mode errors and automation issues.
B757 ARIES Flight Data: The NASA Langley B757 ARIES aircraft, with its onboard Data Acquisition System (DAS), provided the flight data for this research (Figure 2). The DAS collects data at rates in excess of 5 Hz, using onboard computers that perform sensor data fusion and integrity checking. In future applications such functionality may be required within CATS, so that data can be acquired directly from aircraft data busses. Table 1 shows the collection of values that comprise the data set. The data include information from important cockpit systems. The rightmost column of Table 1 shows data CATS derives from the sampled values using filters. Included are crew action events CATS derives from the values of control states. Target value settings on the mode control panel (MCP) are derived with begin and end values, as in formal action specification schemes (cf. Fields, Harrison, and Wright, 1996). The present error-detection application focuses on interactions with the autoflight system MCP, so it only uses some of the available data. Also, for the present application, cockpit observations provide required clearance information.
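As a rough illustration of how such an action event might be derived, the sketch below (hypothetical Java; the field names and settling behaviour are assumptions, not the ARIES DAS or CATS filter implementation) scans successive MCP altitude-target samples and emits a single "set MCP altitude" event with begin and end values once the dialled value settles.

public class McpAltitudeEventSketch {
    // Derive one "set MCP altitude" action event (with begin and end values) from a
    // run of sampled MCP altitude targets, emitting the event when the value settles.
    static void detectEvents(int[] samples) {
        int begin = samples[0];
        boolean changing = false;
        for (int i = 1; i < samples.length; i++) {
            if (samples[i] != samples[i - 1]) {
                if (!changing) { begin = samples[i - 1]; changing = true; }
            } else if (changing) {
                System.out.println("Action: set MCP altitude, begin=" + begin
                        + " ft, end=" + samples[i] + " ft");
                changing = false;
            }
        }
    }

    public static void main(String[] args) {
        // Samples taken while a pilot dials the altitude knob from 12,000 ft to 16,000 ft.
        detectEvents(new int[] {12000, 12000, 13000, 14200, 15600, 16000, 16000, 16000});
    }
}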
CATS Model of B757 Navigation Activities: Figure 3 depicts a fragment of the CATS model used to detect errors from B757 ARIES data. The model decomposes the highest level activity, fly glass cockpit aircraft, into sub-activities as necessary down to the level of pilot actions. Figure 3 illustrates eight actions. All actions derivable from the data are included in the full model. Each activity in the model is represented with conditions that express the context under which the activity is nominally preferred, given policies and procedures governing operation of the controlled system. The parenthesized numbers in Figure 3 refer to Table 2, which lists the and-or trees that comprise these rules. For comparison to other work that considers human errors involved with CDU manipulations (e.g., Fields, Harrison, and Wright, 1997), the model fragment in Figure 3 shows just one of numerous FMS configuration tasks. However, because the B757 ARIES flight data do not include CDU data, modeling these tasks is not relevant to the present application.
Table 1 Available B757 ARIES data, including derived states and action events (rightmost column). The B757 ARIES DAS collects some variables from multiple sources.
Representation of ATC Clearance Constraints for Context Generation: Environmental constraints play a key role in defining the goals that shape worker behavior in complex sociotechnical systems (Vicente, 1999). CATS also relies on a representation of environmental constraints to construct a representation of the current operational context (see Figure 1). These factors motivated recent research on an object-oriented representation of the constraints ATC clearances impose on flight operations (Callantine, 2002b). Figure 4 shows the representation, which captures three key dimensions of constraint: vertical, lateral, and speed. CATS employs a rule base that enables it to modify this constraint representation to reflect the constraints imposed (or removed) by each new ATC clearance.
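A minimal sketch of such a constraint representation is shown below (hypothetical Java with assumed class and rule names; the published representation in Callantine, 2002b, is object-oriented and considerably richer). The three constraint dimensions are held separately, so a rule triggered by a new clearance updates only the dimension that the clearance affects.

public class ClearanceConstraintSketch {
    // Vertical, lateral, and speed constraint dimensions are held separately, so a new
    // clearance replaces only the dimension it affects and leaves the others in force.
    static class ClearanceConstraints {
        Integer targetAltitudeFt;   // vertical
        String lateralRoute;        // lateral (e.g. a direct-to fix or route segment)
        Integer targetSpeedKt;      // speed

        void climbAndMaintain(int altitudeFt) { targetAltitudeFt = altitudeFt; }
        void directTo(String fix)             { lateralRoute = "DIRECT " + fix; }
        void maintainSpeed(int speedKt)       { targetSpeedKt = speedKt; }

        @Override public String toString() {
            return "vertical=" + targetAltitudeFt + " ft, lateral=" + lateralRoute
                    + ", speed=" + targetSpeedKt + " kt";
        }
    }

    public static void main(String[] args) {
        ClearanceConstraints constraints = new ClearanceConstraints();
        constraints.climbAndMaintain(12000);
        constraints.maintainSpeed(250);
        constraints.directTo("ABC");           // "ABC" is an illustrative fix name
        constraints.climbAndMaintain(16000);   // "climb and maintain 16,000 feet"
        System.out.println(constraints);       // lateral and speed constraints unchanged
    }
}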
Error Detection Example: The paper now presents an example of CATS detecting errors from B757 ARIES flight data collected during actual flight test activities. (A series of snapshots, including some of the entire CATS interface, illustrates the example.) Although the data are real, strict procedures about how the pilots should preferably fly the airplane are unreasonable in the flight test environment. Nonetheless, by imposing the model depicted in part in Figure 3, CATS was able to detect errors, and the errors were not contrived. While the errors CATS detects are insignificant, because they in no way compromised safety, the exercise nonetheless demonstrates the viability of CATS for error detection. On a Sun Blade 1000 test platform, the CATS Java code processes the flight data at between twelve and twenty-two times real time.
Figure 5 shows the CATS interface at the start of the scenario (Scenario Frame 1). The crew has just received a clearance to "climb and maintain 16,000 feet." CATS modifies its representation of ATC clearance constraints accordingly, and using the updated context, predicts that the crew should set the new target altitude on the MCP by dialing the MCP altitude knob.
In Scenario Frame 2 (Figure 6), a pilot instead pushes the VNAV switch. Because CATS has not predicted this action, it cannot interpret the action initially. CATS instead continues processing data. In Scenario Frame 3 (Figure 7), CATS has received enough new data to interpret the VNAV switch press action. Had the action been correct, the autoflight system would have reflected this by engaging the VNAV mode and commencing the climb. However, VNAV will not engage until a new target altitude is set. To assess the VNAV switch press with regard to the current context, in which the airplane is still in ALT HOLD mode at 12,000 feet, CATS searches its model to determine whether any parent activities of the VNAV switch press contain information linking the action to a specific context. CATS finds that the engage VNAV activity should reflect VNAV mode engagement in the current context (see Figure 3). Because this is not the case, CATS flags the VNAV switch press as an error. Meanwhile, CATS still expects the crew to dial the MCP altitude knob.
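The assessment step in this frame can be sketched as follows (hypothetical Java with assumed names, not the CATS implementation): the held VNAV switch press is flagged as an error because the mode engagement its parent activity requires is not reflected in the newly received context.

import java.util.Map;

public class UnpredictedActionAssessmentSketch {
    // Assess a held, unpredicted action: it is an acceptable alternative if the effect
    // required by its parent activity now appears in the context, otherwise it is an error.
    static String assess(String action, String requiredPitchMode, Map<String, Object> newContext) {
        boolean effectObserved = requiredPitchMode.equals(newContext.get("pitchMode"));
        return action + (effectObserved ? ": acceptable alternative action" : ": flagged as error");
    }

    public static void main(String[] args) {
        // Context received after the unpredicted VNAV switch press: still ALT HOLD,
        // because VNAV cannot engage until a new MCP target altitude is set.
        Map<String, Object> newContext = Map.of("pitchMode", "ALT HOLD", "altitudeFt", 12000);
        System.out.println(assess("press VNAV switch", "VNAV", newContext));
    }
}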
CATS detects a second FL CH switch press in Scenario Frame 7 (Figure 11). Perhaps a pilot performed this action as insurance, to engage a mode to begin the climb. Because FL CH mode engages, and this is reflected in the CATS representation of the current context, CATS interprets both FL CH switch presses as acceptable alternative actions. By this time, CATS has also flagged the second VNAV switch press as an error. In the final frame of the scenario (Scenario Frame 8, Figure 12), the aircraft has begun climbing in FL CH mode. At this point the crew opts to engage VNAV mode. At last, CATS detects the predicted VNAV switch press and interprets it as correct.
Conclusions and Future Research
The above example demonstrates that CATS can detect errors from flight data. Although the errors CATS detects are inconsequential, this research indicates CATS can provide contextual information useful for disambiguating the causes of deviations or unusual control actions that arise in incidents or accidents. Discoveries made using CATS can be incorporated into training curricula by connecting a CATS-based training system to a simulator and allowing pilots to fly under conditions that correspond to the actual context of an error-related event. Such capabilities are also useful outside the airline arena, as they support both fine-grained cognitive engineering analyses and human performance modeling research.
Using CATS with flight data collected at continuous rates results in better performance. Event-based data, such as those available from the NASA ACFS, require more complicated interpolation methods to avoid temporal gaps in the CATS representation of context that can adversely affect CATS performance. Important directions for further research involve improving the coverage of flight data to include the FMS and CDUs, as well as work on methods to automatically acquire ATC clearance information. This research indicates that, if CATS has access to data with full, high-fidelity coverage of the controlled system displays and controls, it can expose the contextual nuances that surround errors in considerable detail.
Acknowledgements
This work was funded under the System Wide Accident Prevention element of the FAA/NASA Aviation Safety Program. Thanks to the NASA Langley B757 Flight Test team for their assistance with data collection.
References
Callantine, T. (2001a). Agents for analysis and design of complex systems. Proceedings of the 2001 International Conference on Systems, Man, and Cybernetics, October, 567-573.
Callantine, T. (2001b). The crew activity tracking system: Leveraging flight data for aiding, training, and analysis. Proceedings of the 20th Digital Avionics Systems Conference, 5.C.3-1 to 5.C.3-12 (CD-ROM).
Callantine, T. (2002a). Activity tracking for pilot error detection from flight data. NASA Contractor Report 2002-211406, Moffett Field, CA: NASA Ames Research Center.
Callantine, T. (2002b). A representation of air traffic control clearance constraints for intelligent agents. Proceedings of the 2002 IEEE International Conference on Systems, Man, and Cybernetics, Hammamet, Tunisia, October.
Callantine, T., and Mitchell, C. (1994). A methodology and architecture for understanding how operators select and use modes of automation in complex systems. Proceedings of the 1994 IEEE Conference on Systems, Man, and Cybernetics, 1751-1756.
Callantine, T., Mitchell, C., and Palmer, E. (1999). GT-CATS: Tracking operator activities in complex systems. NASA Technical Memorandum 208788, Moffett Field, CA: NASA Ames Research Center.
Fields, R., Harrison, M., and Wright, P. (1997). THEA: Human error analysis for requirements definition. Technical Report 2941997, York, UK: University of York Computer Science Department.
Fields, R., Wright, P., and Harrison, M. (1996). Temporal aspects of usability: Time, tasks and errors. SIGCHI Bulletin 28(2).
Johnson, C. (2000). Novel computational techniques for incident reporting. In D. Aha & R. Weber (Eds.), Intelligent Lessons Learned Systems: Papers from the 2000 Workshop (Technical Report WS-00-03), Menlo Park, CA: AAAI Press, 20-24.
Sarter, N., and Woods, D. (1995). How in the world did we ever get into that mode? Mode error and awareness in supervisory control. Human Factors, 37(1), 5-19.
Vicente, K. (1999). Cognitive work analysis: Toward safe, productive, and healthy computer-based work. Mahwah, NJ: Lawrence Erlbaum Associates.
Development and Preliminary Validation of a Cognitive Model of Commercial Airline Pilot Threat Management Behaviour
Simon Banbury1, Helen Dudfield2 & Mike Lodge3
1School of Psychology, Cardiff University, Cardiff, CF5 1LP, UK. Email: banburys@cardiff.ac.uk
2QinetiQ, Farnborough / 3British Airways.
Abstract: The present study extends previous models of threat management (e.g. Helmreich et al., 1999) by taking a cognitive approach to understanding how pilots behave during threat situations. The CAPT-M model was developed by a group of psychologists and airline training captains to describe the behaviour of commercial airline pilots engaging in effective threat management. The present study attempted to identify the core behaviours that pilots use to manage threat situations successfully, and to examine how these behaviours are perceived in terms of both their importance to managing threat situations and their potential for training. These data were then used to validate the CAPT-M model. The study used a variety of user-consultation techniques, such as group discussions and questionnaires, with 34 commercial airline pilots. The findings revealed tentative support for the structure and content (i.e. component stages) of the CAPT-M model. Specifically, participants rated situation assessment and re-assessment, together with having clearly defined goal and situation models, as the most important components of the model. Theoretical considerations and practical implications are discussed.
Keywords: threat management / pilot cognition.
Introduction
The present study was conducted as part of the European Commission Framework V funded project Enhanced Safety through Situation Awareness Integration in Training (ESSAI). The consortium partners are QinetiQ, NLR, DLR, Dedale, Thales, British Airways, Aero-Lloyd, Alitalia and the University of Berlin. The ESSAI project seeks to address problems that occur in commercial flight when pilots are confronted with non-normal and emergency situations, or threats, for which they do not have appropriate procedures. These threats may occur because of a lack of Situation Awareness (SA) on the part of the crew (procedures exist, but the crew does not recognise the situation), or may be the result of an unusual chain of events. A potential solution is thought to consist of enhanced training to provide strategies for effective threat management during non-normal or emergency flight operations.
Although a threat can be defined as either expected, such as terrain or predicted weather, or unexpected, such as ATC instructions or system malfunctions (Helmreich, Klinect and Wilhelm, 1999), we argue that threats typically manifest themselves in two ways: firstly, no well-defined procedures exist to resolve the threat; and secondly, even if a solution is found, its outcome is uncertain. Clearly, one strategy for managing a threat is to seek an understanding of the event so that a solution can be found. Once this understanding is reached and the actions for resolving the situation are in place, the event is no longer termed a threat. This is consistent with strategies used in other domains to manage crises. For example, the first course of action taken by medical staff with severely traumatised patients is to stabilise the patient and assess the problem; only when this has been achieved is remedial action undertaken. Once again, the event ceases to be a threat when an understanding of the situation is reached and a successful course of action is instigated.
Situation assessment, or the process of acquiring and maintaining Situation Awareness (for a review see Endsley, 1995), is an important step in threat management because it provides a state of awareness of the event and a starting point for any decision-making undertaken to resolve it. Clearly, accurate situation assessment will lead to more effective threat management. Furthermore, the level of understanding of the event will dictate the behaviour that follows. Rasmussen's (1983) model of human performance provides a convenient framework for describing such behaviour during a threat situation. This framework assumes that behaviour can be represented at three levels: a skill-based level that utilises automated sensori-motor patterns built up with practice; a rule-based level that operates on a recognition basis, where the behaviours for known tasks are retrieved as required; and finally a knowledge-based level, for which no know-how rules are available from previous encounters, necessitating a higher conceptual level of control where goals and strategies are explicitly considered.
The very nature of a threat situation dictates that skill- and rule-based behaviours are impossible, given that the pilot has not encountered the situation before and there may be no formal procedures with which to deal with it. These limitations imply that pilots have to manage and resolve the threat event through their own abilities, using a knowledge-based strategy. An important consideration for these types of behaviour is the amount of cognitive effort required. On the one hand, skill-based behaviours rely heavily on highly practised, automatic processing and require little cognitive effort. On the other hand, knowledge-based behaviours require significant cognitive effort in order to evaluate goals and decision options. This is unfortunate, given that uncertain information, time pressure, high workload and high levels of tension and anxiety often typify threat situations.
A Cognitive Model of Commercial Airline Pilot Threat Management (CAPT-M): In order to develop a training package to enhance the effectiveness of threat management, it was first necessary to gain an understanding of how pilots should ideally manage threat situations. A small working group of psychologists and airline training captains was assembled to develop a cognitive model to describe the behaviour of pilots engaging in effective threat management. This model was based on current training practices, anecdotal evidence (e.g. accident and incident case studies and personal experiences), theories of human cognition (e.g. situation awareness, decision making), and descriptive models of threat management (Helmreich, Klinect and Wilhelm 1999).
Figure 1 A model of Commercial Airline Pilot Threat Management (CAPT-M) behaviour
The CAPT-M model was proposed to describe the relationship between Threat Management and Situation Awareness, and is based on the OODA (Observation, Orientation, Decision, Action) loop (Fadok, Boyd and Warden, 1995). The model also extends research by Trollip and Jensen (1991) on the cognitive judgement of pilots. They suggest that pilots utilise an eight-step process to solve problems on the flightdeck: vigilance, problem discovery, problem diagnosis, alternative generation, risk analysis, background problem (e.g. incidental factors), decision and action. The central tenets of the model are that the process is cyclical and adaptive (Neisser, 1976), and that the precursor to any decision-making behaviour is Situation Assessment (Endsley, 1995; Prince and Salas, 1997). The model is presented in Figure 1.
After the onset of an unexpected event in the environment, the process of Situation Assessment occurs. The result is a Situation Model, comprising the perceived state of the situation (i.e. Situation Awareness), and a Goal Model, the desired state of the situation as determined by procedures and doctrines (e.g. maintaining the safety of the aircraft and passengers). Workload, stress, time pressure and uncertainty mediate the quality of the Situation and Goal Models.
A comparison is then made between the Situation Model and the Goal Model to determine the extent of the threat. The level of discrepancy between the two also dictates the amount of intervention required to reach the goal. In other words, no discrepancy means that the current course of action will reach the goal, whilst a large discrepancy indicates that intervention is required to ensure that the goal is reached. It is assumed that when an unexpected event occurs in the environment, and is accurately assessed by the pilot, its effect is to cause a large discrepancy between the perceived and desired states. This concept is similar to Finnie and Taylor's (1998) IMPACT (Integrated Model of Perceived Awareness ConTrol) model, which argues that the acquisition and maintenance of SA is derived from behaviour directed at reducing the mismatch between the perceived level of SA and the desired level of SA.
The problem (i.e. the intervention needed to resolve the discrepancy between the desired and actual states) is now represented in memory and is compared to existing schema in long-term memory. If the pilot feels that he or she has insufficient information to form a Situation Model and/or Goal Model, further Situation Assessment is undertaken.
Experienced decision-makers working under time pressure report that they use recognition-based rather than evaluation-based decision-making strategies, acting and reacting on the basis of prior experience rather than comparing decision options through formal or statistical methods. Recognition-primed decision making fuses two processes: situation assessment and mental simulation. Situation assessment generates a plausible plan of action, which is then evaluated by mental simulation (Klein, 1993). In line with Klein, the model proposes that schema attempt to fit expected perceptions and are fine-tuned by experience, in both a bottom-up and a top-down fashion. In bottom-up processing, information about perceived events is mapped against existing schema on the principle of best fit, whereas in top-down processing anomalies of fit are resolved by fine-tuning the evoked schema in the light of perceived evidence, or by initiating searches to fit the newly changed schema structure (Klein, 1993).
As the event is only classed as a threat if there are no well-defined procedures, clearly no schema will exist to assist the pilot in resolving the threat situation (i.e. bottom-up). With no match to schema in memory, the pilot is faced with producing a bespoke Crisis Plan in order to stabilise the situation (i.e. top-down). However, although partially matching schema may share surface features with the problem, they may not be appropriate to use. A course of action is decided upon and then acted out. Once again, the situation is re-assessed and the resultant Situation Model is compared to the desired goal. This process is repeated until the bespoke Crisis Plan creates an intervention that does have a schema match in memory. At this point, the event ceases to be a threat and can be managed using existing procedures, or Resolution Plans. The difference between Crisis and Resolution Plans can be couched in terms of Rasmussen's model of human performance. On the one hand, Resolution Plans operate on a rule-based, recognition basis, where behaviours for known tasks are retrieved as required. On the other hand, Crisis Plans operate at a knowledge-based level, for which no know-how rules are available from previous encounters, necessitating a higher conceptual level of control where goals and strategies are explicitly considered.
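To make the cyclical structure of the model concrete, the sketch below expresses the assess, compare, plan, act and re-assess loop in code. This is purely illustrative: CAPT-M is a conceptual description of pilot cognition, not an algorithm, and the class names, the feature-overlap measure used for schema matching, the fit threshold and the toy scenario are all assumptions introduced here for illustration.

```python
# Illustrative sketch of the CAPT-M loop; not the authors' implementation.
from dataclasses import dataclass


@dataclass(frozen=True)
class Schema:
    name: str
    features: frozenset    # cues this situation type is expected to present
    resolution_plan: str   # the well-practised, rule-based response


def best_fit(situation_model, schemata):
    """Bottom-up matching of perceived cues against schema in memory, on a best-fit basis."""
    scored = [(len(s.features & situation_model) / len(s.features), s) for s in schemata]
    return max(scored, key=lambda pair: pair[0], default=(0.0, None))


def manage_threat(environment, goal_model, schemata, execute, threshold=0.5, max_iter=10):
    """The cyclical assess -> compare -> plan -> act -> re-assess loop described by CAPT-M."""
    for _ in range(max_iter):
        situation_model = frozenset(environment)        # Situation Assessment
        discrepancy = goal_model - situation_model      # compare Goal and Situation Models
        if not discrepancy:
            return "no discrepancy: current course of action reaches the goal"
        fit, schema = best_fit(situation_model, schemata)
        if schema is not None and fit >= threshold:
            action = schema.resolution_plan             # recognised situation: Resolution Plan
        else:                                           # no adequate schema match: bespoke,
            action = f"crisis plan for {sorted(discrepancy)}"  # knowledge-based Crisis Plan
        environment = execute(environment, action)      # act, then re-assess on the next pass
    return "threat unresolved within the iteration limit"


# Toy usage: at first nothing matches well, so Crisis Plans are generated; once enough of
# the goal state has been achieved, the diversion schema fits and a Resolution Plan takes over.
memory = [Schema("diversion", frozenset({"fault contained", "diversion planned"}),
                 "run diversion checklist")]
goal = frozenset({"fault contained", "diversion planned", "cabin briefed"})

def execute(env, action):
    missing = goal - frozenset(env)                     # pretend each action achieves one cue
    return frozenset(env) | {sorted(missing)[0]}

print(manage_threat(frozenset(), goal, memory, execute))
```

The point of the sketch is only the control structure: the loop terminates when the discrepancy between the Goal and Situation Models disappears, and the choice between a Resolution Plan and a Crisis Plan hinges on whether any stored schema fits the perceived situation well enough.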
The Present Study: The present study attempted to identify the core behaviours that pilots use to manage threat situations successfully, and to examine how these behaviours are perceived in terms of both their usefulness to threat management and their importance to training. The study used a variety of user-consultation techniques, such as group discussion and questionnaire-based methods, with 34 commercial airline pilots. Participants were asked to respond to questions relating to the strategies that they use to manage threat situations. These responses were used to help validate the proposed model of Threat Management (CAPT-M). In addition, a number of demographic measures were taken.
Method
For the purposes of this study, a formal definition of the term 'threat' was used. Preliminary studies had shown little consensus between pilots when asked to define a threat. To ensure consistency of responses between the participants in this study, the following definition was used: a threat is an unexpected and potentially life-threatening chain or combination of events, causing uncertainty of action and time pressure. This can range from a hazardous situation to a major crisis.
A group discussion between all participants was held at the beginning of the experimental session to clarify what is meant by threat and threat management. Participants were then asked to answer the questionnaires in light of a situation they had experienced which was representative of this agreed definition.
Demographics: Participants were asked to give the following information:
Flying Hours: specifically, the number of commercial, military and private flying hours.
Types of Aircraft Flown: specifically, the aircraft model and type, and the number of hours (both as Captain and First Officer) flown in each. From these data, the number of hours flown in glass, hybrid and steam cockpits was calculated.
Validating the Threat Management model: Participants were asked to answer questions relating to threat management strategies they had used in the past to manage threat events. They were asked to indicate their agreement or disagreement with a number of statements (see below) in light of a recent event they had encountered that was representative of our definition of a threat. The scale was constructed in a five-point Likert format: Strongly Disagree, Disagree, Neutral, Agree and Strongly Agree. Participants' responses to these questions were also used to validate a model of threat management. Unbeknownst to the participants, these questions mapped directly onto the components of the threat management model. Thus, participant responses to each of these statements were used as evidence for, or against, the stages of the proposed model of threat management.
Stage of Model | Statement
Situation Assessment | It is important to make an assessment of the current state of the situation.
Situation Model (or perceived state) | It is important to hold a representation of the current state of the situation in my mind.
Goal Model (or desired state) | It is important to hold a representation of my current goal or goals in my mind.
Comparison of Goal and Situation Model | It is important to reflect on the differences between where I am now and where I want to be.
Schema in LTM | It is important to compare my perception of the current situation with past experiences.
Resolution Plan | It is important to take a course of action that I have encountered before, rather than embark on an unknown one.
Crisis Plan | It is important to take a course of action that I have not encountered before, rather than do nothing.
Action Scripts | It is important to formalise the details of the action before instigating them.
Iterations | It is important to re-assess the current situation once any action has been performed.
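The table above maps each questionnaire statement onto a stage of the CAPT-M model. Purely as an illustration of how such an instrument might be represented and scored, the following sketch encodes the mapping as a dictionary and converts one hypothetical participant's answers into numeric scores, using the five-point coding (SD=1 through SA=5) given later in the Results; the variable names and the example answers are assumptions, not part of the original instrument.

```python
# Illustrative encoding of the stage-to-statement mapping and the Likert responses.
LIKERT = {"SD": 1, "D": 2, "N": 3, "A": 4, "SA": 5}

STATEMENTS = {
    "Situation Assessment": "It is important to make an assessment of the current state of the situation.",
    "Situation Model": "It is important to hold a representation of the current state of the situation in my mind.",
    "Goal Model": "It is important to hold a representation of my current goal or goals in my mind.",
    "Comparison of Goal and Situation Model": "It is important to reflect on the differences between where I am now and where I want to be.",
    "Schema in LTM": "It is important to compare my perception of the current situation with past experiences.",
    "Resolution Plan": "It is important to take a course of action that I have encountered before, rather than embark on an unknown one.",
    "Crisis Plan": "It is important to take a course of action that I have not encountered before, rather than do nothing.",
    "Action Scripts": "It is important to formalise the details of the action before instigating them.",
    "Iterations": "It is important to re-assess the current situation once any action has been performed.",
}

def score_participant(responses):
    """Convert one participant's SD..SA answers into numeric scores, keyed by model stage."""
    return {stage: LIKERT[answer] for stage, answer in responses.items()}

# Hypothetical participant who answered "Agree" to every statement.
print(score_participant({stage: "A" for stage in STATEMENTS}))
```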
Skills important to Threat Management: In addition, participants were asked to rate their agreement with a number of skills (see below) for how important they are to threat management. Once again, the scale was constructed in a 5 point Likert format: Strongly Disagree, Disagree, Neutral, Agree and Strongly Agree.
1. Situation Assessment
2. Risk Assessment
3. Risk Taking
4. Experience
5. Team-work
6. Inter-personal communication
7. Leadership
8. Communication
9. Checklist Management
10. Systems Knowledge
11. Task Management
12. Attention Management
13. Aircraft Energy Awareness
14. Option generation
15. Option selection
Trainability of Threat Management skills: Finally, participants were asked to rate their agreement with a number of skills that could be trained to improve threat management (see below). Once again, the scale was constructed in a 5 point Likert format: Strongly Disagree, Disagree, Neutral, Agree and Strongly Agree.
1. Situation Assessment
2. Risk Assessment
3. Team-work
4. Verbal Communication
5. Leadership
6. Non-verbal Communication
7. Checklist Management
8. Systems Knowledge
9. Task Management
10. Attention Management
11. Aircraft Energy Awareness
12. Option generation
13. Option selection
14. Task Prioritisation
15. Workload Management
Results
Demographics: Participants were 34 flight-crew employees of a major international airline (33 male and 1 female). Of the sample, 18 were Captains, 10 were First Officers and 6 were Flight Engineers. All participants were English-speaking British nationals.
The numbers of hours flown as Captain, First Officer and Flight Engineer were as follows:
Position | Total Hours of Sample | Mean Hours of Sample
Captain | 82320 | 4573 (3495.36)
First Officer | 156730 | 6269 (2599.79)
Flight Engineer | 59580 | 9930 (4108.21)
(Standard Deviations in brackets)
The numbers of hours flown in steam, hybrid and glass cockpits were as follows:
Cockpit | Total Hours of Sample | Mean Hours of Sample
Steam | 237135 | 7411 (4790.81)
Hybrid | 11050 | 1842 (867.42)
Glass | 41220 | 2576 (1981.71)
(Standard Deviations in brackets)
The flying hours of the participants were as follows:
Type | Total Hours of Sample | Mean Hours of Sample
Commercial | 293650 | 8637 (3961.05)
Military | 9303 | 274 (762.86)
Private | 22152 | 651 (955.60)
(Standard Deviations in brackets)
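As a rough arithmetic check on the tables above (a sketch only), dividing each reported total by the corresponding mean should approximately recover the number of participants contributing hours to that row, for example 18 Captains, 6 Flight Engineers, and 34 participants in the commercial/military/private breakdown; for rows where that count is not stated in the paper (such as hours flown as First Officer), the division simply yields an implied number of contributors.

```python
# Sanity check: total hours / mean hours should be close to the number of contributors.
rows = {
    "Captain":         (82320, 4573),
    "First Officer":   (156730, 6269),
    "Flight Engineer": (59580, 9930),
    "Commercial":      (293650, 8637),
    "Military":        (9303, 274),
    "Private":         (22152, 651),
}

for label, (total, mean) in rows.items():
    implied_n = total / mean
    print(f"{label:>15}: total {total:>7} / mean {mean:>5} = {implied_n:5.1f} contributors (approx.)")
```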
Validating the Threat Management model: Participants were asked to indicate their agreement or disagreement with a number of statements relating to threat management, specifically in the context of a recent threat event. Unbeknownst to the participants, these questions mapped directly onto the components of the threat management model. Participant responses to the statements (and the stage of the model they represent) are presented below. The scoring of these items was SD=1, D=2, N=3, A=4, SA=5.
Stage of Model | Statement | Mean | Standard Deviation
Situation Assessment | I made an assessment of the current state of the situation | 4.3 | 0.95
Situation Model (or perceived state) | I consciously thought through the current state of the situation | 3.9 | 0.93
Goal Model (or desired state) | I held a representation of my current goal or goals in my mind | 4.0 | 0.69
Comparison of Goal and Situation Model | It is important that I reflected on the differences between where I was and where I wanted to be | 3.9 | 0.80
Schema in LTM | I compared my perception of the current situation with past experiences of similar situations | 3.8 | 1.02
Resolution Plan | It was important that I took a course of action that I had encountered before, rather than embarked on an unknown one | 2.8 | 1.04
Crisis Plan | It was important that I took a course of action that I hadn't encountered before, rather than doing nothing | 3.2 | 1.19
Action Scripts | I formalised the details of the action before I instigated them | 3.2 | 1.00
Iterations | I re-assessed the current situation once any action had been performed | 4.3 | 0.68
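For readers who wish to see how per-statement descriptives of the kind tabled above, and the one-way within-subjects analysis of variance reported next, could be computed from raw Likert scores, the following is a minimal sketch using synthetic data. It is not the authors' data or analysis code; statsmodels' AnovaRM is used here only as one possible way of running a repeated-measures ANOVA, and the column names are assumptions.

```python
# Illustrative analysis sketch with synthetic data; not the study's actual data or code.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
statements = ["Situation Assessment", "Situation Model", "Goal Model",
              "Comparison", "Schema in LTM", "Resolution Plan",
              "Crisis Plan", "Action Scripts", "Iterations"]

# Long-format table: one row per (participant, statement) pair, scores on the 1-5 scale.
records = [
    {"pilot": p, "statement": s, "score": int(rng.integers(1, 6))}
    for p in range(34)            # 34 participants, as in the study
    for s in statements
]
df = pd.DataFrame(records)

# Per-statement descriptives (cf. the means and standard deviations tabled above).
print(df.groupby("statement")["score"].agg(["mean", "std"]).round(2))

# One-way within-subjects (repeated-measures) ANOVA across the nine statements.
print(AnovaRM(df, depvar="score", subject="pilot", within=["statement"]).fit())
```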
A one-way within-subjects analysis of variance showed significant differences between the ratings for the nine statements, F(8,305)=10.87, p