Completed Projects

There is growing evidence that clinicians’ workload adversely affects clinical performance and contributes to adverse events and lower quality of care. Unfortunately, current methods of measuring clinical workload are crude (e.g., nurse-patient staffing ratios), retrospective (based on the volume of work units performed), and do not apply to the unique one-patient-at-a-time model of care delivered in the operating room (OR). To address this problem, colleagues at Vanderbilt University and the Tennessee Valley VAMC have piloted an instrument, the Quality and Workload Assessment Tool (QWAT), to measure the perceived clinical workload of individual nurses, surgeons, and anesthesia providers, as well as that of the surgical team as a whole. The QWAT also elicits data about intraoperative “non-routine” events (NREs). NREs represent deviations from optimal care and thus may be a measure of quality of care. A conceptual model has been developed suggesting that performance-shaping factors (e.g., clinician fatigue or stress) contribute to clinical workload that, in turn, affects patient outcomes. In this multi-center project, we are testing this conceptualization of surgical clinical workload and its mediating effects on intraoperative quality of care and patient safety.
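
As a minimal illustration of how the mediating role of workload in this conceptual model might be tested, the sketch below fits the standard regressions of a simple mediation analysis in Python; the data file, variable names, and modeling choices are hypothetical and do not reflect the study’s actual analysis.

    # Hypothetical sketch of a simple mediation analysis: does perceived workload
    # (e.g., a QWAT score) mediate the effect of a performance-shaping factor such
    # as fatigue on intraoperative non-routine events? All names are illustrative.
    import pandas as pd
    import statsmodels.formula.api as smf

    cases = pd.read_csv("qwat_cases.csv")  # one row per surgical case (illustrative)

    total = smf.ols("nre_count ~ fatigue", data=cases).fit()               # path c
    path_a = smf.ols("workload ~ fatigue", data=cases).fit()               # path a
    path_bc = smf.ols("nre_count ~ workload + fatigue", data=cases).fit()  # paths b, c'

    indirect = path_a.params["fatigue"] * path_bc.params["workload"]
    print(f"total effect of fatigue:    {total.params['fatigue']:.3f}")
    print(f"indirect effect (workload): {indirect:.3f}")

In practice, a count outcome such as NREs would likely call for a Poisson or negative binomial model and a bootstrap test of the indirect effect, but the structure of the comparison is the same.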

This project will contribute significantly to our understanding of the factors affecting the conduct and quality of surgical care and will be an initial step toward identifying early warning signs of suboptimal and unsafe processes. These results will provide a more rational basis for improving working conditions, clinician training and staffing, care processes, and technology design.

New technologies are expanding the amount of information available to health care practitioners. In the perioperative environment, this includes an increase in patient-monitoring data. The dynamic nature of the perioperative environment makes it especially susceptible to problems of information overload. There is a need for a holistic, human-centered approach to the analysis and redesign of perioperative information displays. The main hypothesis of this research is that the application of human factors design principles and the use of a human-centered design process will lead to perioperative information displays that improve patient care compared to current systems.

This research project involves four main components:

  1. Identification of human factors design principles based on contemporary theories of human decision making, situation awareness, and teamwork that are relevant to the dynamic, mobile, risky, team-based, and information-rich perioperative environment.
  2. The application of cognitive task analyses and knowledge elicitation methods to identify the important information requirements for the perioperative environment.
  3. The design of perioperative information management systems using a human-centered approach that includes a process of iterative user evaluation and redesign.
  4. Comparison of the new designs with conventional perioperative information displays under anesthesia crisis management scenarios using a human patient simulator.

In addition to the potential to improve patient safety through better information management in the perioperative environment, the results of this effort will have implications for:

  1. Training in the perioperative environment.
  2. System design in other dynamic, safety-critical health care environments.

Funding: Agency for Healthcare Research and Quality (AHRQ).

One major limitation in the use of human patient simulators in anesthesiology training and assessment is a lack of objective, validated measures of human performance. Objective measures are necessary if simulators are to be used to evaluate the skills and training of anesthesia providers and teams or to evaluate the impact of new processes or equipment design on overall system performance. There are two main goals of this project. The first goal is to quantitatively compare objective measures of anesthesia provider performance with regard to their sensitivity to both provider experience and simulated anesthesia case difficulty. We are comparing previously validated measures of anesthesia provider performance to two objective measures that are fairly novel to the environment of anesthesia care: an objective measure of provider situation awareness and a measure of provider eye scan patterns. The second goal of this project is to qualitatively evaluate the situation awareness and eye tracking data to identify key determinants of expertise in anesthesia providers. These determinants of expertise may then be used to further enhance objective measures of performance as assessment tools and to inform training of anesthesia providers.
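
As a simplified, hypothetical illustration of the kind of eye-scan summary such a comparison might draw on, the sketch below aggregates fixations into per-region dwell time and fixation counts; the areas of interest and fixation records are assumed for illustration only.

    # Hypothetical sketch: summarizing eye-tracking fixations into dwell-time and
    # fixation-count metrics per area of interest (AOI). AOI names and the
    # fixation records are illustrative, not actual study data.
    from collections import defaultdict

    fixations = [
        ("patient_monitor", 420),     # (AOI, fixation duration in ms)
        ("surgical_field", 310),
        ("anesthesia_machine", 180),
        ("patient_monitor", 250),
    ]

    dwell_ms = defaultdict(int)
    counts = defaultdict(int)
    for aoi, duration in fixations:
        dwell_ms[aoi] += duration
        counts[aoi] += 1

    total = sum(dwell_ms.values())
    for aoi, ms in sorted(dwell_ms.items(), key=lambda kv: -kv[1]):
        print(f"{aoi}: {counts[aoi]} fixations, {ms} ms ({ms / total:.0%} of dwell time)")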

Funding: Anesthesia Patient Safety Foundation; recipient of the Ellison C. Pierce, Jr. Education Research Award (http://www.apsf.org/grants/recipients.mspx).

Effective team coordination is critical for the safe delivery of care. Development of these skills requires training and practice in an interactive team-based environment. A three-dimensional serious game environment provides an engaging and cost-effective alternative to other interactive training solutions such as human patient simulation. In this project we are developing a three-dimensional interactive networked system (3DiMD) for training military health care team coordination skills. Although this is a development project with no formal experimental hypothesis, we will conduct qualitative evaluations of the design specification and of alpha and beta versions of a demonstration prototype. Ease of use, practicality, scope, and effectiveness of the training system will be assessed through heuristic analysis by experts in team training.

Our specific aims include:

  • Development of an immersive environment software platform for training of health care skills.
  • Content development for the training of team coordination skills.
  • Prototyping of the 3D immersive environment using a military trauma scenario as a proof of concept.
  • Planning for an assessment of ease of use and efficacy through an experimental trial at Duke University Medical Center.

The development of 3DiMD will provide an effective solution to the problem of expanding the scope of team coordination skill training in military health care environments. In addition, the software platform we develop will allow for the integration of multiple scenarios and work environments (e.g., training modules) to allow expansion into public health care environments.

Funding: Telemedicine and Advanced Technology Research Center, US Army Medical and Materiel Command.

Research suggests that training of team coordination skills will be most effective when it incorporates opportunities for interactive practice of those skills in realistic work environments. There is little experimental evidence assessing and comparing the impact of different interactive training approaches with respect to improving the team coordination skills of participants. A multidisciplinary team of researchers at Duke University Medical Center, in collaboration with University of North Carolina Health Care, is developing several simulation approaches toward interactive training of health care team coordination. In addition, Duke University Medical Center and Virtual Heroes, Inc. are developing a 3D-interactive networked virtual reality team training tool (3DiTeams).

The primary objective of this project is to assess methods of interactive team training in order to design and implement a health care team training program that:

  1. Is cost-effective.
  2. Can be feasibly implemented in clinical work and professional health care education environments.
  3. Has a substantial impact on the team coordination skills of trainees.

Specific aims include:

  1. Pilot testing of 3DiTeams as an alternative to traditional interactive team training.
  2. Experimental comparison of participants’ improvement in team knowledge and behaviors following training using 3DiTeams and an alternative interactive team training approach (e.g., high-fidelity patient simulation).
  3. Evaluation of the resulting experimental data along with realistic cost estimates to design and pilot test an efficient and effective team training program within Duke University Health System.

This research has the potential to significantly advance the delivery and distribution of effective team coordination training. Resulting experimental evidence should assist health care organizations in choosing or developing methods of training health care team skills. This research will also provide information needed to support a long-term goal of developing health care team training that will be exportable beyond Duke. More importantly, the improvements in health care team training that result from this research are expected to have a broader impact on public health through the reduction of health care adverse events and enhancement of patient safety.

Funding: Agency for Healthcare Research and Quality (AHRQ).

Clinical trials play an important role in the advancement of medical care. Over $6 billion is spent annually on clinical research, and clinical protocols are increasingly complex. Data inaccuracy in the early phase of a new research trial is a commonly known but incompletely described component of clinical research. These errors are likely a result of coordinators lacking mastery of the knowledge, skills, and attitudes needed to properly conduct the research protocol. Most historical studies on research integrity consider ethical issues associated with clinical trials and protocol design; little attention has been paid to issues of data integrity and patient safety in the proper conduct of a trial. Modern learning theory stresses the importance of interactivity, and simulation is considered a leading methodology for learning complex behaviors. The use of high-fidelity patient simulation in clinical research training is expected to improve coordinators’ acquisition of the knowledge, skills, and behaviors needed to properly perform a protocol, and enhanced coordinator performance is expected to lead to improved data integrity and heightened patient safety. Recent efforts in our laboratory demonstrate marked improvement in coordinators’ confidence in their ability to properly conduct a trial following high-fidelity simulation training. Objective measures of coordinator competence are now needed to properly assess the effects of simulation training on research integrity. One objective method of assessing coordinator competence is through queries of errors in the protocol’s data record. The goal of this study was first to define a taxonomy for categorizing and quantifying error rates in a reproducible fashion; the taxonomy was then used to define error rates and the learning curve in a recently completed multi-center trial. This project provides the groundwork for future studies investigating the impact of high-fidelity simulation on the competent performance of a clinical trial.
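
As a minimal sketch of how error rates might be tabulated once such a taxonomy exists, the example below counts detected errors by category against the number of audited data fields; the categories and figures are hypothetical, not the taxonomy or rates from this study.

    # Hypothetical sketch: tabulating clinical-trial data-record error rates by
    # taxonomy category. Categories and counts are illustrative only.
    from collections import Counter

    total_fields_audited = 2400          # hypothetical number of audited data fields
    errors_found = [                     # one taxonomy category per detected error
        "omission", "transcription", "omission", "protocol_deviation",
        "transcription", "omission",
    ]

    by_category = Counter(errors_found)
    overall_rate = len(errors_found) / total_fields_audited

    print(f"overall error rate: {overall_rate:.2%}")
    for category, n in by_category.most_common():
        print(f"  {category}: {n} ({n / total_fields_audited:.2%} of audited fields)")

Tracking these rates by case enrollment order would also expose the learning curve described above.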

Funding: Office of Research Integrity / National Institute of Neurological Disorders and Stroke.

As part of her doctoral dissertation, Dr. Noa Segall (under the direction of Dr. David Kaber, North Carolina State University, and the guidance of Dr. Wright and Dr. Taekman in the Duke Human Simulation and Patient Safety Center) developed a computerized decision support system for the detection, diagnosis, and treatment of perioperative myocardial ischemia and infarction (MI).

The development approach involved:

  1. Performing a hierarchical task analysis to identify anesthetist procedures in detecting, diagnosing, and treating MI.
  2. Carrying out a goal-directed task analysis to elicit goals, decisions, and information requirements of anesthetists during this crisis management procedure.
  3. Coding the information collected in the task analyses using a computational cognitive model.
  4. Prototyping an interface to present output from the cognitive model using ecological interface design principles.

Validation of the decision support tool involved subjective evaluations of the tool and its interface design through an applicability assessment and a usability inspection. For the applicability assessment, three expert anesthesiologists were recruited to observe the tool’s performance during two hypothetical scenarios, hypotension and MI. They provided feedback on the clinical accuracy of the information presented, and all three experts indicated that, once further refined, they would use the tool in the operating room. Heuristic evaluation was employed to inspect the usability of the interface: two usability experts and the three anesthesiologists were asked to identify human-computer interaction design heuristics that were violated in the interface and to describe the problems identified. The reviewers commented on the use of fonts and colors, medical terminology, organization of information, and more. Future efforts by our research team will incorporate the use of more flexible interface design software to develop user interfaces with a wider range of presentation and interaction methods.

Funding: Department of Anesthesiology, Duke University Medical Center; Department of Industrial Engineering, North Carolina State University

Training of health care research personnel is a critical component of quality assurance in clinical trials. Interactive methods, such as simulation, are desirable supplements to traditional methods of teaching. We studied clinical research coordinators’ subjective assessments of their confidence following an interactive simulation used as a supplement to standard training methods. Our initial evaluations revealed that ratings of confidence increased significantly after the simulation exercise compared to pre-exercise ratings, with significant improvements across all three domains of Bloom’s Taxonomy: affective, psychomotor, and cognitive. We suggest that simulation exercises should be considered when training study coordinators for complex clinical research trials.
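
For readers interested in how such pre/post confidence ratings can be compared, the sketch below applies a paired nonparametric test to hypothetical Likert-style ratings; it illustrates the general approach rather than the study’s actual analysis.

    # Hypothetical sketch: paired comparison of coordinators' pre- and
    # post-simulation confidence ratings (e.g., 5-point Likert items).
    # Ratings are made up for illustration.
    from scipy.stats import wilcoxon

    pre  = [2, 3, 2, 3, 2, 3, 3, 2, 3, 2]
    post = [4, 4, 3, 5, 4, 4, 4, 3, 4, 4]

    stat, p = wilcoxon(pre, post)
    print(f"Wilcoxon signed-rank: statistic = {stat:.1f}, p = {p:.3f}")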

Funding: Department of Anesthesiology, Duke University Medical Center.

This project, funded by the Anesthesia Patient Safety Foundation (APSF), evaluates perioperative data for effects of time of day and surgery duration on the incidence of anesthetic adverse events (AEs). While the effects of fatigue on clinical performance are measurable, these decrements in performance have not been clearly linked to adverse clinical outcomes for patients. Potential risk factors for anesthetic mishaps that may be associated with fatigue include the time of day that surgery takes place and the duration of surgery. Data from a perioperative “Quality Improvement” (QI) database used by Duke University Medical Center, containing details from more than 86,000 surgical procedures, are being coded and analyzed for time-of-day and surgery-duration effects. The project involves review of the QI event labels and associated free text by anesthesiologists to categorize QI documentation into specific types of AEs. Additional risk factors for AEs, such as patient characteristics and surgical complexity, will be included in the analysis as covariates.
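
As a sketch of the kind of adjusted model such an analysis might use, the code below fits a logistic regression of adverse-event occurrence on start hour and case duration with patient covariates; the column names, data file, and covariates are hypothetical.

    # Hypothetical sketch: modelling adverse-event (AE) incidence as a function of
    # case start hour and duration, adjusting for covariates such as patient age
    # and ASA physical status. Column names and the data file are illustrative.
    import pandas as pd
    import statsmodels.formula.api as smf

    cases = pd.read_csv("qi_cases.csv")   # one row per surgical case (illustrative)

    model = smf.logit(
        "adverse_event ~ C(start_hour_bin) + duration_hours + age + C(asa_status)",
        data=cases,
    ).fit()
    print(model.summary())
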
Funding: Anesthesia Patient Safety Foundation (APSF) — http://www.gasnet.org/societies/apsf/grants/current.php

This project, funded by the National Board of Medical Examiners (NBME), seeks to evaluate assessment tools used in other team performance contexts for the measurement of medical student teamwork skills within a small group cooperative learning environment and in a simulated patient care environment. Researchers in the health care industry are increasingly aware of the importance of teamwork skills and advocate a wide variety of training programs related to team coordination. However, these programs tend to focus on specialty and continuing education. While the assessment of medical students has covered areas such as interpersonal and communication skills, these assessments generally focus on the student’s interaction with the patient and do not assess team skills in relation to working with other health care providers.

In this research, we hope to answer the following questions:

  1. Will assessment tools used in other team performance contexts adequately assess individual medical student team skills?
  2. Can these skills be assessed in naturally occurring team learning environments?
  3. Do the results of the teamwork skills assessments reflect actual team performance or outcome in scenarios using a human patient simulator?

Assessment measures to be evaluated include self-ratings of team skills, peer ratings of team skills, observer ratings of team skills, and analysis of communication content. We will compare results of these measures in the small group environment with measures in a simulated team patient care exercise. We will also investigate the relationship between individual team skill assessment measures and objective measures of team performance in the simulated scenario, to determine whether the skill assessment measures are predictive of actual care performance.
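
A minimal, hypothetical sketch of the predictive-validity check described above appears below, correlating individual team-skill ratings with an objective simulator performance score; the scores are invented for illustration.

    # Hypothetical sketch: do individual team-skill ratings predict objective team
    # performance in a simulated scenario? Scores are illustrative only.
    from scipy.stats import spearmanr

    team_skill_rating = [3.2, 4.1, 2.8, 3.9, 4.4, 3.0, 3.6]   # e.g., mean observer rating
    sim_performance   = [62, 78, 55, 74, 81, 60, 70]          # e.g., scenario checklist score

    rho, p = spearmanr(team_skill_rating, sim_performance)
    print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
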
Funding: National Board of Medical Examiners (NBME) — http://www.nbme.org/research/stemmler2003_2004.asp

The Duke Human Simulation and Patient Safety Center has been involved in usability testing of medical devices such as infusion pumps. Ongoing efforts seek to assess the usability of medical devices under stressful conditions (such as high time pressure) that can be simulated using a human patient simulator.
