Patient Safety Tip of the Week

August 7, 2007    

Role of Maintenance in Incidents

Many of the most famous disasters in industrial history have followed equipment or facilities maintenance activities, whether planned, routine, or problem-oriented. Well-known examples include Chernobyl, Three Mile Island, the Bhopal chemical release, and a variety of airline incidents and oil and gas explosions. It is unknown how often maintenance activities contribute to medical incidents but, given the similarity of systems in medicine to those in other high-risk industries, it is likely that there are many cases in which maintenance errors contribute to adverse patient outcomes.

James Reason and Alan Hobbs, in their 2003 book “Managing Maintenance Error: A Practical Guide,” do an outstanding job of describing the types of errors encountered in maintenance activities, where and under what circumstances the various types of error are likely to occur, and steps to minimize the risks.

Omissions are the single largest category of maintenance errors. They are especially likely to occur after interruptions or distractions. We frequently “lose our place” when performing a series of actions and then either unnecessarily repeat a step or omit one or more steps altogether. Omissions are also particularly prone to occur near the end of a sequence of steps. This may be part of the “premature exit” phenomenon, in which one is already thinking about the next activity and leaves out a step in the current one.

The book has an excellent discussion of error-provoking factors that come into play at both the individual and team levels, including how to recognize them and how to deal with them. It has a particularly good discussion of violations (intentional deviations from standards), the reasons behind them, and a useful approach one company took to reduce them.

Don Norman’s work on the design of systems is cited, and the importance of involving end-users in the equipment purchasing phase is emphasized. This, of course, helps the organization identify some of the safety issues that will arise with new equipment. Hospitals and healthcare facilities need to adhere to that principle more often. The authors especially stress the end-user’s role in understanding equipment with multiple modes (that is, controls that do different things depending upon what “mode” the machine is in), an understanding frequently lacking in healthcare settings. “Automation surprises” (such as this mode confusion issue) are frequently mentioned as root causes in the aviation safety literature but probably occur just as often in healthcare.

A section on omission-provoking features more than justifies buying this book. It includes an annotated “task step checklist” that will help your organization identify omission-prone tasks and better manage them. It also includes a discussion of the characteristics of a good reminder (many of these characteristics are incorporated into the ISMP guideline on good labels for high-alert drugs that we discussed in last week’s Tip of the Week).

In healthcare, we talk about the importance of reporting near-misses and other issues proactively to help prevent errors and adverse outcomes. We usually stress the importance of developing anonymous reporting systems or error-reporting hotlines. This book describes an interesting method probably not widely used in healthcare: the MESH (Managing Engineering Safety Health) system. MESH is a sampling tool given to randomly selected frontline workers to rate, weekly or monthly, a number of factors affecting the local workplace environment or the more general organizational environment. The resulting cumulated local factor profile identifies those factors occurring with sufficient frequency to help direct limited resources to areas in which ROI is likely to be high and to help prioritize safety and quality goals (a simple sketch of that aggregation appears below). This book also has a good description of “just culture” (one in which the vast majority of reported errors are not punished, but action is still taken in the rare cases of reckless behavior) and a very good description of the attributes of a successful reporting system. And it ends with a great discussion of the nature of “safety culture” and the “resilient” organization.
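For readers who want to see the mechanics, here is a minimal sketch (in Python) of the kind of rating aggregation the MESH approach implies. The factor names, the 1-to-5 rating scale, and the action threshold are purely hypothetical illustrations, not the actual instrument Reason and Hobbs describe:

import random
from collections import defaultdict

def sample_raters(staff, n=5):
    # Randomly select frontline workers to rate this period's factors.
    return random.sample(staff, min(n, len(staff)))

def cumulate_profile(all_ratings):
    # Average each factor's ratings across all sampled responses.
    # all_ratings: list of dicts mapping factor -> rating, where
    # 1 = no problem and 5 = serious problem (a hypothetical scale).
    totals = defaultdict(float)
    counts = defaultdict(int)
    for ratings in all_ratings:
        for factor, score in ratings.items():
            totals[factor] += score
            counts[factor] += 1
    return {f: totals[f] / counts[f] for f in totals}

def prioritize(profile, threshold=3.0):
    # Flag factors rated problematic often enough to act on, worst
    # first -- the "cumulated local factor profile" idea.
    flagged = [(f, s) for f, s in profile.items() if s >= threshold]
    return sorted(flagged, key=lambda fs: fs[1], reverse=True)

# Example with three respondents' hypothetical weekly ratings:
ratings = [
    {"staffing": 4, "time pressure": 5, "alarm management": 2},
    {"staffing": 3, "time pressure": 4, "handoff quality": 3},
    {"staffing": 5, "time pressure": 4, "equipment availability": 1},
]
print(prioritize(cumulate_profile(ratings)))
# -> [('time pressure', 4.33), ('staffing', 4.0), ('handoff quality', 3.0)]
#    (values rounded)

The point of the design is simply that periodic sampling, rather than waiting for incident reports, surfaces the problem factors that recur often enough to be worth fixing first.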

The book also has the overview of human factors, descriptions of the various human error types, and models of organizational accidents that you’d expect of any James Reason book. It gives real-life examples of incidents from several industries. Many of our “big three” issues (failed handoffs or other communication failures, failure to buck the authority gradient, and failure to heed alarms) are contributing factors in those examples. But the authors also emphasize other aspects that we think about less often in healthcare but clearly need to integrate into our thinking.

Quite frankly, most of the lessons in this book apply not just to maintenance activities but to any process or procedure involving multiple steps. The caveat that steps near the end of a maintenance procedure are the most likely to be omitted (“premature exit”), or that violations tend to occur under time pressure to complete a task, certainly applies to many things we do in healthcare, not just equipment maintenance. The same issues could just as easily apply to a surgical case in the operating room or the delivery of chemotherapy on a medical unit.

References:

Reason J, Hobbs A. Managing Maintenance Error: A Practical Guide. Aldershot, England: Ashgate Publishing Limited; 2003.

Norman DA. The Design of Everyday Things. New York: Doubleday; 1989 (paperback edition: Basic Books, 2002).

Norman DA. The Design of Future Things. New York: Basic Books; 2007.