Time for another of our traditional holiday book reviews. Many of you involved in patient safety are familiar with some of Sidney Dekker’s previous books, “The Field Guide to Human Error Investigations” (2002) and “The Field Guide to Understanding Human Error” (2006). Though much of his experience has been in the aviation industry, Dekker’s work in human factors and cognitive systems engineering is directly applicable to medicine and patient safety (see our April 2007 What’s New in the Patient Safety World column “New Sidney Dekker Book”).
Now he has written a book that focuses on patient safety from a human factors perspective: Patient Safety: A Human Factors Approach.
Dekker begins with a discussion of our push for “perfection” in medicine and other healthcare professions, which leads to a focus on individuals rather than systems. He then talks about the bureaucratization that can actually further perpetuate the focus on the individual, not only in healthcare but in other industries as well. There is a sort of “heroism” that comes from being able to get the job done despite all the barriers put in place by the system.
He then goes on to discuss a variety of human factors concepts, including hindsight bias, counterfactuals, outcome bias, the local rationality principle, satisficing, conflicting goals and efficiency/safety tradeoffs, and cognitive concepts such as schemata. His discussion of the focusing of attention is excellent. At one end of a continuum is cognitive fixation (where controverting evidence is ignored). At the other end is thematic vagabonding (where one jumps to new ideas with each new clue). The discussions of prospective memory (“remembering to remember”), distractions and interruptions, mental models, and heuristics are very helpful in understanding some aspects of human behavior.
Then he goes into the most important concepts: how the environment and conditions in the system interact with humans. He has a great chapter on how new technologies change the system. Not only might they produce some of the improvements they were intended to, but they may also have numerous unintended consequences, including data overload, automation surprises, tighter coupling, new roles and new interpersonal conflicts, and redistribution of workloads. The latter includes redistribution of workload among workers and redistribution in time (such that the technology may demand increased cognitive workload precisely during rapidly evolving situations).
He gives a great history of the evolution of human factors approaches to safety and accidents. You’ll recognize many of the names from our prior columns (Reason, Rasmussen, Hollnagel, Klein, Perrow, et al.). He moves from Turner’s man-made disaster theory, to Reason’s Swiss cheese model, to Perrow’s normal accident theory (where interactive complexity and tight coupling may lead to accidents even during “normal” operations). With each new “disaster,” theories and models get redefined. The Challenger disaster, in particular, brought to light the importance of concepts like “normalization of deviance,” where successes despite problems lead to the acceptance of those problems as “normal” and therefore tolerable. These are followed by extensive discussions of control theory and high reliability organizations.
Perhaps the most interesting contribution is Chapter 6, Practical Tools for Creating Safety. This chapter contains many lessons that we tend to overlook in our own patient safety endeavors. First he talks about event reporting systems. We’ve always stressed the need for voluntary, nonpunitive reporting systems so that appropriate events and near-misses get reported and we may learn from them. He reiterates the need for protected reporting systems but notes they should be confidential rather than anonymous. If reports are anonymous, one cannot contact the reporter for details or apprise the reporter of actions taken as a result of the report. He also notes that anonymous reporting systems tend to attract “vitriol” and bickering that just clog up the system with senseless items that lead to no learning. In a confidential system, the name of the reporter and any other identifying items are separated from the narrative.
He strongly stresses that the learning potential of reporting systems lies in the narratives, and he does not like systems that pigeonhole reports into categories. The latter tend to get reported out in bar graphs or pie charts as meaningless statistics that do not lead to organizational learning but may give false impressions of “improvement” or a false sense that the system is “safe.” Frequent readers of our columns have often heard us talk about “stories, not statistics” as being the heart and soul of the patient safety movement.
But Dekker also talks about what should get reported, noting that what some consider near-misses, others do not. The “local rationality” concept may lead some practitioners to regard certain events as “normal.”
Getting people to report (and sustaining reporting) is all about building trust. And, interestingly, the work of many researchers suggests that fear of punitive action or retribution is not the major reason people don’t report. Rather, it is a lack of conviction that the organization will use the reports for meaningful learning. Building trust that such reports will lead to meaningful improvements in safety is empowering and apparently a much better incentive for sustaining event reporting systems.
He goes on to describe what a safety department should look like and emphasizes the four “I’s”: informed, independent, informative, and involved (noting that balancing some of the “I’s” against one another is sometimes difficult). He makes a very strong case for the safety department to be independent, both politically and financially. While needing immediate access to top-level decision makers in the organization, it must be independent in dealing with the inevitable efficiency/safety tradeoffs that occur in any organization. Its budget, in particular, needs to be insulated from cost-cutting during periods of economic difficulty. Those are the times when safety issues are especially likely to occur, as other parts of the organization try to deliver outcomes despite resource challenges. Staying informed means keeping close to operational activities in order to understand workflows and the factors at play in actual operations, and to be able to convey to all levels of the organization the various perspectives on safety (and efficiency). Similarly, the safety department needs to be informative to both upper levels and the front line. Interestingly, he points out that part-time members of the safety department are especially valuable because they typically also work in other areas of the organization at an operational level.
His discussion of adverse event investigations is excellent. He nicely describes moving from a “first” story, in which organizations tend to focus on blaming individuals and consider the system “safe,” to the human factors approach, where the focus is not on individuals but rather on the system and how system factors influenced how individuals acted during an evolving event. He spends a lot of time on avoiding hindsight bias (and outcome bias). He also dislikes the term “root cause analysis,” since most root causes identified are still quite subjective and prone to various biases. He prefers to focus more on the “doables” in an RCA or any adverse event investigation. The focus is really on putting yourself in the minds of the event participants, trying to see what they were seeing as the event unfolded, what information was available to them, what conflicting goals they had, etc. Only then can the system be redesigned or otherwise altered to prevent the same thing from happening to other workers at the “sharp” end.
Involving practitioners who actually participated in the event is important, not only because of their unique perspectives but also because it gives them a sense of contributing to meaningful safety improvements. Also, when potential solutions ultimately get implemented, most practitioners are influenced more by their peers than by top-down communication.
His section on communication and coordination is great. He discusses the technique of conversation analysis, often used in aviation event investigation. Research using that technique has identified patterns such as “overlapping” talk (two parties talking simultaneously), the absence of a response when one is clearly anticipated, and “repair” (attempts to recover from some other sort of communication problem). All are potential signs of trouble. He goes on to discuss the research on “mitigation,” which means reducing the severity, seriousness, or painfulness of something. He does this with a healthcare anecdote about the need to give a patient an additional 5 ml of a medication and shows six ways in which one practitioner might communicate that to another. The “mitigated” ways are not likely to get the job done. There is a great discussion about how social interaction and “political correctness” may interfere with effective communication in high-risk settings. NASA established a set of recommendations for effective communication: opening (getting someone’s attention), concern (stating the level of concern), problem (clearly defining the problem at hand), solution (suggesting a possible course of action), and agreement. Note the similarities to the SBAR format we often recommend in healthcare for handoffs and other interactions.
Teamwork and crew resource management techniques are also important. An interesting fact that we were not previously aware of is that serious aviation accidents are more likely when the captain is flying the aircraft (ordinarily the captain and copilot split flying about 50/50, with the nonflying person attending to a whole host of other activities in addition to observing the pilot). This is likely a reflection of hierarchical structure and failure to speak up. He spends a lot of time talking about methods to get people to speak up in a variety of settings (e.g., preoperative briefings). And he really focuses on the need for diversity on healthcare teams. By that he means it is important to have people on the team who bring different skills and expertise and have different perspectives. That is often helpful in a rapidly evolving situation, though he also cautions that too many participants may lead to “groupthink,” which often ends up producing more extreme solutions. In addition to briefings, he has a good discussion of checklists (which serve both as memory aids and as a means of communicating).
A full chapter on all the concepts involved in developing a “just” culture is very informative. And he finishes with a chapter on future thinking in healthcare, in which he makes the case that healthcare is complex rather than merely complicated. The distinction is not esoteric: complicated systems are still stable, somewhat predictable systems, whereas complex ones are dynamic, always changing, and subject to multiple interactions with humans and the environment.
This is not the kind of book you can’t put down. You will gain the most from it by going back and reading it several times. But the concepts are powerful, and you cannot be involved in patient safety today without understanding the available research on human factors. Dekker, as in all his prior works, does not disappoint. This is a solid addition to your patient safety library.
Reference:
Dekker S. Patient Safety: A Human Factors Approach. Boca Raton, FL: CRC Press, Taylor & Francis Group; 2011.