A young woman presents to a busy hospital emergency room complaining of tingling in her fingers and toes. Her neurological exam is “WNL”. When asked if she’s been having trouble with her boyfriend, she replies “yes”. She is sent home with a diagnosis of anxiety and hyperventilation. She returns in 48 hours with quadriparesis and a vital capacity of 1000 cc and needs urgent intubation. It’s now apparent she has acute Guillain-Barre syndrome. (Ropper 1991)
A patient presents to a busy clinic complaining of abdominal pain and 3 days of constipation not responding to over-the-counter laxatives. He’s given a cursory examination and a prescription for stronger laxatives. Later that evening he dies from a ruptured abdominal aortic aneurysm. (Croskerry 2010)
An overweight woman with diabetes presents to her primary care physician’s office with an axillary rash. She’s diagnosed as having intertrigo and given a prescription for a cortisone/antifungal ointment. Shortly thereafter, while the PCP is away on vacation, the PCP’s partner sees the patient for complaints of knee pain and other joint pain. That physician attributes the pain to arthritis related to obesity and prescribes ibuprofen. The patient later sees a rheumatologist, who notes that the rash was an erythematous ring with central clearing, typical of Lyme disease. (Wellbery 2011)
These are but a few examples of diagnostic errors that illustrate some of the biases in our thinking and decision-making that can lead to erroneous diagnoses and care plans. The first example was a scenario we saw several times when putting together a monograph on Guillain-Barre syndrome (GBS). It is an example of how we can be misled by normal findings on examination (or other tests). Early in the course of GBS, strength and gross sensory testing may indeed appear to be normal on examination. However, there are usually other clues present: early loss of deep tendon reflexes and an unexplained sinus tachycardia are common. Missing those clues, the clinician concludes the exam is normal, looks for alternative diagnoses, and might think about psychiatric ones. When he/she then hears about the “trouble with the boyfriend”, that may be taken as evidence supporting such a diagnosis (“confirmation bias”).
The second case is one that unfortunately plays out all too frequently. It illustrates a problem with “framing”. The clinician latches on to the “constipation” part of the history and frames the diagnostic thought process around it. Had the patient presented with complaints of abdominal pain alone, the clinician might have considered abdominal aortic aneurysm in the differential diagnosis.
The third case also illustrates a problem with framing, but it demonstrates “availability bias” as well. Availability bias is the tendency to favor a diagnosis that comes to mind with ease, often because it is familiar or has recently been seen. In this scenario the clinician thought about what is probably the most common cause of an axillary rash in an overweight diabetic patient.
Two of the more dangerous biases we are vulnerable to are anchoring and premature closure. “Anchoring” is where we latch on to one diagnosis and fail to consider others. “Premature closure” is accepting a diagnosis before it has been fully verified. In fact, Graber and colleagues (Graber 2005) found premature closure to be the single most common cognitive process contributing to diagnostic error. With either of these biases we may put in motion a management plan that is totally inappropriate, and we may even make incorrect adjustments to that plan when the improvement we expected fails to occur.
We’ve mentioned anchoring previously and noted that it becomes a more significant problem once a diagnosis or other decision has been declared publicly. Many of you have done an exercise in executive training in which you must state a position publicly. You are then given a bit of disconfirming evidence and a chance to change your decision. Almost no one changes their decision! (The scenario is actually a poorly disguised parallel of the Challenger disaster.) Another example is when we point out that a geriatric patient is on a drug on the Beers list. The physician almost never takes that patient off the drug but may in the future be less likely to prescribe that drug for other geriatric patients.
Another cognitive error often encountered is the tendency to attribute all signs or symptoms to a single condition. Graber and colleagues cite the example of a patient presenting with retrosternal and upper epigastric pain who was found to have new Q waves on EKG and elevated troponin levels. All symptoms were ascribed to acute MI and a coexisting perforated ulcer was missed.
Our September 28, 2010 Patient Safety Tip of the Week “Diagnostic Error” highlighted a review article by the Pennsylvania Patient Safety Authority (PPSA 2010) discussing many cognitive biases, including availability bias, confirmation bias (and its corollary, dismissing contrary evidence), anchoring, premature closure, context errors, and satisficing (accepting any satisfactory solution rather than an optimal one). It also talks about communication issues across the continuum of care. But, importantly, it emphasizes that system-related factors (remember: the system is usually much easier to change than the human factors) commonly contribute to diagnostic errors and that strategies to minimize those factors may reduce diagnostic errors. Such system-related factors include specimen labeling, communication of abnormal results to physicians, communication of revised reports to physicians, physician followup with patients, and managing patients across transitions of care.
We have also previously discussed the cognitive and decision-making processes that healthcare workers use. We have discussed the work of people like Gary Klein (see our May 29, 2008 Patient Safety Tip of the Week “If You Do RCA’s or Design Healthcare Processes…Read Gary Klein’s Work”) on pattern recognition and recognition-primed decision making, which typically takes place in more acute scenarios. Malcolm Gladwell’s “Blink” in our Patient Safety Library also focuses on how we use that more intuitive mode of decision-making for most of our decisions in life. And we discussed the work of Jerome Groopman (see our August 12, 2008 Patient Safety Tip of the Week “Jerome Groopman’s ‘How Doctors Think’”) on the day-to-day thinking that takes place in interacting with patients. Both types of cognitive approaches have their upsides and downsides, but both also tend to fall into similar cognitive error traps.
A terrific video on the pitfalls involved in diagnostic thinking and decision making was presented by Pat Croskerry at the Risky Business conference (Croskerry 2010). He uses multiple visual and written props to demonstrate how our cognitive thinking is influenced by the manner in which we “see” things, and they are great ways of showing how we tend to fixate on the first things we see. He goes on to discuss “intuitive” vs. “rational” thinking, noting that, though there are some advantages to intuitive thinking, there are more dangers, and that we probably spend 95% of our time in the “intuitive” sphere.
Croskerry, in another paper (Croskerry 2009), nicely describes the two approaches most commonly taken in decision making and proposes a model of diagnostic reasoning. He points out that the more “intuitive” approach and the “analytical” approach are context-sensitive and that either approach may be used under certain circumstances. Importantly, even the analytical approach may at times be overridden by the intuitive approach, increasing the likelihood of a diagnostic error. He points out that even when well-developed clinical decision rules have been shown to outperform individual decision making, some physicians persist in an irrational belief that they still know what’s best for the patient, and that this “overconfidence” is a major source of diagnostic error. The PPSA article (PPSA 2010) also speaks about the overconfidence that clinicians have in their diagnostic capabilities and attributes some of that overconfidence to the fact that they often get no feedback when their diagnoses are wrong.
Statistics from CRICO/RMF, the medical malpractice carrier/risk management organization for the Harvard hospitals and health systems, show that diagnostic error is the highest-risk area for malpractice claims, accounting for 26% of all claims. Though the true incidence of diagnostic error is unknown, studies have estimated the rate to be approximately 15%, which is in keeping with the rate detected in many autopsy series (Graber 2005). But one pretty startling statistic is that, in a study of autopsy reports from an ICU population, 43% of patients died from causes not considered or addressed in the care team’s treatment plan (Winters 2011).
Data from closed malpractice claims files may not be reflective of the types of diagnostic errors commonly made in practice. In the pediatric study done by Singh et al (Singh 2010), very different types of error were noted in practice compared to those published from claims databases in the pediatric literature. In a survey of academic and community physicians and housestaff, they found that 54% of respondents reported making diagnostic errors once or twice a month and 45% reported making errors that led to patient harm once or twice a year. Failure to gather available medical information from the history, physical, and old records was cited as the most frequent process breakdown, but failure to achieve timely followup by the patient or caregiver was a close second. The study also inquired about specific biases and found that being too focused on a diagnosis or treatment plan was the leading one. Another was being misled by a normal result (history, physical, lab, or imaging study). Of the strategies mentioned to reduce diagnostic errors, the two most frequently cited were (1) closer followup of patients and (2) use of electronic health records and decision support.
Perhaps one of the reasons there has been a dearth of research on diagnostic error is that such errors are commonly considered to be purely cognitive. But the review by Graber and colleagues (Graber 2005) notes the interplay between cognitive factors and system-related factors in producing diagnostic errors, typical of the cascade of events we usually see when we do a root cause analysis of cases where patient harm occurred. Faulty knowledge or skills was actually seldom a factor contributing to diagnostic error. Most common was faulty data gathering or flawed processing of the gathered information.
The biggest problem with diagnostic error is that the provider often is unaware that an error occurred at all, i.e. there is little feedback. So how do we better identify diagnostic error so that we may provide feedback to all involved? Singh and colleagues (Singh 2007) applied the concept of the trigger tool to help identify instances of diagnostic error. They used 2 electronic screening algorithms to identify cases for further chart review. One algorithm identified primary care visits that were followed by a hospitalization within the next 10 days. The other identified primary care visits followed within 10 days by one or more primary care visits, urgent care visits, or emergency department visits. When the medical records were reviewed blindly, evidence of diagnostic error was found in 16.1% and 9.4% of the cases identified by the first and second algorithms, respectively, compared to a 4% diagnostic error rate in randomly selected charts. In addition to diagnostic errors, the reviewers often found evidence of other management errors (e.g. inappropriate antibiotic use, failure to adjust medication dose, failure to monitor lab values, etc.). The positive predictive value of the first screen actually increased to 24.4% when adjusted for things like planned hospitalizations, and to 33% for detecting any error (diagnostic or management). So these rates are as good as or better than those of trigger tools looking for medication errors. While a broad spectrum of diagnostic errors was found, the most common were failure or delay in eliciting information and misinterpretation or suboptimal weighing of critical pieces of data from the history and physical examination. Other common errors were failure to order or delay in ordering needed tests and failure to follow up test results.
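The screening logic itself is simple enough to sketch. What follows is a minimal Python illustration, under our own assumptions, of the kind of 10-day follow-up trigger described above; the Encounter record, its field names, and the visit-type labels are hypothetical and are not the authors’ actual implementation, which queried the study site’s electronic records.

from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical, simplified encounter record. Field names and visit-type labels
# are assumptions for illustration only.
@dataclass
class Encounter:
    patient_id: str
    visit_date: date
    visit_type: str  # e.g. "primary_care", "urgent_care", "ED", "hospitalization"

WINDOW = timedelta(days=10)

def flag_index_visits(encounters):
    """Flag index primary care visits for manual chart review using the two
    screening criteria described in the study:
      Trigger 1: a hospitalization within 10 days of the index visit.
      Trigger 2: one or more primary care, urgent care, or ED visits
                 within 10 days of the index visit.
    A flag is only a prompt for blinded record review; it does not by itself
    establish that a diagnostic error occurred."""
    by_patient = {}
    for e in encounters:
        by_patient.setdefault(e.patient_id, []).append(e)

    flagged = []
    for patient_id, visits in by_patient.items():
        visits.sort(key=lambda v: v.visit_date)
        for i, index_visit in enumerate(visits):
            if index_visit.visit_type != "primary_care":
                continue
            later = [v for v in visits[i + 1:]
                     if timedelta(0) < (v.visit_date - index_visit.visit_date) <= WINDOW]
            trigger1 = any(v.visit_type == "hospitalization" for v in later)
            trigger2 = any(v.visit_type in ("primary_care", "urgent_care", "ED")
                           for v in later)
            if trigger1 or trigger2:
                flagged.append((patient_id, index_visit.visit_date, trigger1, trigger2))
    return flagged

Run over a year of visit data, a screen like this simply produces a much richer sample for chart review than random selection does (16.1% and 9.4% yields versus 4% in the study); the diagnostic error itself is still established by the blinded reviewers, not by the algorithm.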
There is even less literature on interventions to prevent diagnostic errors. We’ve often talked about asking yourself “What is the worst thing this could be?”. This may help you refocus and avoid anchoring, premature closure, and other cognitive biases. Most neurologists already do this. For example, when seeing a patient with a headache in the emergency room, we typically ask ourselves “What is the worst thing this could be?” and that usually would be a subarachnoid hemorrhage. Therefore, we are unlikely to send that patient home until we feel comfortable the patient does not have a subarachnoid hemorrhage.
But we need something more than just remembering to ask that question. Ely and colleagues (Ely 2011) have suggested the use of checklists to help avoid diagnostic errors. They actually proposed 3 types of checklists. The first is a general checklist that prompts physicians to optimize their cognitive approach (and avoid cognitive biases such as premature closure). The second is a differential diagnosis checklist based upon the presence of specific signs and symptoms, which helps avoid one of the most common causes of diagnostic error: failure to consider the diagnosis at all. The third is a forcing-function checklist that helps physicians consider some of the common pitfalls in recognizing specific diseases. The accompanying editorial (Winters 2011) supports the concept of using checklists to prevent diagnostic errors but notes that the next step, formally testing these checklists in a rigorous manner, is the difficult one. They need to show not only that the checklists reduce such errors but also that clinicians will actually use them in routine practice. Schiff and Bates (Schiff 2010) proposed a number of ways that electronic health records might be used to improve diagnostic accuracy and prevent diagnostic error. They provide at least 15 examples of how EHRs might accomplish this. They especially talk about the way the EHR might present data visually in a more useful form (e.g. trended data) to facilitate diagnostic thinking. They also note that changes in workflows and work layouts are needed to facilitate having both the patient and physician involved interactively.
The PPSA review also provides a couple of nice tools to help clinicians identify and avoid diagnostic errors. One is a chart audit tool to help identify errors, adapted from the article by Schiff et al (Schiff 2009). The other is a simple checklist the clinician can use to help focus on the things he/she needs to do in each case to avoid diagnostic errors.
The Singh pediatric study (Singh 2010) also notes the need for better use of simulation exercises in our training. We need more focus on diagnostic decision making in our medical schools and residency programs. Many of our medical schools already utilize simulations involving trained actors to improve our interviewing skills and diagnostic skills. Our August 10, 2010 Patient Safety Tip of the Week “It’s Not Always About The Evidence” discussed “contextual errors” and provided examples of how simulation exercises can be used to point out how contextual “red flags” may be missed, resulting in erroneous care.
And our frequent focus on the need for better teamwork training has relevance to diagnostic error as well. The Croskerry video discusses how team thinking may improve diagnostic thinking. He admits that occasionally a dominant team member who is in “intuitive mode” might lead the group to the wrong decision, but in general working in a team requires that you “think out loud”, which gets everyone to stop and think more in the rational or analytic mode. That is good, since errors are much more frequent when we are functioning in the intuitive mode. The Winters et al. editorial (Winters 2011) also notes the generally positive influence of teams on decision making because of diverse input and a balance of interdependent discussion with independent voting. Interestingly, they note that honeybees use group decision making very successfully!
Our discussion of cognitive errors does not apply just to diagnostic errors. Keep in mind that we can make the same sorts of cognitive errors when doing our root cause analyses (RCA’s). Anchoring, availability bias, confirmation bias, and others are common mistakes that may prevent us from coming up with the best solutions in RCA’s.
And don’t forget that the same cognitive biases that affect our healthcare lives may also impact our decision-making processes in our day-to-day lives!
Some of our prior Patient Safety Tips of the Week on diagnostic error:
· September 28, 2010 “Diagnostic Error”
· May 29, 2008 “If You Do RCA’s or Design Healthcare Processes…Read Gary Klein’s Work”
· August 12, 2008 “Jerome Groopman’s ‘How Doctors Think’”
· August 10, 2010 “It’s Not Always About The Evidence”
· And our review of Malcolm Gladwell’s “Blink” in our Patient Safety Library
References:
Ropper AH, Wijdicks EFM, Truax BT: Guillain-Barre Syndrome. FA Davis: Philadelphia 1991; pp. 224-225
Wellbery C. Curbside Consultation. Flaws in Clinical Reasoning: A Common Cause of Diagnostic Error. American Family Physician 2011; 84(9) (online)
http://www.aafp.org/afp/2011/1101/p1042.html
Croskerry P. Clinical Decision Making and Diagnostic Error (video). Risky Business. 2010
http://www.risky-business.com/talk-128-clinical-decision-making-and-diagnostic-error.html
Graber ML, Franklin N, Gordon R. Diagnostic Error in Internal Medicine. Arch Intern Med 2005; 165: 1493-1499
Pennsylvania Patient Safety Authority (PPSA). Diagnostic Error in Acute Care. Pa Patient Saf Advis 2010 Sep;7(3):76-86
http://patientsafetyauthority.org/ADVISORIES/AdvisoryLibrary/2010/Sep7%283%29/Pages/76.aspx
Croskerry P. A Universal Model of Diagnostic Reasoning. Academic Medicine 2009; 84(8): 1022-1028
CRICO/RMF. High-Risk Areas.
http://www.rmf.harvard.edu/high-risk-areas/diagnosis/index.aspx
Winters BD, Aswani MS, Pronovost PJ. Commentary: Reducing Diagnostic Errors: Another Role for Checklists? Academic Medicine 2011; 86(3): 279-281
Singh H, Thomas EJ, Wilson L, et al. Errors in Diagnosis in Pediatric Practice: A Multisite Survey. Pediatrics 2010; 126(1): 70-79
http://pediatrics.aappublications.org/content/126/1/70.full.pdf+html
Singh H, Thomas EJ, Khan MM, Petersen LA. Identifying Diagnostic Errors in Primary Care Using an Electronic Screening Algorithm. Arch Intern Med 2007; 167: 302-308
Ely JW, Graber ML, Croskerry P. Checklists to Reduce Diagnostic Errors. Academic Medicine 2011; 86(3): 307-313
Schiff GD, Bates DW. Can Electronic Clinical Documentation Help Prevent Diagnostic Errors? NEJM 2010; 362(12): 1066-1069
http://www.nejm.org/doi/pdf/10.1056/NEJMp0911734
Pennsylvania Patient Safety Authority. A Physician Checklist for Diagnosis.
Schiff GD, Hasan O, Kim S, et al. Diagnostic Error in Medicine: Analysis of 583 Physician-Reported Errors. Arch Intern Med 2009; 169: 1881-1887
http://www.patientsafetysolutions.com/