All of you have probably had fun at one time or another uttering short phrases or paragraphs into speech recognition software and chuckling at the bizarre things that pop up on your screen. But suppose those bizarre things were popping up in medical reports you were sending out to others.
Many different venues in medicine have adopted speech recognition software to improve efficiencies and reduce costs. It can be especially useful in those areas where timeliness of reports is important. For example, an emergency department physician can dictate via speech recognition software and have immediate access to a typed note that he/she can edit and sign before the end of his/her shift. That also means the note will be immediately available to all others with access to the computer system (with conventional dictation transcription services the typed note may not be available for 24 hours).
Radiology is another area where timely reports are valuable, and radiology is typically the service that adopted speech recognition software earliest. Radiology reports, whether “normal” or “abnormal”, are highly structured, and one can develop templates that cover many of the elements needed in a report. For example, one could state “load normal chest x-ray template”, then make any minor editing changes needed for the actual chest x-ray being read. That can save considerable time compared to dictating the whole report from scratch. Indeed, use of speech recognition systems has improved radiology report turnaround times (TATs) considerably.
However, as reports get more complex, templates become less useful and the dictation process becomes more complicated. That substantially increases the chance that errors will appear in the reports.
A study just published (Basma 2011) found that error rates in breast imaging reports were substantially higher in those done by speech recognition software than in those done by traditional dictation transcription. In fact, at least one major error was found in 23% of reports dictated via automated speech recognition compared to 4% of those done by traditional dictation transcription, an error rate almost 8 times higher! They found no difference in error rate whether the report was dictated by a resident/fellow or an attending radiologist, or whether the person dictating had English as his/her first language. The modality for which the report was being done did influence the error frequency, with those requiring more complicated reports (e.g., breast MRI) having more frequent errors.
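(As an aside, for those curious about where that multiplier comes from: the short sketch below is our own back-of-the-envelope arithmetic, not figures taken from the paper. A simple ratio of the two reported rates is closer to 6, while a comparison of the odds comes closer to the “almost 8 times” figure, which presumably reflects the adjusted analysis reported in the study.)

```python
# Our own illustrative arithmetic (not values taken from the Basma paper)
# comparing the two reported major-error rates.
asr_rate = 0.23   # major-error rate with automatic speech recognition
dt_rate = 0.04    # major-error rate with traditional dictation transcription

rate_ratio = asr_rate / dt_rate                                        # ~5.8
odds_ratio = (asr_rate / (1 - asr_rate)) / (dt_rate / (1 - dt_rate))   # ~7.2

print(f"ratio of rates: {rate_ratio:.1f}")
print(f"ratio of odds:  {odds_ratio:.1f}")
```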
The types of errors were similar in the reports generated by speech recognition software and traditional dictation. The most common error was addition of a word, but word omission, word substitution, and punctuation errors were also common. Incorrect measurements or incorrect units of measurement were also seen. Errors were most common in the “Findings” section of the imaging reports but could be found in most sections.
Because the above study was done at a single academic teaching organization and most cases were discussed at a multidisciplinary case conference before interventions were done, the authors did not feel any patient was adversely impacted by these errors. However, one can readily anticipate how such errors could adversely impact patient care, particularly if the physician receiving the report is outside the hospital system and does not have access to the images themselves. Imagine the impact of omission of “no” before “evidence of cancer” or, conversely, erroneous addition of “no” before “evidence of cancer”!
Factors other than faulty software also come into play, whether you are using speech recognition software or other dictation systems. Background noise, the type of microphone you are using, and similar factors may influence the output. Similarly, systems may not pick up the voice inflections you use when trying to emphasize something. So don’t just blame your system. Make sure you appropriately review and edit your reports or notes.
Radiologists themselves are largely unaware of the frequency of errors in reports generated by speech recognition software. One study (Quint 2008) found errors in 22% of radiology reports, whereas the radiologists estimated that error rates would be well under 10% for the radiology department as a whole and even lower for themselves. These were errors that could convey incorrect or confusing information. Examples included incorrect words, omitted words, nonsense phrases, added words, right–left substitutions, and incorrect measurements or units of measurement. Incorrect image numbers and errors related to templates were also seen. A large number of nonsense errors with speech recognition technology were also seen in a recent study from Australia (Chang 2011).
When speech recognition software first became available, accuracy rates around 95% were often quoted. That was hardly acceptable, because it took more time to edit the reports than to just use traditional dictation systems. As the accuracy of speech recognition systems has improved, they have become much more efficient and now save time and costs. But no system is perfect, and it is imperative that each report be carefully reviewed and edited before it is signed. Also keep in mind that some systems may allow “pending” reports (not yet reviewed or signed by the radiologist) to be seen.
Even when your errors do not impact patient care directly, they can be a reflection of your professionalism. If we see a report or note that is replete with grammatical or punctuation errors, we may infer some degree of sloppiness or disorganization and assume, rightly or wrongly, a similar sloppiness in the thinking processes of the author.
And then there are always “automation surprises” (see our November 6, 2007 Patient Safety Tips of the Week “Don Norman Does It Again!” and May 19, 2009 “Learning from Tragedies”). Our favorite example is when we try to type EHR (for electronic health record) and our word processor’s spell checker automatically changes it to HER (try it – yours probably does this too!).
Even as we do this column each week and review it twice after running it through a spell checker and a grammar checker, we are sometimes aghast when we see obvious mistakes in the online posting that we missed.
So how do mistakes get overlooked when we review and edit our reports? The number one contributory factor is usually time pressure. In our haste to get the report done, with a big queue of other reports still to review, we simply don’t review and edit thoroughly. One of the early studies on report errors related to speech recognition systems (McGurk 2008) noted that such errors were more common in busy areas with high background noise or high workloads. Though not statistically significant, there was a trend toward lower error rates with more junior staff (note that Basma et al. had also noted fewer minor errors in reports where junior staff were involved).
But a second phenomenon happens as well. Our minds play tricks on us, and we often “see” what we think we should see. During some of our presentations we show many examples of orders or chart notes with obvious omissions, where the audience unconsciously “fills in the gaps” and thinks they saw something that wasn’t there (“of course they meant milligrams”). It is easy for us to do the same thing when we are reading our own reports. In addition, the “recency” phenomenon probably comes into play, where the radiologist perceives what he/she just dictated rather than what is actually on the screen. The Quint paper suggests that mistakes like this may be more likely to slip through the sooner you review your report. The authors even suggest that reviewing your report 6-24 hours after dictation, rather than immediately, may reduce the error rate.
Dictating in an environment with minimal background noise can help reduce errors. And McGurk et al note that use of “macros” for common standard phrases also reduces the error rates.
We’re willing to bet that most of you have no idea what your error rate is, regardless of whether you are using automated speech recognition software or traditional dictation transcription services.
Obviously, you need to include an audit of report errors as part of your QI process, not only for radiology but for any service that generates reports of any kind, whether done by speech recognition software or more traditional transcription. While random selection of reports to review is a logical approach, there are other approaches that may make more sense. Part of the peer review process in radiology is to have radiologists review the images that a colleague has reported and see whether the findings concur. One could certainly add checking for report errors as part of that process. In the Quint paper, the reports were analyzed as they came up as part of a weekly multidisciplinary cancer conference. Reviewing them in a fashion like this makes the review more convenient and also adds context to the review. One gets to see how the errors could potentially impact patient care adversely. We like that approach where such multidisciplinary conferences take place. It also tends to raise awareness of the existence and scope of report errors not only among the people generating the reports, but also among those reading them.
Your radiology report is really your interface with the rest of the healthcare system (Kanne 2011), so you want to make sure you get it right. Integrating evaluation of your reports into your QI program is thus critical.
So make sure you are determining your error rates in all your dictated reports (whether done by traditional dictation or speech recognition) and feeding back those error rates to the providers doing the reports. Such feedback was important in reducing the error rates in the study by McGurk et al.
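For those who want to operationalize that feedback, the sketch below shows one minimal way audited reports might be tallied into per-provider error rates. The file name and column names (“provider”, “error_found”) are purely hypothetical placeholders and would have to match whatever your own QI audit actually records.

```python
import csv
from collections import defaultdict

# Minimal sketch: tally reviewed audit results into per-provider error rates.
# "report_audit.csv" and its columns ("provider", "error_found") are
# hypothetical placeholders for whatever your own QI audit captures.
totals = defaultdict(lambda: {"reports": 0, "errors": 0})

with open("report_audit.csv", newline="") as f:
    for row in csv.DictReader(f):
        stats = totals[row["provider"]]
        stats["reports"] += 1
        if row["error_found"].strip().lower() == "yes":
            stats["errors"] += 1

# Feed these rates back to the individual providers doing the reports.
for provider, stats in sorted(totals.items()):
    rate = stats["errors"] / stats["reports"]
    print(f"{provider}: {stats['errors']}/{stats['reports']} reports "
          f"with at least one error ({rate:.0%})")
```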
References:
Basma S, Lord B, Jacks LM, et al. Error Rates in Breast Imaging Reports: Comparison of Automatic Speech Recognition and Dictation Transcription. AJR 2011; 197: 923-927
http://www.ajronline.org/content/197/4/923.abstract
Quint LE, Quint DJ, Myles JD. Frequency and Spectrum of Errors in Final Radiology Reports Generated With Automatic Speech Recognition Technology. Journal of the American College of Radiology 2008; 5(12): 1196-1199
http://www.jacr.org/article/S1546-1440%2808%2900361-X/abstract
Chang CA, Strahan R, Jolley D. Non-Clinical Errors Using Voice Recognition Dictation Software for Radiology Reports: A Retrospective Audit. Journal of Digital Imaging 2011; 24(4): 724-728
http://www.springerlink.com/content/94gxn0q8765451h5/
McGurk S, Brauer K, MacFarlane TV, Duncan KA. The Effect of Voice Recognition Software on Comparative Error Rates in Radiology Reports. British Journal of Radiology 2008; 81: 767-770
Kanne JP. Quality Management in Cardiopulmonary Imaging. Journal of Thoracic Imaging 2011; 26(1): 10-17