You can guarantee that during any discussion about human factors in medicine the statistic that medical errors are the third most common cause of patient death will be brought up. A figure of 250,000 to 400,000 deaths a year is often quoted in the media. It provokes passionate exhortations to action: calls for new initiatives to reduce error and for patients to speak up against negligent medical workers.
It’s essential that everyone working in healthcare does their best to reduce error. This blog is not looking to argue that human factors aren’t important. However, that statistic seems rather large. Does evidence really show that medical errors kill nearly half a million people every year? The short answer is no. Here’s why.
Where did the figure come from?
The statistic came from a BMJ article in 2016. The authors, Martin Makary and Michael Daniel of Johns Hopkins University in Baltimore, USA, used previous studies to extrapolate an estimate of the number of deaths in the US every year due to medical error. This produced the statistic of 250,000 to 400,000 deaths a year. They petitioned the CDC to allow physicians to list 'medical error' on death certificates. This figure, if correct, would make medical error the third most common cause of death in the US after heart disease (610,000 deaths a year) and cancer (609,640 deaths a year). If correct, it would mean that medical error kills ten times the number of Americans that automobile accidents do. Every single year.
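To see how striking the claim is, here's a quick sanity check in Python using the death counts quoted above. The motor vehicle figure (roughly 40,000 deaths a year) is my own ballpark approximation, not a number from the paper:

```python
# Leading causes of US deaths per year, using the counts quoted above.
# The motor vehicle figure is an approximation, not from the BMJ article.
causes = {
    "Heart disease": 610_000,
    "Cancer": 609_640,
    "Medical error (claimed upper bound)": 400_000,
    "Motor vehicle accidents (approx.)": 40_000,
}

# Sort from most to fewest deaths and print the resulting ranking.
for rank, (cause, deaths) in enumerate(
        sorted(causes.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {cause}: {deaths:,}")
```

Run it and 'medical error' slots in at number three, which is exactly the headline the paper generated.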
Problems with the research
Delving deeper, Makary and Daniel didn't look at the total number of deaths every year in the US, which is 2,813,503. Instead they looked at the number of patients dying in US hospitals every year, which has been reported at 715,000. So if Makary and Daniel are correct with their 250,000 to 400,000 figure, that would mean that 35-56% of hospital deaths in the US every year are due to medical error. This seems implausible, to put it mildly.
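That arithmetic is easy to verify. A minimal sketch, using the figures above:

```python
# Figures quoted above.
hospital_deaths_per_year = 715_000          # reported inpatient deaths in US hospitals
claimed_error_deaths = (250_000, 400_000)   # Makary and Daniel's estimated range

# The share of all hospital deaths each claimed figure would imply.
for claimed in claimed_error_deaths:
    share = claimed / hospital_deaths_per_year
    print(f"{claimed:,} error deaths -> {share:.0%} of hospital deaths")
```

This prints roughly 35% and 56%: between a third and over half of every death in a US hospital attributed to medical error.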
It needs to be said that this was not an original piece of research. As I said earlier, it was an analysis and extrapolation of previous studies, all with flaws in their design. In their research, Makary and Daniel used a very broad and vague definition of 'medical error':
“Medical error has been defined as an unintended act (either of omission or commission) or one that does not achieve its intended outcome, the failure of a planned action to be completed as intended (an error of execution), the use of a wrong plan to achieve an aim (an error of planning), or a deviation from the process of care that may or may not cause harm to the patient.”
It’s worth highlighting a few points here:
Let's look at the bit about "does not achieve its intended outcome". Say a surgery is planned to remove a cancerous bowel tumour. The surgeon may well plan to remove the whole tumour. Now say that during the surgery they realise the cancer is too advanced and abort the operation in favour of palliation. That's not the intended outcome of the surgery. But is it medical error? If that patient then died of their cancer, was their death due to that unintended outcome of surgery? Probably not. Makary and Daniel didn't make that distinction, though. They would have recorded that a medical error took place and that the patient died.
There was no distinction as to whether deaths were avoidable or not. They used data designed for insurance billing, not for clinical research. Nor did they look at whether errors "may or may not cause harm to the patient", just that they occurred. They also applied value judgements when reporting cases such as this:
“A young woman recovered well after a successful transplant operation. However, she was readmitted for non-specific complaints that were evaluated with extensive tests, some of which were unnecessary, including a pericardiocentesis. She was discharged but came back to the hospital days later with intra-abdominal hemorrhage and cardiopulmonary arrest. An autopsy revealed that the needle inserted during the pericardiocentesis grazed the liver causing a pseudoaneurysm that resulted in subsequent rupture and death. The death certificate listed the cause of death as cardiovascular.”
Notice the phrase "extensive tests, some of which were unnecessary". Says who? We can't tell how they made that judgement. It is unfortunate that this patient died, but fewer than 1% of patients having a pericardiocentesis die from injury caused by the procedure. Bleeding is a known complication of pericardiocentesis, one the patient would have been consented for, and even the most skilled operator cannot avoid all complications. It is therefore a stretch to put this death down to medical error.
This great blog post by oncologist David Gorski goes into much more detail about the flaws in Makary and Daniel's work.
So what is the real figure?
A study published earlier this year (which, it has to be said, received much less fanfare) explored the impact of error on patient mortality. Its authors studied the impact of all adverse events (medical and otherwise) on mortality rates in the US between 1990 and 2016. They found that the number of deaths due to adverse events over that whole 26-year period was 123,603. That's 4754 deaths a year: somewhere between a fiftieth and a hundredth of the figures bandied around following Makary and Daniel (2016). Based on 2,813,503 total deaths in the US every year, that makes adverse events responsible for 0.17% of deaths in the US. Not a third of hospital deaths. 0.17% of all deaths.
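Again, the arithmetic is simple to check. A minimal sketch, using the study's figures as quoted above:

```python
# Figures from the 1990-2016 adverse events study, plus the annual US total quoted above.
adverse_event_deaths_total = 123_603   # deaths over the whole study period
period_years = 26
total_us_deaths_per_year = 2_813_503

# Average annual deaths from adverse events, and their share of all US deaths.
deaths_per_year = adverse_event_deaths_total / period_years
share_of_all_deaths = deaths_per_year / total_us_deaths_per_year

print(f"{deaths_per_year:.0f} adverse event deaths a year")  # 4754
print(f"{share_of_all_deaths:.2%} of all US deaths")         # 0.17%
```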
Of course, 4754 deaths every year due to adverse events is 4754 too many. One death due to adverse events would be one too many. We have to study and change processes to prevent these avoidable deaths. But we don’t do those patients any favours by propagating false figures.
Thanks for reading.
- Jamie