A newspaper in my mother’s town ran an article announcing that the emergency room received a Women’s Choice Award and was “among America’s best hospitals for emergency room care”. I was curious about the award, so I looked into this. I am sharing what I learned because it is an example of how publicly available hospital performance datasets can be used to mislead and misinform healthcare consumers.
The award comes from a company called WomenCertified, whose website lists "America's Best Hospitals for Emergency Care" for 2015. The list can be sorted by state; Kansas has sixteen hospitals on it.
The company explains the purpose of these ratings in this way:
Choosing the right hospital can greatly increase your health outcomes and we’re here to help simplify your choice. To further support your decision, we’ve featured the best hospitals for emergency care below.
Last I heard, ranking and rating hospitals in a way that has meaning and value to healthcare consumers is a difficult and messy problem with no easy answers. Perhaps someone has figured this out?
Their award methodology page provides details. They began with the data released by the Centers for Medicare & Medicaid Services (CMS). They selected eight measures related to emergency visits. They surveyed women online about which of the eight measures were most important to them, then weighted the hospital data according to the survey results. Hospitals that scored very poorly in patient recommendations were eliminated. The remaining top 10% received an award.
This brings up a couple of questions: 1) How did they deal with missing data? Unless every emergency room in your area is providing all the data needed to CMS, it is going to be difficult to compare all the emergency rooms in your area. 2) Why does this analysis depend on ranking the measures of emergency department performance? Are we trusting that women polled over the internet have greater knowledge of the metrics of emergency room quality than people who are trained in the analysis of healthcare data?
This is the ranking that comes out of the online survey, most important to least important:
- Average number of minutes before outpatients with chest pain or possible heart attack got an ECG
- Average time patients spent in the emergency department, before they were admitted to the hospital as an inpatient
- Average time patients spent in the emergency department before they were seen by a healthcare professional
- Average time patients spent in the emergency department after the doctor decided to admit them as an inpatient, before leaving for their inpatient room
- Average time patients spent in the emergency department before being sent home
- Average time patients who came to the emergency department with broken bones had to wait before receiving pain medication
- Percentage of patients who came to the emergency department with stroke symptoms who received brain scan results within 45 minutes of arrival
- Percentage of patients who left the emergency department before being seen
(I guess this means that women highly value quick response times if you might be having a heart attack. Not so much if you might be having a stroke.)
Learning more about the methodology
I wrote to the company:
I received a response:
So that answered my question about missing data. If a measure was not reported, it was treated as a zero. And that is a pretty steep weighting based on the survey results.
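To make concrete what treating an unreported measure as zero does to a weighted score, here is a minimal sketch. The measure names, weights, and scores are all invented for illustration; only the missing-equals-zero rule comes from the company's reply.

```python
# Hypothetical weights over four measures (invented for illustration;
# the real weights came from the company's online survey).
weights = {"ecg_time": 0.30, "admit_wait": 0.25,
           "door_to_provider": 0.25, "boarding_time": 0.20}

def weighted_score(measures):
    """Weighted sum of a hospital's scores; a missing measure counts as 0."""
    return sum(w * measures.get(m, 0.0) for m, w in weights.items())

# A hospital with middling scores on every measure...
full_reporter = {m: 80 for m in weights}
# ...outranks an excellent hospital that did not report one measure.
partial_reporter = {"ecg_time": 95, "admit_wait": 95, "door_to_provider": 95}

print(weighted_score(full_reporter))     # 80.0
print(weighted_score(partial_reporter))  # 76.0
```

Under this rule, a hospital that skips a single heavily weighted measure can rank below a hospital that reports mediocre numbers across the board, which is exactly why the missing-data question matters.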
Looking at the CMS data
I was curious just how much missing data there was. I downloaded the CMS data and took a look. I used the most recent data available (July 1, 2013 – June 30, 2014). If you download the "CSV flat files" you can open the data in a program such as Microsoft Excel. (It is a large file. The first thing I did was to locate all the Kansas hospitals and make a smaller file of just that data.)
The extent of the missing data was worse than I imagined. Of the 130 hospitals in Kansas, fewer than half had emergency room data available.
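The same filter-and-count step can be sketched in Python instead of a spreadsheet. The toy rows below mimic the shape of the CMS flat file as I understand it, with one row per hospital per measure and "Not Available" marking unreported data; the column names and measure IDs are assumptions and may differ between releases.

```python
import csv
import io

# Toy stand-in for the CMS "flat file": one row per hospital per measure.
# Column names and measure IDs are assumptions, not the exact CMS schema.
flat_file = io.StringIO(
    "Provider ID,State,Measure ID,Score\n"
    "H1,KS,ED_1b,210\n"
    "H1,KS,OP_18b,95\n"
    "H2,KS,ED_1b,Not Available\n"
    "H3,MO,ED_1b,180\n"
)

# Keep only Kansas rows, then compare hospitals present in the file
# against hospitals that actually reported at least one score.
rows = [r for r in csv.DictReader(flat_file) if r["State"] == "KS"]
hospitals = {r["Provider ID"] for r in rows}
reported = {r["Provider ID"] for r in rows if r["Score"] != "Not Available"}

print(len(hospitals), "Kansas hospitals in the sample")   # 2
print(len(reported), "reported at least one ED measure")  # 1
```

Run against the real download, a count like this is how you would quantify the gap between hospitals in the file and hospitals with usable emergency room data.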
Making some graphs
For each of the top four measures in the survey, I graphed the data for each Kansas hospital. Each hospital is represented by a circle; filled circles mark hospitals that received the award. Notice the big block of circles to the right of each graph, representing the hospitals without data.
These graphs tell us that of the hospitals reporting data, the ones that received the award tend to have shorter wait times. This makes sense, because both the measures and the survey stress short wait times.
It is possible that some of the hospitals without data do not have emergency rooms. I looked around for data on the number of emergency rooms in Kansas, and found a site that lists the number of annual ER visits in Kansas by hospital. That list covers 130 hospitals, so the missing data cannot simply be explained by hospitals lacking emergency rooms — it really is a huge problem.
What can we conclude from this?
1. If you are looking for an emergency room, you cannot conclude anything from this award unless all the emergency rooms you are considering are represented in the CMS dataset. You cannot conclude that the hospitals without data are inferior; their data are simply unknown. Even if all the hospitals you are considering do have data, it is still unclear what conclusions can be reached. A methodology that weights measures based on an internet survey may be "objective and uniform" (as the email points out), but there is no evidence that it is meaningful or valid for making healthcare decisions.
2. If you are experiencing a medical emergency, your situation is local, so knowing there is a more highly ranked emergency room elsewhere in the state is not helpful. Getting to the nearest emergency room should be your goal. This is one reason that even U.S. News & World Report (whose popular list of best hospitals has drawn strong criticism) does not publish rankings of emergency departments. They also point out the need to develop meaningful measures of quality for emergency care, which is a current research problem.
3. Evidently, there is money to be made in recommending products and services. WomenCertified creates awards based on internet surveys, and then (as explained on the disclaimer page) “receives a licensing fee and/or advertising exposure from businesses that enter into a Partnership or Corporate Partnership with WomenCertified Inc.” This company gives awards to many types of products and services (not just hospital care), such as breakfast cereal, pet care products, and financial advisers.
I want to make it clear that WomenCertified is not an expert in healthcare. The company says so on its own disclaimer page:
Now, it seems to me that if you receive an award for excellence in your industry, but the organization presenting the award is not an expert in your industry, you are going to be a little cautious about how you interpret that award. (Or, as we have seen, maybe not.)
4. Small town newspapers do not always ask questions about the news articles they publish. (The newspaper article I referred to was written by a staff member at the hospital.) Shouldn’t there be a separation between news and advertising?
Where does this leave healthcare consumers concerned with patient safety?
I fully support making hospital performance data public. But one of the unfortunate consequences is that it will be misused. As an information designer working in biomedical informatics, I understand the need to simplify complex data. But I also believe there is a moral responsibility when presenting information, especially when that information is intended to inform healthcare decisions.
Healthcare consumers are in a tough position. It took quite a bit of time to dig into what this award represents and what it does not represent. This is a case where a newspaper article was really an advertisement that conveniently left out the details necessary to understand the information. I expect better of healthcare providers. And I expect better of newspapers.
It is good for consumers to make informed use of hospital performance measures. But start with the real data, on the Hospital Compare website.