© 2019 IBM Watson Health. All rights reserved. IBM, the IBM logo, ibm.com,
Watson Health, and 100 Top Hospitals are trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service
names might be trademarks of IBM or other companies.
The information contained in this publication is intended to serve as a guide for general
comparisons and evaluations, but not as the sole basis upon which any specific
conduct is to be recommended or undertaken.
The reader bears sole risk and responsibility for any analysis, interpretation, or
conclusion based on the information contained in this publication, and IBM shall not
be responsible for any errors, misstatements, inaccuracies, or omissions contained
herein. No part of this publication may be reproduced or transmitted in any form or
by any means, electronic or mechanical, including photocopying, recording, or by
any information storage and retrieval system, without permission in writing from
IBM Watson Health.
ISBN: 978-1-57372-474-6
Contents
Introduction
2019 100 Top Hospitals award winners
2019 Everest Award winners
Findings
Methodology
Appendix A
Appendix B
Appendix C: Methodology details

Introduction

Welcome to the 26th edition of the Watson Health 100 Top Hospitals® study from IBM Watson Health™.

For over 25 years, the 100 Top Hospitals program has been producing annual, quantitative studies designed to shine a light on the nation's highest performing hospitals and health systems.

The 2019 study of US hospitals began with the same goal that has driven each study since the beginning of the 100 Top Hospitals program: to identify top performers and deliver insights that may help all healthcare organizations better focus their improvement initiatives on achieving consistent, balanced, and sustainable high performance.
100 Top Hospitals winners consistently set industry benchmarks for measures like 30-day readmissions, mortality rates, patient experience, and profit margins.

By finding ways to take balanced performance to the next level, the winners of our 100 Top Hospitals award are identifying opportunities to deliver healthcare value to patients, communities, and payers. The performance levels achieved by these hospitals may motivate their peers to use data, analytics, and benchmarks to close performance gaps.
Welcoming your input

The 100 Top Hospitals program works to ensure that the measures and methodologies used in our studies are fair, consistent, and meaningful. We continually test the validity of our performance measures and data sources. In addition, as part of our internal performance improvement process, we welcome comments about our study from health system, hospital, and physician executives. To submit comments, visit 100tophospitals.com.

Showcasing the versatility of the 100 Top Hospitals program

In addition to the major studies, customized analyses are also available from the 100 Top Hospitals program, including custom benchmark reports. Our reports are designed to help healthcare executives understand how their organizational performance compares to peers within health systems, states, and markets.

100 Top Hospitals program reports offer a two-dimensional view of performance: improvement over time, applying the most current methodologies across all years of data to produce trends, as well as the most current year's performance.
* To see a full list of our award winners through the years, visit https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=40019540USEN&.
Teaching hospitals*
Hospitals Location Medicare ID Total year(s) won
Abbott Northwestern Hospital Minneapolis, MN 240057 3
Aspirus Wausau Hospital Wausau, WI 520030 7
Brandon Regional Hospital Brandon, FL 100243 7
BSA Health System Amarillo, TX 450231 6
CHRISTUS St. Michael Health System Texarkana, TX 450801 3
Good Samaritan Hospital Cincinnati, OH 360134 6
Lakeland Medical Center St. Joseph, MI 230021 2
Mercy Hospital St. Louis St. Louis, MO 260020 7
Monmouth Medical Center Long Branch, NJ 310075 1
Morton Plant Hospital Clearwater, FL 100127 7
Mount Carmel St. Ann's Westerville, OH 360012 2
Park Nicollet Methodist Hospital St. Louis Park, MN 240053 5
Parkview Regional Medical Center Fort Wayne, IN 150021 4
PIH Health Hospital - Whittier Whittier, CA 050169 5
Riverside Medical Center Kankakee, IL 140186 10
Rose Medical Center Denver, CO 060032 12
Sentara Leigh Hospital Norfolk, VA 490046 5
Sky Ridge Medical Center Lone Tree, CO 060112 2
SSM Health St. Mary's Hospital - Madison Madison, WI 520083 6
St. Luke's Hospital Cedar Rapids, IA 160045 8
St. Mark's Hospital Salt Lake City, UT 460047 6
Sycamore Medical Center Miamisburg, OH 360239 10
UCHealth Poudre Valley Hospital Fort Collins, CO 060010 13
Utah Valley Hospital Provo, UT 460001 1
West Penn Hospital Pittsburgh, PA 390090 5
* Everest Award winners are in bold type.
Medium community hospitals*
Hospitals Location Medicare ID Total year(s) won
AdventHealth Wesley Chapel Wesley Chapel, FL 100319 2
Dupont Hospital Fort Wayne, IN 150150 5
East Cooper Medical Center Mt. Pleasant, SC 420089 1
East Liverpool City Hospital East Liverpool, OH 360096 2
Garden Grove Hospital Medical Center Garden Grove, CA 050230 5
IU Health North Hospital Carmel, IN 150161 2
IU Health West Hospital Avon, IN 150158 1
Logan Regional Hospital Logan, UT 460015 9
Memorial Hermann Katy Hospital Katy, TX 450847 3
Mercy Health - Clermont Hospital Batavia, OH 360236 10
Mercy Hospital Northwest Arkansas Rogers, AR 040010 1
Mercy Medical Center Cedar Rapids, IA 160079 7
Montclair Hospital Medical Center Montclair, CA 050758 4
Mountain View Hospital Payson, UT 460013 3
Northwestern Medicine Delnor Hospital Geneva, IL 140211 1
St. Luke's Anderson Campus Easton, PA 390326 1
St. Vincent's Medical Center Clay County Middleburg, FL 100321 1
UCHealth Medical Center of the Rockies Loveland, CO 060119 3
West Valley Medical Center Caldwell, ID 130014 6
Wooster Community Hospital Wooster, OH 360036 5
*Everest Award winners are in bold type.
2019 Everest Award winners

The Watson Health 100 Top Hospitals® Everest Award honors hospitals that have both the highest current performance and the fastest long-term improvement in the years of data analyzed. This award recognizes the boards, executives, and medical staff leaders who developed and executed the strategies that drove the highest rates of improvement, resulting in the highest performance in the US at the end of five years.

The Everest Award winners are a special group of the 100 Top Hospitals award winners that, in addition to achieving benchmark status for one year, have simultaneously set national benchmarks for the fastest long-term improvement on our national balanced scorecard. In 2019, only 15 organizations achieved this level of performance.
The value of the Everest Award measures to the healthcare industry

Leaders facing the challenges of a rapidly changing healthcare environment may benefit from unbiased intelligence that provides objective insights into complex organizational performance. Those insights may also help leaders balance short- and long-term goals to drive continuous gains in performance and value. Such insights can help leaders answer questions like:
–– What incentives do we need to implement for management to achieve the desired improvement more quickly?
–– Will the investments we are considering help us achieve improvement goals?
–– Can we quantify the long- and short-term increases in value our hospital has provided to our community?
* For full details on how the 100 Top Hospitals winners are selected, see the Methodology section of this document.
[Diagram: Everest Award winners combine top performance in the current year with the most improved performance over five years.]

Comparison groups

Because bed size and teaching status have an effect on the types of patients a hospital treats and the scope of services it provides, we assigned each hospital in the study database to one of five comparison groups according to its size and teaching status (for definitions of each group, see the Methodology section of this document):
–– Major teaching hospitals
–– Teaching hospitals
–– Large community hospitals
–– Medium community hospitals
–– Small community hospitals

To support evaluating hospitals fairly and comparing them to like hospitals, we use these comparison groups for all scoring and ranking to uncover winners. For more information on how we build the database, see the Methodology section.

Data sources

As with all 100 Top Hospitals studies, our methodology is designed to be objective, and all data comes from public sources. We build a database of short-term, acute care, nonfederal US hospitals that treat a broad spectrum of patients. The primary data sources are the Medicare Provider Analysis and Review (MEDPAR) patient claims data set, the Centers for Medicare & Medicaid Services (CMS) Hospital Compare hospital performance data set, and the Hospital Cost Report Information System Medicare Cost Report file. We use the most recent five years of data available for trending and the most current year for selection of winners*.
* Hospital inpatient mortality and complications are based on two years of data combined for each study year data point. See the Performance Measures section of this
document for details.
Performance measures

Both the 100 Top Hospitals and the Everest Award are based on a set of measures that, taken together, are designed to assess balanced performance across the organization, reflecting the leadership effectiveness of board members, management, and medical and nursing staff. These measures fall into five domains of performance: inpatient outcomes, extended outcomes, operational efficiency, financial health, and patient experience.

The 10 measures used to select the 2019 winners are:
1. Risk-adjusted inpatient mortality index
2. Risk-adjusted complications index
3. Mean healthcare-associated infection index
4. Mean 30-day risk-adjusted mortality rate (includes acute myocardial infarction [AMI], heart failure [HF], pneumonia, chronic obstructive pulmonary disease [COPD], and stroke)
5. Mean 30-day risk-adjusted readmission rate (includes AMI, HF, pneumonia, THA/TKA, COPD, and stroke)
6. Severity-adjusted average length of stay
7. Mean emergency department throughput (in minutes)
8. Case mix- and wage-adjusted inpatient expense per discharge
9. Adjusted operating profit margin
10. Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) score (overall hospital performance)

We use present-on-admission (POA) data in our proprietary risk models. POA coding became available in the 2009 MEDPAR data set.

For inpatient mortality and complications (clinical measures with a low frequency of occurrence), we combine two years of data for each study year to stabilize results. This year, we combined data sets as follows:
–– Study year 2017 = 2017 and 2016 MEDPAR data sets
–– Study year 2016 = 2016 and 2015 MEDPAR data sets
–– Study year 2015 = 2015 and 2014 MEDPAR data sets
–– Study year 2014 = 2014 and 2013 MEDPAR data sets
–– Study year 2013 = 2013 and 2012 MEDPAR data sets

For the specific data periods used for each measure, see page 47 of the Methodology section.
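The risk-adjusted indexes in measures 1 and 2 are observed-to-expected ratios: an index of 0.76, for example, means 24% fewer events than the hospital's patient mix predicts. A minimal sketch of that arithmetic, using hypothetical patient-level expected probabilities (the study's actual POA-enabled risk models are proprietary):

```python
# Sketch of an observed-to-expected (O/E) index, the general form of the
# risk-adjusted mortality and complications indexes. The expected
# probabilities below are hypothetical, not from the study's risk models.

def oe_index(observed_events, expected_probabilities):
    """Observed event count divided by the risk-model expected count."""
    expected = sum(expected_probabilities)
    return observed_events / expected

# Hypothetical hospital: 1,000 discharges, each with a 2.5% modeled
# mortality risk (expected deaths = 25), and 19 observed deaths.
index = oe_index(19, [0.025] * 1000)
print(round(index, 2))  # 0.76 -> 24% fewer deaths than expected
```

An index below 1.0 means fewer events than expected for that patient mix; above 1.0, more.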
Findings

The Watson Health 100 Top Hospitals® study shines a light on the top-performing hospitals in the country. According to publicly available data and our transparent methodologies, these industry leaders appear to have successfully negotiated the fine line between running highly effective operations and being innovative and forward-thinking in ways that grow their organizations over the short and long term.

If all hospitals performed at the level of this year's winners:
–– Over 38,000 additional patients could be complication-free
–– Over $8.2 billion in inpatient costs could be saved
–– Over 155,000 fewer discharged patients would be readmitted within 30 days
–– Patients would spend 17 minutes less in hospital emergency rooms per visit
–– The typical patient could be released from the hospital a half day sooner and would have 12% fewer expenses related to the complete episode of care than the median patient in the US

We based this analysis on the Medicare patients included in this study. If the same standards were applied to all inpatients, the impact would be even greater.

Note: All currency amounts listed in this 100 Top Hospitals study are in US dollars.

100 Top Hospitals had lower inpatient mortality*
–– Overall, the winners had 24% fewer deaths than expected (0.76 index), considering patient severity, while their nonwinning peers had 1% more deaths than would be expected (1.01 index) (Table 1)
–– Small community hospitals had the most dramatic difference between winners and nonwinners; the winning small hospital median mortality rate was 47% lower than that of nonwinning peers (Table 6)
* Risk-adjusted measures are normalized by comparison group, so results cannot be compared across comparison groups.
–– Medium community hospitals also had a significantly lower median mortality index value than nonwinning peer hospitals, with a 29.5% lower mortality index (Table 5)

100 Top Hospitals had fewer patient complications*
–– Overall, patients at the winning hospitals had 23% fewer complications than expected (0.77 index), considering patient severity, while their nonwinning peers had only 5% fewer complications than expected (0.95 index)*** (Table 1)
–– For complications, as with inpatient mortality, small community hospitals had the most dramatic difference between winners and nonwinners; the winning small hospital median observed-to-expected ratio of complications was 41.5% lower than the nonwinning peers' index value (0.54 versus 0.92) (Table 6)

100 Top Hospitals had fewer healthcare-associated infections
The healthcare-associated infection (HAI) measure** captures information about the quality of inpatient care. Based on nationwide data availability, we built a composite measure of HAI performance at the hospital level, considering up to six HAIs, depending on assigned comparison group. (The HAI measure is not ranked for small community hospitals in the 2019 study.) The six reported HAIs are: methicillin-resistant Staphylococcus aureus (MRSA) bloodstream infections, central line-associated bloodstream infections, catheter-associated urinary tract infections, Clostridium difficile (C. diff), surgical site infections (SSIs) following colon surgery, and SSIs following abdominal hysterectomy.
–– Overall, nationally, there were 35% fewer infections than expected at winning hospitals (0.65 standardized infection ratio [SIR] median), compared to 19% fewer infections at peer nonwinning hospitals (0.81 SIR median)*** (Table 1)
–– On the HAI composite index, medium community hospitals showed the widest difference between winning benchmark hospital performance and nonwinners, with the winning median HAI composite index 30% lower than the median value of nonwinners (0.51 and 0.73 median SIR values, respectively) (Table 5)
–– The winners among major teaching hospitals had 19% fewer infections than expected (0.81 SIR median), while their nonwinning major teaching peers had only 7% fewer infections than expected (0.93 SIR median) (Table 2)

100 Top Hospitals had lower 30-day mortality and readmission rates
Several patient groups are included in the 30-day mortality and readmission extended outcomes composite metrics. The mean 30-day mortality rate includes heart attack (AMI), heart failure (HF), pneumonia, chronic obstructive pulmonary disease (COPD), and stroke patient groups. The mean 30-day readmission rate includes AMI, HF, pneumonia, total hip arthroplasty and/or total knee arthroplasty (THA/TKA), COPD, and stroke patient groups.
–– Mean 30-day mortality and readmission rates were lower at the winning hospitals than at nonwinning hospitals, across all comparison groups (by 0.6 and 0.4 percentage points, respectively) (Table 1)
* Risk-adjusted measures are normalized by comparison group, so results cannot be compared across comparison groups.
** As developed by the National Healthcare Safety Network and reported by the Centers for Medicare & Medicaid Services (CMS) in the public Hospital Compare data set.
*** Mortality, complications, and HAI index values are calculated using a subset of hospitals from which the measures are developed, which is why there are instances where both peer and benchmark indexes are below 1.0.
** Includes median time from ED arrival to ED departure for admitted patients and median time from ED arrival to ED departure for non-admitted patients.
100 Top Hospitals were more profitable
–– Overall, winning hospitals had a median
operating profit margin that was 11.9
percentage points higher than nonwinning
hospitals (15.6% versus 3.8%) (Table 1)
–– The profitability difference was most dramatic in the medium community hospital group, where winners had operating profit margins 17.1 percentage points higher than nonwinners (Table 5)
–– Medium hospital winners also had the largest
median operating profit margin of any winning
group at 21.5% (Table 5)
–– In contrast, small community hospital winners
had the lowest median operating profit margin
of any winning group at 12.9% (Table 6)
1. Mortality, complications, and average length of stay based on Present on Admission (POA)-enabled risk models applied to MEDPAR 2016 and 2017 data (ALOS 2017 only).
2. Healthcare-Associated Infections (HAI) data from CMS Hospital Compare Jan 1, 2017 - Dec 31, 2017 data set (excluding small community hospitals).
3. 30-day rates from CMS Hospital Compare July 1, 2014 - June 30, 2017 data set.
4. ED measure and HCAHPS data from CMS Hospital Compare Jan 1, 2017 - Dec 31, 2017 data set.
5. Inpatient expense and operating profit margin data from CMS Hospital Cost Report Information System (HCRIS) data file, 2017.
6. We do not calculate percent difference for this measure because it is already a percent value.
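The tables' Difference and Percent difference columns follow a simple rule, spelled out in footnote 6: percent difference is the winner-versus-peer gap relative to the peer median, but it is suppressed (n/a) for measures already expressed as percent values, where only the percentage-point difference is meaningful. A sketch, using Table 2's medians as illustrative inputs (the published table computes from unrounded medians, so its figures can differ slightly):

```python
def compare(winner_median, peer_median, already_percent=False):
    """Return (difference, percent difference vs. the peer median).
    Percent difference is omitted (None) for measures that are
    already percent values, per footnote 6."""
    diff = winner_median - peer_median
    pct = None if already_percent else 100.0 * diff / peer_median
    return diff, pct

# Inpatient expense per discharge, major teaching group (Table 2):
diff, pct = compare(6761, 8027)
print(diff, round(pct, 1))  # -1266 -15.8 (table shows -$1,267 from unrounded medians)

# Operating profit margin is already a percent, so only the
# percentage-point difference is reported:
margin_diff, margin_pct = compare(13.1, 2.6, already_percent=True)
print(round(margin_diff, 1), margin_pct)  # 10.5 None
```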
Table 2. Major teaching hospital performance comparisons
(Medians: benchmark hospitals (winners) | peer hospitals (nonwinners) | difference | percent difference | how winning benchmark hospitals outperformed nonwinning peer hospitals)

Clinical Outcomes:
Inpatient Mortality Index(1) | 0.82 | 1.01 | -0.19 | -19.0% | Lower mortality
Complications Index(1) | 0.95 | 1.03 | -0.08 | -8.0% | Fewer complications
HAI Index(2) | 0.81 | 0.93 | -0.1 | -13.9% | Fewer infections

Extended Outcomes:
30-Day Mortality Rate(3) | 11.7 | 12.2 | -0.6 | n/a(6) | Lower 30-day mortality
30-Day Readmission Rate(3) | 14.6 | 15.4 | -0.7 | n/a(6) | Fewer 30-day readmissions

Operational Efficiency:
Average Length of Stay(1) | 4.3 | 4.9 | -0.6 | -13.0% | Shorter stays
ED Throughput Measure(4) | 246.5 | 315.8 | -69.3 | -21.9% | Less time to service
Inpatient Expense per Discharge(5) | $6,761 | $8,027 | -$1,267 | -15.8% | Lower inpatient cost

Financial Health:
Operating Profit Margin(5) | 13.1 | 2.6 | 10.6 | n/a(6) | Higher profitability

Patient Experience:
HCAHPS Score(4) | 270.0 | 263.0 | 7.0 | 2.7% | Better patient experience
Table 6. Small community hospital comparisons
(Medians: benchmark hospitals (winners) | peer hospitals (nonwinners) | difference | percent difference | how winning benchmark hospitals outperformed nonwinning peer hospitals)

Clinical Outcomes:
Inpatient Mortality Index(1) | 0.53 | 1.01 | -0.47 | -47.2% | Lower mortality
Complications Index(1) | 0.54 | 0.92 | -0.38 | -41.5% | Fewer complications
HAI Index(2) | n/a | n/a | n/a | n/a | n/a

Extended Outcomes:
30-Day Mortality Rate(3) | 12.1 | 12.7 | -0.6 | n/a(6) | Lower 30-day mortality
30-Day Readmission Rate(3) | 14.3 | 14.7 | -0.3 | n/a(6) | Fewer 30-day readmissions

Operational Efficiency:
Average Length of Stay(1) | 4.2 | 4.9 | -0.7 | -13.6% | Shorter stays
ED Throughput Measure(4) | 163.0 | 182.5 | -19.5 | -10.7% | Less time to service
Inpatient Expense per Discharge(5) | $6,039 | $7,379 | -$1,340 | -18.2% | Lower inpatient cost

Financial Health:
Operating Profit Margin(5) | 12.9 | 1.8 | 11.0 | n/a(6) | Higher profitability

Patient Experience:
HCAHPS Score(4) | 273.0 | 265.0 | 8.0 | 3.0% | Better patient experience
To produce this data, we calculated the 100 Top Hospitals measures at the state level*, ranked each measure, then weighted and summed the ranks to produce an overall state performance score. States were ranked from best to worst on the overall score, and the results are reported as rank quintiles.

–– The Northeast continues to show the poorest performance overall, by a large margin in both years, with 66.7% of its states in the bottom two quintiles in 2019 and 77.8% in 2018
–– The South continues to show the same pattern as last year, with the majority of its states in the bottom two quintiles (47.1% in 2019 and 52.9% in 2018)
* Each state measure is calculated from the acute care hospital data for that state (short-term, general acute care hospitals; critical access hospitals; and cardiac, orthopedic,
and women’s hospitals) with valid data for the included measures. Inpatient mortality, complications, and average LOS are aggregated from MEDPAR patient record data. HAIs,
30-day mortality rates, and 30-day readmission rates are aggregated from the numerator and denominator data for each hospital. Inpatient expense per discharge, operating
profit margin, and HCAHPS scores are hospital values weighted by the number of acute discharges at each hospital. Mean ED throughput is calculated by averaging the median
minutes of member hospitals to produce the unweighted mean minutes for each ED measure, then averaging the two ED measures to produce the state-level unweighted ED
throughput measure. For expense, profit, and HCAHPS, a mean weighted value is calculated for each state by summing the weighted hospital values and dividing by the sum of the
weights. To calculate the state overall score, individual measure ranks are weighted, using the same measure rank weights as in the 100 Top Hospitals study, then summed.
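The state scoring described above (rank each measure, weight the ranks, sum them into an overall score where lower is better) can be sketched as follows. The measures, rank weights, and direction-of-goodness flags below are illustrative stand-ins; the study's actual rank weights are those of the 100 Top Hospitals methodology:

```python
# Sketch of weighted rank-sum scoring: rank states on each measure,
# multiply each rank by a per-measure weight, and sum. All inputs here
# are hypothetical; the study uses its own measures and rank weights.

def rank(values, lower_is_better=True):
    """1 = best. Ties are broken by order of appearance (a simplification)."""
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=not lower_is_better)
    ranks = [0] * len(values)
    for position, i in enumerate(order, start=1):
        ranks[i] = position
    return ranks

def state_scores(measures, weights, directions):
    """measures: dict of name -> per-state values (states in same order)."""
    n = len(next(iter(measures.values())))
    scores = [0.0] * n
    for name, values in measures.items():
        for i, r in enumerate(rank(values, directions[name])):
            scores[i] += weights[name] * r
    return scores  # lower total = better overall performance

# Three hypothetical states, two measures:
measures = {"mortality_index": [0.95, 1.02, 0.88],
            "profit_margin": [4.0, 6.5, 2.1]}
weights = {"mortality_index": 2.0, "profit_margin": 1.0}
directions = {"mortality_index": True, "profit_margin": False}  # margin: higher is better
print(state_scores(measures, weights, directions))  # -> [6.0, 7.0, 5.0]; third state best
```

Ranking the resulting scores from best to worst and cutting them into five equal groups yields the quintiles reported in the maps.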
Figure 1. State-level performance comparisons, 2019 study
[US map: 100 Top Hospitals performance, 2019 study state-level rankings. Each state is shaded by rank quintile, from Quintile 1 (best) to Quintile 5 (worst); map omitted.]
State data note: The 2019 state findings were based on the 100 Top Hospitals measure methodologies, using 2016 and 2017 MEDPAR data (combined) for inpatient mortality and complications; July 1, 2014 - June 30, 2017, for 30-day rates; and 2017 data for all other measures.
Figure 2. State-level performance comparisons, 2018 study
[US map: 100 Top Hospitals performance, 2018 study state-level rankings. Each state is shaded by rank quintile, from Quintile 1 (best) to Quintile 5 (worst); map omitted.]
State data note: The 2018 state findings were based on the 100 Top Hospitals measure methodologies, using 2015 and 2016 MEDPAR data (combined) for inpatient mortality and complications; July 1, 2013 - June 30, 2016, for 30-day rates; and 2016 data for all other measures.
Performance improvement over time:
All hospitals
By studying the direction of performance change of
all hospitals in our study (winners and nonwinners),
we can see that US hospitals have not been able
to improve performance much across the entire
balanced scorecard of performance measures
(Table 8).
Table 8. Direction of performance change for all hospitals in study, 2013-2017
For each performance measure, in-study hospitals are counted(1) and shown as a percentage(2) in three categories: significantly improving performance, no statistically significant change in performance, and significantly declining performance. [Table rows not reproduced.]
1. Count refers to the number of in-study hospitals whose performance fell into the highlighted category on the measure. Note: The total number of hospitals included in the analysis varies by measure due to the exclusion of interquartile-range outlier data points; inpatient expense and profit are affected. Some in-study hospitals had too few data points remaining to calculate a trend.
2. Percent is of total in-study hospitals across all peer groups.
conditions: three years, combined (July 1, 2014 - June 30, 2017).

Excess days in acute care measures
The newest set of measures available from CMS in the Hospital Compare data set are the excess days in acute care (EDAC) measures for AMI and HF and, just released this year, pneumonia. CMS defines "excess days" as the difference between a hospital's average days in acute care and its expected days, based on an average hospital nationally. Days in acute care include days spent in an ED, a hospital observation unit, or a hospital inpatient unit for 30 days following a hospitalization. The data period in our study for these measures is the same as for the other 30-day metrics for specific patient conditions: three years, combined (July 1, 2014 - June 30, 2017).

90-day episode-of-care payment measure
Another measure recently made available in the Hospital Compare data set is the 90-day episode-of-care payment metric for primary, elective THA/TKA. As with the 30-day episode-of-care payment measures, CMS calculates risk-standardized payments associated with a 90-day episode of care, compared to an "average" hospital nationally. The measure summarizes payments for patients across multiple care settings, services, and supplies during the 90-day period, which starts on the day of admission. The data period for this measure combines three years, April 1, 2014 - March 31, 2017.
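The excess-days definition above reduces to a subtraction once observed and expected days are known; CMS publishes EDAC scaled per 100 discharges. A sketch of that arithmetic with entirely hypothetical inputs (the observed and expected day counts come from CMS's risk standardization, which is not reproduced here):

```python
# Sketch of an excess days in acute care (EDAC) calculation: observed
# days in acute care (ED visits, observation stays, and inpatient days
# within 30 days of discharge) minus the days expected of an average
# hospital with the same case mix, per 100 discharges. Inputs below
# are hypothetical.

def edac_per_100(observed_days, expected_days, discharges):
    return 100.0 * (observed_days - expected_days) / discharges

# Hypothetical HF cohort: 400 discharges, 2,130 observed acute-care
# days versus 2,166.4 expected days.
print(round(edac_per_100(2130, 2166.4, 400), 1))  # -9.1 (fewer days than expected)
```

A negative EDAC means patients spent fewer days in acute care than expected, which is why the winning-hospital medians in the tables above are negative.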
(Medians: winners | nonwinners | difference | percent difference)
Mammography Follow-up Rate(1) | 7.9 | 7.9 | 0.0 | n/a(3) | fewer follow-up procedures
Abdomen CT Use of Contrast Material Rate(1) | 5.5 | 5.9 | -0.4 | n/a(3) | fewer double scans
Thorax CT Use of Contrast Material Rate(1) | 0.5 | 0.6 | -0.1 | n/a(3) | fewer double scans
Appropriate Care for Sepsis Percent(2) | 54.0 | 49.0 | 5.0 | n/a(3) | greater care compliance
1. Outpatient measures from CMS Hospital Compare July 1, 2016 - June 30, 2017 data set.
2. Core measures from CMS Hospital Compare Jan 1, 2017 - Dec 31, 2017 data set.
3. We do not calculate percent difference for these measures because they are already percent values.
(Medians: winners | nonwinners | difference | percent difference)
30-Day Hospital-Wide Readmission Rate(1) | 14.9 | 15.3 | -0.4 | n/a(5) | fewer 30-day readmissions
30-Day AMI Episode Payment(2) | $23,726 | $23,890 | -$164.50 | -0.7% | lower episode cost
30-Day Heart Failure Episode Payment(2) | $16,815 | $16,604 | $211 | 1.3% | higher episode cost
30-Day Pneumonia Episode Payment(2) | $17,874 | $17,502 | $372 | 2.1% | higher episode cost
90-Day THA/TKA Episode Payment(3) | $21,454 | $21,732 | -$278 | -1.3% | lower episode cost
90-Day THA/TKA Complications Rate(3) | 2.4 | 2.6 | -0.2 | n/a(5) | fewer complications
30-Day AMI Excess Days in Acute Care(4) | -5.8 | 5.4 | -11.2 | -206% | fewer days in acute care
30-Day Heart Failure Excess Days in Acute Care(4) | -9.1 | 5.9 | -15.0 | -254% | fewer days in acute care
30-Day Pneumonia Excess Days in Acute Care(4) | -6.0 | 7.2 | -13.2 | -183% | fewer days in acute care
1. 30-day hospital-wide readmission rate from CMS Hospital Compare July 1, 2016 - June 30, 2017 data set.
2. 30-day episode payment metrics from CMS Hospital Compare July 1, 2014 - June 30, 2017 data set.
3. 90-day THA/TKA payment and complication rate from CMS Hospital Compare April 1, 2014 - March 31, 2017 data set.
4. 30-day excess days in acute care metrics from CMS Hospital Compare July 1, 2014 - June 30, 2017 data set.
5. We do not calculate percent difference for this measure because it is already a percent value.
–– Benchmark CAHs were also strong in risk-
adjusted inpatient mortality, with 56% fewer
deaths than expected, compared to peer
hospitals with as many deaths as expected
(median index values of 0.44 and 1.00,
respectively)
–– For risk-adjusted complications, benchmark
CAHs had an index value 43.9% lower than
peers, while both had median values below 1,
reflecting fewer complications than expected
–– Average LOS was also 19.3% shorter at
benchmark CAHs, where patients left the
hospital almost a full day sooner than in
peer hospitals (2.8 days versus 3.5 days,
respectively)
–– Pneumonia 30-day rates were better at
benchmark facilities; the biggest difference
found was in the 30-day mortality rate (14.9%
versus 15.8%)
1. Mortality, complications and average length of stay based on Present on Admission (POA)-enabled risk models applied to MedPAR 2016 and 2017 data (ALOS 2017 only).
2. 30-day rates from CMS Hospital Compare July 1, 2014-June 30, 2017 data set.
3. Operating profit margin data from CMS Hospital Cost Report Information System (HCRIS) data file, 2017.
4. We do not calculate percent difference for these measures because they are already a percent value.
This 100 Top Hospitals study includes only short-term, nonfederal, acute care US hospitals that treat a broad spectrum of patients.

The main steps we take in selecting the 100 Top Hospitals are:

–– Building the database of hospitals, including special selection and exclusion criteria
–– Classifying hospitals into comparison groups by size and teaching status
–– Scoring hospitals on a balanced scorecard of 10 performance measures across five domains
–– Determining 100 Top Hospitals by ranking hospitals relative to their comparison groups

The following section is intended to be an overview of these steps. To request more detailed information on any of the study methodologies outlined here, email us at 100tophospitals@us.ibm.com or call 800-525-9083.

We use MEDPAR patient-level demographic, diagnosis, and procedure information to calculate inpatient mortality, complications, and length of stay (LOS). The MEDPAR data set contains information on the approximately 15 million Medicare patients discharged annually from US acute care hospitals. In this study, we used the most recent two federal fiscal years of MEDPAR data available (2016 and 2017), which include Medicare Advantage (HMO) encounters*, to identify current performance and to select the winning hospitals. To be included in the study, a hospital must have the two most current years of data available, with valid present-on-admission (POA) coding. Hospitals that file Medicare claims jointly with other hospitals under one provider number were analyzed as one organization. Six years of MEDPAR data were used to develop the study trend database (2012-2017).

The 100 Top Hospitals program has used the MEDPAR database for many years. We believe it to be an accurate and reliable source for the types of high-level analyses performed in this study.
* The MEDPAR data years quoted in 100 Top Hospitals research are federal fiscal years (FFYs), a year that begins on October 1 of each calendar year and ends on September 30
of the following calendar year. FFYs are identified by the year in which they end (for example, FFY 2017 begins October 1, 2016, and ends September 30, 2017). Data for all CMS
Hospital Compare measures is provided in calendar years, except the 30-day rates. CMS publishes the 30-day rates as three-year combined data values. We label these data
points based on the end date of each data set. For example, July 1, 2014 - June 30, 2017, is named “2017.”
35
Note: To identify the Everest Award winners, we also reviewed the most recent five years of data, 2013 through 2017, to study the rate of change in performance through the years. To read more about the Everest Award methodology, see the special Everest Award section of this document. For specific data sources for each performance measure, see the table on page 47.

We use Medicare Cost Reports to create our 100 Top Hospitals database, which contains hospital-specific demographic information and hospital-specific, all-payer revenue and expense data. The Medicare Cost Report is filed annually by every US hospital that participates in the Medicare program. Hospitals are required to submit cost reports to receive reimbursement from Medicare. It should be noted that the Medicare Cost Report includes all hospital costs, not just costs associated with Medicare beneficiaries.

The Medicare Cost Report promotes comparability of costs and efficiency among hospitals in reporting. We used hospital 2017 cost reports published in the federal Healthcare Cost Report Information System (HCRIS) 2018 third-quarter data set for this study. If we did not have a complete 2017 cost report for a hospital, we excluded the hospital from the study.

In this study, we used CMS Hospital Compare data sets published in the third quarter of 2018 for healthcare-associated infection (HAI) measures, 30-day mortality rates, 30-day readmission rates, emergency department (ED) throughput measures, and Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) patient experience-of-care data. We used the 2017 data point to identify current performance and to select the winning hospitals. Five data points, 2013 through 2017, were used to develop the study trend database.

Note: Due to the lack of updated data for Medicare Spend per Beneficiary (MSPB) in the CMS Hospital Compare data set, that measure was dropped from the ranked metrics this year. However, last year's performance and improvement graphs will be published in the reports for informational purposes.

We also used residency program information to classify hospitals. This comes from the Accreditation Council for Graduate Medical Education (ACGME) and the American Osteopathic Association (AOA)*.

Risk- and severity-adjustment models
The IBM Watson Health™ proprietary risk- and severity-adjustment models for inpatient mortality, complications, and LOS have been recalibrated for this study release using FFY 2015 data available in the all-payer Watson Health Projected Inpatient Database (PIDB). The PIDB is one of the largest US inpatient, all-payer databases of its kind, containing approximately 23 million inpatient discharges annually, obtained from approximately 5,000 hospitals, which comprise more than 65% of the nonfederal US market. Watson Health risk- and severity-adjustment models take advantage of available POA coding that is reported in all-payer data. Only patient conditions that are present on admission are used to determine the probability of death, complications, or the expected LOS.

The recalibrated models were used in producing the risk-adjusted inpatient mortality and complications indexes, based on two years of MEDPAR data (2016 and 2017). The severity-adjusted LOS was produced based on MEDPAR 2017 data.
* We obtain AMA graduate medical education program data directly from the ACGME. This year’s study is based on the ACGME files for 2016/2017 hospital residency programs.
AOA residency information is collected from the AOA website (opportunities.osteopathic.org). In addition, we consult online information about graduate medical education
programs from the Fellowship and Residency Electronic Interactive Database Access (FREIDA) and hospital websites to confirm program participation.
* In the 2019 study, critical access hospitals (CAHs) that had valid data for six measures were included in a separate analysis to provide national benchmark performance
comparisons for them. See page 33 for details on the CAH analysis.
37
Classifying hospitals into comparison groups
Bed size, teaching status, and extent of residency/fellowship program involvement can affect the types of patients a hospital treats and the scope of services it provides. When analyzing the performance of an individual hospital, it is important to evaluate it against other similar hospitals. To address this, we assigned each hospital to one of five comparison groups, according to its size and teaching status.

Our classification methodology draws a distinction between major teaching hospitals and teaching hospitals by reviewing the number and type of teaching programs, and by accounting for level of involvement in physician education and research through evidence of program sponsorship versus simple participation. This methodology de-emphasizes the role of bed size and focuses more on teaching program involvement. Using this approach, we seek to measure both the depth and breadth of teaching involvement and recognize teaching hospitals' tendencies to reduce beds and concentrate on tertiary care.

Our formula for defining the teaching comparison groups includes each hospital's bed size, residents*-to-acute-care-beds ratio, and involvement in graduate medical education (GME) programs accredited by either the ACGME or the AOA. The definition includes both the number of programs and type (sponsorship or participation) of GME program involvement. In this study, AOA residency program involvement is treated as being equivalent to ACGME program sponsorship.

The five comparison groups and their parameters are as follows:

Major teaching hospitals
There are three ways to qualify:
1. 400 or more acute care beds in service, plus a resident*-per-bed ratio of at least 0.25, plus
–– Sponsorship of at least 10 GME programs, or
–– Involvement in at least 20 programs overall
2. Involvement in at least 30 GME programs overall (regardless of bed size or resident*-per-bed ratio)
3. A resident*-per-bed ratio of at least 0.60 (regardless of bed size or GME program involvement)

Teaching hospitals
–– 200 or more acute care beds in service, and
–– Either a resident*-per-bed ratio of at least 0.03 or involvement in at least three GME programs overall

Large community hospitals
–– 250 or more acute care beds in service, and
–– Not classified as a teaching hospital per definitions above

Medium community hospitals
–– 100 to 249 acute care beds in service, and
–– Not classified as a teaching hospital per definitions above

Small community hospitals
–– 25 to 99 acute care beds in service, and
–– Not classified as a teaching hospital per definitions above
* We include interns, residents, and fellows reported in full-time employees (FTEs) on the hospital cost report.
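The five comparison-group definitions above can be sketched as a single classification function. This is an illustrative sketch, not the study's code; the function and parameter names are ours, and AOA involvement is assumed to be already folded into the program counts as the text describes:

```python
# Illustrative sketch of the comparison-group rules above (names are ours).
# beds: acute care beds in service; resident_ftes: intern/resident/fellow FTEs;
# sponsored: GME programs sponsored; programs: GME programs involved in overall.
def comparison_group(beds, resident_ftes, sponsored, programs):
    ratio = resident_ftes / beds if beds else 0.0
    major = (
        (beds >= 400 and ratio >= 0.25 and (sponsored >= 10 or programs >= 20))
        or programs >= 30
        or ratio >= 0.60
    )
    if major:
        return "major teaching"
    if beds >= 200 and (ratio >= 0.03 or programs >= 3):
        return "teaching"
    if beds >= 250:
        return "large community"
    if 100 <= beds <= 249:
        return "medium community"
    if 25 <= beds <= 99:
        return "small community"
    return "not classified"
```

Note that the teaching tests must run before the community-size tests, since community groups apply only to hospitals not classified as teaching.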
As the healthcare industry has changed, our methods have evolved. Our current measures are centered on five main components of hospital performance: inpatient outcomes, extended outcomes, operational efficiency, financial health, and patient experience.

The 10 measures included in the 2019 study, by performance domain, are:

Inpatient outcomes
1. Risk-adjusted inpatient mortality index
2. Risk-adjusted complications index
3. Mean HAI index

Extended outcomes
4. Mean 30-day risk-adjusted mortality rate (includes acute myocardial infarction [AMI], heart failure [HF], pneumonia, chronic obstructive pulmonary disease [COPD], and stroke)
5. Mean 30-day risk-adjusted readmission rate (includes AMI, HF, pneumonia, total hip and knee arthroplasty [THA/TKA], COPD, and stroke)

Following is the rationale for the selection of our balanced scorecard domains and the measures used for each.

Inpatient outcomes
Our inpatient outcomes domain includes three measures: risk-adjusted mortality index, risk-adjusted complications index, and mean healthcare-associated infection index. These measures show us how the hospital is performing on what we consider to be the most basic and essential care standards (survival, error-free care, and avoidance of infections) while treating patients in the hospital.

Extended outcomes
The extended outcomes measures (30-day mortality rates for AMI, HF, pneumonia, COPD, and stroke patients; and 30-day readmission rates for AMI, HF, pneumonia, THA/TKA, COPD, and stroke patients) help us understand how the hospital's patients are faring over a longer period29. These measures are part of the CMS Hospital Value-Based Purchasing Program and are reported upon widely in the industry. Hospitals with lower values appear to be providing or coordinating the care continuum with better medium-term results for these conditions.
39
As hospitals become more interested in contracting for population health management, we believe that understanding outcomes beyond the walls of the acute care setting is imperative. We are committed to adding new metrics that assess performance along the continuum of care as they become publicly available.

Operational efficiency
The operational efficiency domain includes severity-adjusted average LOS, ED throughput, and inpatient expense per discharge. Average LOS serves as a proxy for clinical efficiency in an inpatient setting, while the ED throughput measures focus on process efficiency in one of the most important access points to hospital care.

Average LOS is adjusted to increase the validity of comparisons across the hospital industry. We use a Watson Health proprietary severity-adjustment model to determine expected LOS at the patient level. Patient-level observed and expected LOS values are used to calculate the hospital-level, severity-adjusted, average LOS.

For ED throughput, we use the mean of the reported median minutes for two critical processes: median time from ED arrival to ED departure for admitted patients, and median time from ED arrival to ED departure for non-admitted patients.

… data (calendar year 2017) for this measure. Instead of using last year's data (CY 2016), we opted to drop it from the ranking, but still provide the performance and improvement graphs for informational purposes in the report, using last year's data.

Financial health
Currently, we have one measure of hospital financial health: adjusted operating profit margin. The operating profit margin is a measure of management's ability to operate within current financial constraints and provides an indicator of the hospital's financial health. We adjust operating profit margin for net related organization expense, as reported on the hospital cost report, to provide a more accurate measure of a hospital's profitability. See Appendix C for details on the calculation of this measure.

Previous studies included measures of hospital liquidity and asset management. We retired these measures as more and more hospitals became part of health systems. Health system accounting practices often recognize hospitals as units of the system, with no cash or investment assets of their own. Moreover, hospitals in health systems are often reported as having no debt in their own name. Using public data, there is no effective way to accurately measure liquidity or other balance sheet-related measures of financial health.
Performance measures
41
Risk-adjusted complications index
Favorable values are: Lower

Why we include this element
Keeping patients free from potentially avoidable complications is an important goal for all healthcare providers. A lower complications index indicates fewer patients with complications, considering what would be expected based on patient characteristics. Like the mortality index, this measure can show where complications did not occur but were expected, or the reverse, given the patient's condition.

Calculation
We calculate an index value based on the number of cases with complications (for this study, in 2016 and 2017), divided by the number expected, given the risk of complications for each patient. We use our proprietary expected complications risk index models to determine expected complications. These models account for patient-level characteristics (age, sex, principal diagnosis, comorbid conditions, and other characteristics). Complication rates are calculated from normative data for two patient risk groups: medical and surgical. We normalize the expected value based on the observed and expected complications for each comparison group. POA coding is used in the risk model to identify pre-existing conditions for accurate assessment of patient severity and to distinguish them from complications occurring during hospitalization. For more details, see Appendix C.

Comments
We rank hospitals on the difference between the observed and expected number of patients with complications, expressed in normalized standard deviation units (z-score). We used two years of MEDPAR data (for this study, 2016 and 2017) to reduce the influence of chance fluctuation. The MEDPAR data set includes both Medicare fee-for-service claims and Medicare Advantage (HMO) encounter records. Hospitals with observed values statistically worse than expected (99% confidence), and whose values are above the high trim point (75th percentile of statistical outliers), are not eligible to be named benchmark hospitals.
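The index arithmetic behind this and the mortality measure is simply observed events divided by expected events (see the index interpretation discussion in Appendix C). A minimal sketch, using the worked examples from that section:

```python
# An outcome index is the ratio of observed to expected events; 1.0 means
# the hospital saw exactly as many events as its patient mix predicts.
def outcome_index(observed, expected):
    return observed / expected
```

For example, outcome_index(10, 25) returns 0.4: 60% fewer events than expected based on the normative experience.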
Mean 30-day risk-adjusted mortality rate (AMI, HF, pneumonia, COPD, and stroke patients)
Favorable values are: Lower

Why we include this element
30-day mortality rates are a widely accepted measure of the effectiveness of hospital care. They allow us to look beyond immediate inpatient outcomes and understand how the care the hospital provided to inpatients with these conditions may have contributed to their longer-term survival. Because these measures are part of the CMS Hospital Value-Based Purchasing Program, they are being watched closely in the industry. In addition, tracking these measures may help hospitals identify patients at risk for post-discharge problems and target improvements in discharge planning and aftercare processes. Hospitals that score well may be better prepared for a pay-for-performance structure.

Calculation
Data is from the CMS Hospital Compare data set. CMS calculates a 30-day mortality rate (all-cause deaths within 30 days of admission, per 100 patients) for each patient condition using three years of MEDPAR data, combined. For this study, we included data for the July 1, 2014, through June 30, 2017, data set. CMS does not calculate rates for hospitals where the number of cases is too small (less than 25). In these cases, we substitute the comparison group-specific median rate for the affected 30-day mortality measure. For more information about this data, see Appendix C. We calculate the arithmetic mean of the included 30-day mortality rates (AMI, HF, pneumonia, COPD, and stroke).

Comments
We rank hospitals by comparison group, based on the mean rate for included 30-day mortality measures (AMI, HF, pneumonia, COPD, and stroke). The CMS Hospital Compare data for 30-day mortality is based on Medicare fee-for-service claims only. For more information, see Appendix C.
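The mean-with-substitution rule in the Calculation column can be sketched as follows. The field names are ours, and a rate of None stands in for a CMS-suppressed, under-25-case measure:

```python
# Mean 30-day rate across conditions, substituting the comparison-group
# median when CMS suppressed a hospital's rate (fewer than 25 cases).
def mean_30day_rate(hospital_rates, group_medians):
    values = [rate if rate is not None else group_medians[condition]
              for condition, rate in hospital_rates.items()]
    return sum(values) / len(values)
```

The same arithmetic applies to the readmission measure, with THA/TKA added to the condition list.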
43
Mean 30-day risk-adjusted readmission rate (AMI, HF, pneumonia, THA/TKA, COPD, and stroke patients)
Favorable values are: Lower

Why we include this element
30-day readmission rates are a widely accepted measure of the effectiveness of hospital care. They allow us to understand how the care the hospital provided to inpatients with these conditions may have contributed to issues with their post-discharge medical stability and recovery. These measures are being watched closely in the industry. Tracking these measures may help hospitals identify patients at risk for post-discharge problems if discharged too soon, as well as target improvements in discharge planning and aftercare processes. Hospitals that score well may be better prepared for a pay-for-performance structure.

Calculation
Data is from the CMS Hospital Compare data set. CMS calculates a 30-day readmission rate (all-cause readmissions within 30 days of discharge, per 100 patients) for each patient condition using three years of MEDPAR data, combined. For this study, we included data for the July 1, 2014, through June 30, 2017, data set. CMS does not calculate rates for hospitals where the number of cases is too small (less than 25). In these cases, we substitute the comparison group-specific median rate for the affected 30-day readmission measure. For more information about this data, see Appendix C. We calculate the arithmetic mean of the included 30-day readmission rates (AMI, HF, pneumonia, THA/TKA, COPD, and stroke).

Comments
We rank hospitals by comparison group, based on the mean rate for included 30-day readmission measures (AMI, HF, pneumonia, THA/TKA, COPD, and stroke).
45
Adjusted operating profit margin
Favorable values are: Higher

Why we include this element
Operating profit margin is one of the most straightforward measures of a hospital's financial health. It is a measure of the amount of income a hospital is taking in versus its expenses.

Calculation
This measure uses Medicare Cost Report data for hospital cost reports (for this study, reports ending in calendar year 2017). We calculate the adjusted operating profit margin by determining the difference between a hospital's total operating revenue and total operating expense, expressed as a percentage of its total operating revenue, adjusted for net related organization expense. Total operating revenue is the sum of net patient revenue plus other operating revenue. Total operating expense is the sum of operating expense and net related organization expense. See Appendix C for detailed calculations and the Medicare Cost Report locations (worksheet, line, and column) for each calculation element.

Comments
We adjust hospital operating expense for net related organization expense to obtain a true picture of operating costs. Net related organization expense includes the net of costs covered by the hospital on behalf of another organization and costs covered by another organization on behalf of the hospital. We rank hospitals on their adjusted operating profit margin. Hospitals with extreme outlier values for this measure are not eligible to be named benchmark hospitals.
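The calculation described above reduces to a few lines. This sketch uses our own variable names rather than the cost-report worksheet locations (those are in Appendix C):

```python
# Adjusted operating profit margin, as a percent of total operating revenue.
def adjusted_operating_margin(net_patient_rev, other_operating_rev,
                              operating_expense, net_related_org_expense):
    total_revenue = net_patient_rev + other_operating_rev
    total_expense = operating_expense + net_related_org_expense
    return 100.0 * (total_revenue - total_expense) / total_revenue
```

For example, a hospital with $90M net patient revenue, $10M other operating revenue, $92M operating expense, and $3M net related organization expense has an adjusted margin of 5%.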
Hospital Consumer Assessment of Healthcare Providers and Systems score (overall hospital rating)
Favorable values are: Higher

Why we include this element
We believe that including a measure of patient assessment/perception of care is crucial to the balanced scorecard concept. How patients perceive the care a hospital provides has a direct effect on its ability to remain competitive in the marketplace.

Calculation
Data is from the CMS Hospital Compare data set. For this study, we included the HCAHPS results for calendar year 2017. We use the HCAHPS survey instrument question, "How do patients rate the hospital, overall?" to score hospitals. Patient responses fall into three categories, and the number of patients in each category is reported as a percent:
–– Patients who gave a rating of 6 or lower (low)
–– Patients who gave a rating of 7 or 8 (medium)
–– Patients who gave a rating of 9 or 10 (high)

Comments
We rank hospitals based on the weighted percent sum, or HCAHPS score. The highest possible HCAHPS score is 300 (100% of patients rate the hospital high). The lowest HCAHPS score is 100 (100% of patients rate the hospital low). See Appendix C for full details. HCAHPS data is survey data, based on either a sample of hospital inpatients or all inpatients. The data set contains the question scoring of survey respondents.
* Two years of data is combined for each study year data point.
** The HAI measure is not included in the small community hospital group ranked metrics.
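A weighted percent sum consistent with the stated bounds (100 when every patient rates the hospital low, 300 when every patient rates it high) weights the three response categories 1, 2, and 3. The exact weighting is our assumption; Appendix C of the study has the full details:

```python
# HCAHPS overall-rating score: weighted sum of the three response percentages.
# The 1/2/3 weights are assumed from the stated 100-300 score range.
def hcahps_score(pct_low, pct_medium, pct_high):
    assert abs(pct_low + pct_medium + pct_high - 100.0) < 1e-9
    return 1 * pct_low + 2 * pct_medium + 3 * pct_high
```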
47
Determining the 100 Top Hospitals

Eliminating outliers
Within each of the five hospital comparison groups, we rank hospitals based on their performance on each of the measures relative to other hospitals in their group. Prior to ranking, we use three methods of identifying hospitals that were performance outliers. These hospitals are not eligible to be named winners.

Interquartile range methodology
We use the interquartile range methodology to identify hospitals with extreme outlier values for the following measures:
–– Case mix- and wage-adjusted inpatient expense per discharge (high or low outliers)
–– Adjusted operating profit margin (high and low outliers)

This is done to avoid the possibility of hospitals with a high probability of having erroneous cost report data being declared winners. For more information on the interquartile range methodology, see Appendix C.

Mortality and complications outliers
For mortality and complications, which have observed and expected values, we identify hospitals with performance that is statistically worse than expected. Hospitals that are worse than expected are excluded from consideration when we select the study winners. This is done because we do not want hospitals that have poor clinical outcomes to be declared winners.

A hospital is winner-excluded if both of the following conditions apply:
1. Observed value is higher than expected, and the difference is statistically significant with 99% confidence. When a hospital's observed value is 30 or greater, we use the approximate binomial confidence interval methodology. When a hospital's observed value is less than 30, we use the exact mid-p binomial confidence interval methodology. If the hospital's low confidence interval index value is greater than or equal to 1.0, the hospital is statistically worse than expected with 99% confidence.
2. We calculate the 75th percentile index value for mortality and complications, including data only for hospitals that meet condition 1. These values are used as the high trim points for those hospitals. Hospitals with mortality or complications index values above the respective trim points are winner-excluded.

Hospitals with a negative operating profit margin
We identify hospitals with a negative adjusted operating profit margin as outliers. This is done because we do not want hospitals that fail to meet this basic financial responsibility to be declared winners.

Ranking
Within the five hospital comparison groups, we rank hospitals on the basis of their performance on each of the performance measures independently, relative to other hospitals in their comparison group. Each performance measure is assigned a weight for use in overall ranking (see table below). Each hospital's weighted performance measure ranks are summed to arrive at a total score for the hospital. The hospitals are then ranked based on their total scores, and the hospitals with the best overall rankings in each comparison group are selected as the winners.
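The rank-and-weight aggregation just described can be sketched as below. The weights and measure names are illustrative placeholders (the study's actual weights appear in its weight table), and for simplicity every measure here is treated as lower-is-better:

```python
# Rank hospitals on each measure, weight the ranks, and sum to a total score;
# the lowest total is the best overall performer in the comparison group.
def overall_ranking(scores, weights):
    # scores: {hospital: {measure: value}}, lower values ranking better
    totals = {h: 0.0 for h in scores}
    for measure, weight in weights.items():
        ordered = sorted(scores, key=lambda h: scores[h][measure])
        for rank, hospital in enumerate(ordered, start=1):
            totals[hospital] += weight * rank
    return sorted(totals, key=totals.get)
```

A full implementation would flip the sort direction for higher-is-better measures such as HCAHPS score and adjusted operating profit margin.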
* HAI metrics are not ranked for small community hospitals. For this comparison group only, 2017 weights for inpatient mortality,
complications, 30-day mortality, and 30-day readmission ranks were increased to 1.25 to balance quality and operational group weights.
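The interquartile-range trimming described under "Eliminating outliers" above can be sketched as follows. The 1.5 × IQR fences are a conventional choice we assume for illustration; the study's exact trim points are documented in Appendix C:

```python
# Flag extreme high or low values using interquartile-range fences.
def iqr_outliers(values, k=1.5):
    s = sorted(values)

    def quantile(p):  # linear interpolation between order statistics
        i = p * (len(s) - 1)
        lo = int(i)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (i - lo) * (s[hi] - s[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    low_fence = q1 - k * (q3 - q1)
    high_fence = q3 + k * (q3 - q1)
    return [v for v in values if v < low_fence or v > high_fence]
```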
49
Appendix A
Distribution of winners by state and region

Winners by state
State - Current study / Previous study
Alabama 0 0
Alaska 0 0
Arizona 2 3
Arkansas 1 0
California 7 6
Colorado 5 6
Connecticut 1 0
Delaware 1 0
District of Columbia 0 0
Florida 9 6
Georgia 2 0
Hawaii 0 0
Idaho 1 3
Illinois 6 8
Indiana 7 5
Iowa 2 2
Kansas 0 2
Kentucky 0 0
Louisiana 0 3
Maine 0 0
Maryland 1 0*
Massachusetts 0 0
Michigan 6 4
Minnesota 4 1
Mississippi 0 0
Missouri 1 2
Montana 0 1
Nebraska 0 0
Nevada 0 0
New Hampshire 0 0
New Jersey 1 0
New Mexico 0 0
New York 0 0
North Carolina 0 2
North Dakota 0 0
Ohio 8 15
Oklahoma 3 3
Oregon 0 1
Pennsylvania 5 6
Rhode Island 1 0
South Carolina 1 2
South Dakota 0 0
Tennessee 1 1
Texas 9 9
Utah 10 3
Vermont 0 0
Virginia 1 2
Washington 0 0
West Virginia 0 0
Wisconsin 4 4
Wyoming 0 0

Winners by region
US Census region - Current study / Previous study
Northeast 8 6
Midwest 38 43
South 29 28
West 25 23

* Maryland hospitals were winner-excluded due to missing Medicare spend per beneficiary (MSPB) measure.
51
Appendix B
States included in each US Census region

Northeast: Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, Vermont

Midwest: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, Wisconsin

South: Alabama, Arkansas, Delaware, District of Columbia, Florida, Georgia, Kentucky, Louisiana, Maryland, Mississippi, North Carolina, Oklahoma, South Carolina, Tennessee, Texas, Virginia, West Virginia

West: Alaska, Arizona, California, Colorado, Hawaii, Idaho, Montana, Nevada, New Mexico, Oregon, Utah, Washington, Wyoming
53
Appendix C: Normative database development
Watson Health constructed a normative database
55
In addition to considering the POA indicator codes in calibration of our risk- and severity-adjustment models, we have adjusted for missing/invalid POA coding found in the Medicare Provider Analysis and Review (MEDPAR) data files. After 2010, we have observed a significantly higher percentage of principal diagnosis and secondary diagnosis codes that do not have a valid POA indicator code in the MEDPAR data files. Since 2011, an invalid code of "0" has been appearing. This phenomenon has led to an artificial rise in the number of conditions that appear to be occurring during the hospital stay, as invalid POA codes are treated as "not present" by POA-enabled risk models.

To correct for this bias, we adjusted MEDPAR record processing through our mortality, complications, and LOS models as follows:

1. Original, valid (Y, N, U, W, or 1) POA codes assigned to diagnoses were retained
2. Where a POA code of "0" appeared, we took the following four steps:
a. We treated all diagnosis codes on the CMS exempt list as "exempt," regardless of POA coding
b. We treated all principal diagnoses as "present on admission"
c. We treated secondary diagnoses where the POA code "Y" or "W" appeared more than 50% of the time in Watson Health's all-payer database as "present on admission"
d. All others were treated as "not present"
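As a sketch, the fallback rules above can be expressed as one function. The inputs are hypothetical: the diagnosis's POA code, whether it is the principal diagnosis, whether it is on the CMS exempt list, and the share of "Y"/"W" coding for that diagnosis in an all-payer reference:

```python
VALID_POA = {"Y", "N", "U", "W", "1"}

def effective_poa(code, is_principal, cms_exempt, yw_share):
    if code in VALID_POA:
        return code              # step 1: valid codes are retained
    if cms_exempt:
        return "exempt"          # step 2a
    if is_principal:
        return "Y"               # step 2b: treated as present on admission
    if yw_share > 0.5:
        return "Y"               # step 2c
    return "N"                   # step 2d: treated as not present
```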
Percentage of diagnosis codes with POA indicator code of “0” by MEDPAR year
2010 2011 2012 2013 2014 2015 2016 2017
Principal diagnosis 0.00% 4.26% 4.68% 4.37% 3.40% 4.99% 2.45% 3.96%
Secondary diagnosis 0.00% 15.05% 19.74% 22.10% 21.58% 23.36% 21.64% 24.11%
57
Excluding records that are DNR status at admission Expected complications rate index models
is supported by the literature. A recent peer- Watson Health has developed a complications
reviewed publication stated: “Inclusion of DNR risk model that can be applied to coded patient
patients within mortality studies likely skews those claims data to estimate the expected probability
analyses, falsely indicating failed resuscitative of a complication occurring, given various
efforts rather than humane decisions to limit care patient-related factors. We exclude long-term
after injury”38. Our rationale is straightforward: If care, psychiatric, substance abuse, rehabilitation,
a patient is admitted DNR (POA), then typically no and federally owned or controlled facilities. In
heroic efforts would be made to save that patient if addition, we exclude certain patient records
they began to fail. Without the POA DNR exclusion, from the data set: psychiatric; substance abuse;
if a given hospital has a higher proportion of POA unclassified cases (MS-DRGs 945, 946, and
DNR patients that it is not attempting to save from 999); cases in which patient age was less than 65
death compared to an otherwise similar hospital years; and cases in which a patient transferred to
that is not admitting as high a proportion of such patients, the first hospital would look lower-performing compared to the second through no fault of its own. The difference would be driven by the proportion of POA DNR patients.

A standard logistic regression model is used to estimate the risk of mortality for each patient. This is done by weighting the patient records of the hospital by the logistic regression coefficients associated with the corresponding terms in the model and the intercept term. This produces the expected probability of an outcome for each eligible patient (numerator) based on the experience of the norm for patients with similar characteristics (for example, age, clinical grouping, and severity of illness)32–36. This model accounts only for patient conditions that are present on admission when calculating risk. Additionally, in response to the transition to ICD-10-CM, diagnosis and procedure codes, and the interactions among them, have been mapped to AHRQ CCS categories for assignment of risk instead of using the individual diagnosis, procedure, and interaction effects. See the discussion under the methods for identifying patient severity above.

another short-term, acute care hospital. Palliative care patients (Z51.5; V66.7) are included in the complications risk model, which is calibrated to estimate the probability of complications for these patients.

Note: We are no longer able to exclude all rehabilitation patients as we have done in the past. This is because the ICD-10-CM coding system does not identify rehabilitation patients. We can only exclude those patients coded as being in a PPS-exempt hospital rehabilitation unit (provtype = R or T).

Risk-adjusted complications refer to outcomes that may be of concern when they occur at a greater-than-expected rate among groups of patients, possibly reflecting systemic quality-of-care issues. The Watson Health complications model uses clinical qualifiers to identify complications that have occurred in the inpatient setting. The complications used in the model are listed on the following page.
A standard regression model is used to estimate the risk of experiencing a complication for each patient. This is done by weighting the patient records of the hospital by the regression coefficients associated with the corresponding terms in the prediction models and the intercept term. This method produces the expected probability of a complication for each patient based on the experience of the norm for patients with similar characteristics. After assigning the predicted probability of a complication for each patient in each risk group, it is then possible to aggregate the patient-level data across a variety of groupings39–42, including health system, hospital, service line, or MS-DRG classification. This model accounts only for patient conditions that are present on admission when calculating risk. Additionally, in response to the transition to ICD-10-CM, diagnosis and procedure codes, and the interactions among them, have been mapped to AHRQ CCS categories for assignment of risk instead of using the individual diagnosis, procedure, and interaction effects.

Index interpretation
An outcome index is a ratio of an observed number of outcomes to an expected number of outcomes in a population. This index is used to make normative comparisons and is standardized in that the expected number of events is based on the occurrence of the event in a normative population. The normative population used to calculate expected numbers of events is selected to be similar to the comparison population with respect to relevant characteristics, including age, sex, region, and case mix.

Examples:

10 events observed ÷ 10 events expected = 1.0: The observed number of events is equal to the expected number of events based on the normative experience.

10 events observed ÷ 5 events expected = 2.0: The observed number of events is twice the expected number of events based on the normative experience.

10 events observed ÷ 25 events expected = 0.4: The observed number of events is 60% lower than the expected number of events based on the normative experience.

Therefore, an index value of 1.0 indicates no difference between observed and expected outcome occurrence. An index value greater than 1.0 indicates an excess in the observed number of events relative to the expected based on the normative experience. An index value of less than 1.0 indicates fewer events observed than would be expected based on the normative experience. An additional interpretation is that the difference between 1.0 and the index is the percentage difference in the number of events relative to the norm. In other words, an index of 1.05 indicates 5% more outcomes, and an index of 0.90 indicates 10% fewer outcomes than expected based on the experience of the norm. The index can be calculated across a variety of groupings (for example, hospital or service line).
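The worked examples above translate directly into code. A minimal sketch (the function and variable names are ours, not the study's):

```python
def outcome_index(observed, expected):
    """Ratio of observed to expected events; 1.0 means performance
    in line with the normative population."""
    if expected <= 0:
        raise ValueError("expected count must be positive")
    return observed / expected

def describe(index):
    """Express an outcome index as a percent difference from the norm."""
    pct = (index - 1.0) * 100.0
    if pct > 0:
        return f"{pct:.0f}% more events than expected"
    if pct < 0:
        return f"{-pct:.0f}% fewer events than expected"
    return "as expected"

print(outcome_index(10, 10))  # 1.0: equal to the normative experience
print(outcome_index(10, 5))   # 2.0: twice the expected events
print(outcome_index(10, 25))  # 0.4: 60% fewer events than expected
```

The `describe` helper implements the "difference between 1.0 and the index" reading from the paragraph above.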
HAI measures
HAI-1: CLABSI in ICUs and select wards
HAI-2: CAUTI in intensive care units (ICUs) and select wards
HAI-3: Surgical site infection (SSI): colon
HAI-4: Surgical site infection from abdominal hysterectomy (SSI: hysterectomy)
HAI-5: Methicillin-resistant Staphylococcus aureus (MRSA) blood laboratory-identified events (bloodstream infections)
HAI-6: C. diff laboratory-identified events (intestinal infections)

HAIs included, by comparison group (minimum number required):
Major teaching: HAI-1, HAI-2, HAI-3, HAI-4, HAI-5, HAI-6 (minimum 4)
Teaching: HAI-1, HAI-2, HAI-3, HAI-5, HAI-6 (minimum 4)
Large community: HAI-1, HAI-2, HAI-3, HAI-5, HAI-6 (minimum 4)
Medium community: HAI-1, HAI-2, HAI-6 (minimum 1)
Small community: Not ranked (n/a)

The HAI measures are reported as risk-adjusted standardized infection ratios (SIRs) using probability models and normative data sets maintained by a branch of the Centers for Disease Control and Prevention (CDC), the National Healthcare Safety Network (NHSN). Along with reporting SIR data to CMS, NHSN is responsible for administering HAI surveillance procedures and reporting specifications, along with producing software and training programs for all participating hospitals. Its underlying methodology details for building the SIR are documented and updated annually in a reference guide posted at the CDC website43.

In addition to the SIR values for each HAI, CMS publishes the observed and expected values, as well as a population count (days or procedures), which varies by measure**. We normalize the individual hospital expected values for each HAI by multiplying them by the ratio of the observed to expected values for their comparison group for that HAI.

We calculate a normalized z-score for each HAI, for each hospital, using the observed, normalized expected, and count. We did not calculate a z-score for an individual HAI if CMS did not report a SIR value for that measure in the Hospital Compare data set.
Data note relating to the July 2016 Hospital Compare performance period (July 1, 2012 - June 30, 2015): The pneumonia measure cohort was expanded to include principal discharge codes for sepsis and aspiration pneumonia. This resulted in a significant increase in pneumonia 30-day mortality rates nationally, beginning with the 2015 data year.

To develop a composite HAI measure, we believe it is not appropriate to simply "roll up" observed and expected values across the different HAIs, because the overall observed to expected ratio would be weighted by the rates for each HAI, which could be quite different, and the HAIs are also likely to be distributed differently from hospital to hospital. For these reasons, we calculate an unweighted mean of the normalized z-scores as the composite HAI measure used for ranking hospitals.

For reporting, we calculate an unweighted mean of the CMS SIRs for each hospital. If no value was available for a measure, the composite measure represents the mean of available measures, as long as the hospital had the minimum required number of HAIs for its comparison group. For each HAI, the SIR can be viewed as a unitless measure that is essentially a percent difference; that is, (observed to expected ratio − 1) × 100 = percent difference, which is unbiased by differences in the rates by HAI or distributions of HAIs by hospital. It is methodologically appropriate to ask: What is the average (mean) percent difference between my observed rates of HAIs and the expected rates of those HAIs?
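The reporting composite described above can be sketched as follows. The SIR values and the minimum-required threshold here are illustrative only:

```python
def sir_percent_difference(sir):
    """A SIR is observed/expected, so (SIR - 1) * 100 is the percent
    difference between observed and expected infections."""
    return (sir - 1.0) * 100.0

def composite_hai(sirs, minimum_required):
    """Unweighted mean of the available SIRs: missing measures (None)
    are skipped, but the hospital must still have its comparison
    group's minimum number of HAIs to receive a composite."""
    available = [s for s in sirs if s is not None]
    if len(available) < minimum_required:
        return None  # hospital not given an HAI composite
    return sum(available) / len(available)

# Hypothetical hospital: HAI-4 not reported; minimum of 4 required.
sirs = [0.80, 1.10, 0.95, None, 1.00, 0.75]
mean_sir = composite_hai(sirs, minimum_required=4)
# mean of the 5 available SIRs = 0.92, i.e., on average 8% fewer
# infections than expected
```

Because each SIR is unitless, averaging them avoids the weighting problem of pooling raw observed and expected counts across HAIs with very different rates.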
POA coding allows us to estimate appropriate adjustments to LOS weights based on pre-existing conditions. Complications that occurred during the hospital stay are not considered in the model. We calculate expected values from model coefficients that are normalized to the clinical group and transformed from log scale.

Emergency department throughput measure
We have included two emergency department (ED) throughput measures from the CMS Hospital Compare data set. The hospital ED is an access point to healthcare for many people. A key factor in evaluating ED performance is process "throughput": measures of the timeliness with which patients are seen by a provider, receive treatment, and are either admitted or discharged. Timely ED processes may impact both care quality and the quality of the patient experience. We chose to include measures that define two ED processes: median time from ED arrival to ED departure for admitted patients, and median time from ED arrival to ED departure for non-admitted patients.

For this study's measure, we used 2017 data from CMS Hospital Compare. Hospitals are required to have reported both ED measures or they are excluded from the study. Our ranked metric is the calculated mean of the two included measures.

Hospitals participating in the CMS Inpatient Quality Reporting and Outpatient Quality Reporting Programs report data for any eligible adult ED patients, including Medicare patients, Medicare managed care patients, and non-Medicare patients. Submitted data can be for all eligible patients or a sample of patients, following CMS sampling rules.

ED throughput measures
ED-1b: Average time patients spent in the ED before they were admitted to the hospital as an inpatient
OP-18b: Average time patients spent in the ED before being sent home

Inpatient expense per discharge and operating profit margin measure calculations
For this study, we used hospital-reported data from 2017 Medicare cost reports, available in the Hospital Cost Report Information System 2018 third-quarter data file, to calculate the inpatient expense per discharge and operating profit margin measures. Below you will find our calculations and the cost report locations (worksheet, line, and column) of the data elements for these measures. The line and column references are the standard based on CMS Form 2552-10.

Case mix- and wage-adjusted inpatient expense per discharge:
[((0.62 × acute inpatient expense ÷ CMS wage index) + 0.38 × acute inpatient expense) ÷ acute inpatient discharges] ÷ Medicare case mix index

acute inpatient expense = inpatient expense − subprovider expense − nursery expense − skilled nursing facility expense − intermediate-care facility expense − other long-term care facility expense − cost centers without revenue (for example, organ procurement, outpatient therapy, and other capital-related costs)

inpatient expense = sum over all departments [(inpatient department charges ÷ department charges) × department cost]

Individual element locations in the Medicare Cost Report:
–– Acute inpatient discharges — worksheet S-3, line 14, column 15
–– Inpatient department (cost center) elements
  –– Fully allocated cost — worksheet C, part 1, column 1; if missing, use worksheet B, part 1, column 26
  –– Total charges — worksheet C, part 1, column 8
  –– Inpatient charges — worksheet C, part 1, column 6
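The bracketed formula above can be sketched in code. All input figures here are hypothetical; the 0.62/0.38 split is taken directly from the formula, which deflates only the 0.62 share of expense by the CMS wage index:

```python
def adjusted_inpatient_expense_per_discharge(
        acute_inpatient_expense,    # derived from the worksheets above
        cms_wage_index,
        acute_inpatient_discharges,
        medicare_case_mix_index):
    """Case mix- and wage-adjusted inpatient expense per discharge:
    62% of acute inpatient expense is divided by the CMS wage index,
    38% is left unadjusted, the sum is divided by acute inpatient
    discharges, and the result is divided by the Medicare case mix
    index."""
    wage_adjusted = (0.62 * acute_inpatient_expense / cms_wage_index
                     + 0.38 * acute_inpatient_expense)
    per_discharge = wage_adjusted / acute_inpatient_discharges
    return per_discharge / medicare_case_mix_index

# Hypothetical hospital: $120M acute inpatient expense, wage index
# 1.05, 15,000 acute inpatient discharges, case mix index 1.60.
value = adjusted_inpatient_expense_per_discharge(
    120_000_000, 1.05, 15_000, 1.60)
```

Dividing by the case mix index last expresses the figure per case-mix-adjusted discharge, which is what makes hospitals with different patient mixes comparable.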
–– Other income from investments — worksheet G-3, line 7, column 1
–– Total operating expense — worksheet G-3, line 4, column 1
–– Related organization expense — worksheet A-8, line 12, column 2

Note: When a hospital has already reported the net related organization expense in its total operating expense, we subtract it back out to avoid double-counting. This issue is identified on worksheet G-2 expense additions, lines 30 through 35 (including sublines), where titles contain references to "home office," "related organization," "shared services," "system assessment," "corporate allocation," or "internal allocation."

–– Encourage public reporting of the survey results to create incentives for hospitals to improve quality of care
–– Enhance public accountability in healthcare by increasing the transparency of the quality of hospital care provided in return for the public investment

The HCAHPS survey has been endorsed by the NQF and the Hospital Quality Alliance. The federal government's Office of Management and Budget has approved the national implementation of HCAHPS for public reporting purposes.

Voluntary collection of HCAHPS data for public reporting began in October 2006. The first public reporting of HCAHPS results, which encompassed eligible discharges from October 2006 through
June 2007, occurred in March 2008. HCAHPS results are posted on the Hospital Compare website, found at medicare.gov/hospitalcompare. A downloadable version of HCAHPS results is available.

The HCAHPS data is adjusted by CMS for both survey mode (phone, web, or mail survey) and the patient mix at the discharging facility, since respondents randomized to the phone mode tend to provide more positive evaluations of their care experience than those randomized to the mail survey mode. Details on this adjustment's parameters are available for all facilities with each quarterly update, at hcahpsonline.org.

Although we report hospital performance on all HCAHPS questions, only performance on the overall hospital rating question, "How do patients rate the hospital, overall?" is used to rank hospital performance. Patient responses fall into three categories, and the number of patients in each category is reported as a percent:
–– Patients who gave a rating of 6 or lower (low)
–– Patients who gave a rating of 7 or 8 (medium)
–– Patients who gave a rating of 9 or 10 (high)

For each answer category, we assign a weight as follows: 3 equals high or good performance, 2 equals medium or average performance, and 1 equals low or poor performance. We then calculate a weighted score for each hospital by multiplying the HCAHPS answer percent by the category weight. For each hospital, we sum the weighted percent values for the three answer categories. Hospitals are then ranked by this weighted percent sum. The highest possible HCAHPS score is 300 (100% of patients rate the hospital high). The lowest possible HCAHPS score is 100 (100% of patients rate the hospital low).

Performance measure normalization
The inpatient mortality, complications, and LOS measures are normalized based on the in-study population, by comparison group, to provide a more easily interpreted comparison among hospitals. To address the impact of bed size and teaching status, including the extent of residency program involvement, and to compare hospitals to other like hospitals, we assign each hospital in the study to one of five comparison groups (major teaching, teaching, large community, medium community, and small community hospitals). Detailed descriptions of the hospital comparison groups can be found in the Methodology section of the 100 Top Hospitals study.

For the mortality and complications measures, we base our ranking on the difference between observed and expected events, expressed in standard deviation units (z-scores) that have been normalized. We normalize the individual hospital expected values by multiplying them by the ratio of the observed to expected values for their comparison group. We then calculate the normalized z-score based on the observed and normalized expected values and the patient count.

For the HAI measures, we base our ranking on the unweighted mean of the normalized z-scores for the included HAIs. Included HAIs vary by comparison group; see page 61 for details. We normalize the individual hospital expected values for each HAI by multiplying them by the ratio of the observed to expected values for their comparison group for that HAI. We calculate a normalized z-score for each HAI, for each hospital, using the observed, normalized expected, and count.

For the LOS measure, we base our ranking on the normalized, severity-adjusted LOS index, expressed in days. This index is the ratio of the observed and the normalized expected values for each hospital. We normalize the individual hospital's expected values by multiplying them by the ratio of the observed to expected values for its comparison group. The hospital's normalized index is then calculated by dividing the hospital's observed value by its normalized expected value.
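The normalization step described for the mortality, complications, and HAI measures can be sketched as follows. The binomial variance used in the z-score is an illustrative assumption of ours; the study does not publish its exact variance formula, and all numbers here are hypothetical:

```python
import math

def normalized_expected(expected, group_observed, group_expected):
    """Scale a hospital's expected count by its comparison group's
    observed-to-expected ratio, so the group's overall index is 1.0."""
    return expected * (group_observed / group_expected)

def z_score(observed, expected, patients):
    """Approximate z-score for observed vs. expected events, using a
    binomial variance n*p*(1-p) with p = expected/patients. This
    variance choice is an assumption for illustration only."""
    p = expected / patients
    variance = patients * p * (1.0 - p)
    return (observed - expected) / math.sqrt(variance)

# Hypothetical hospital: 30 observed deaths, 40 expected, 1,000
# patients; its comparison group observed 5% fewer deaths than
# expected overall (950 observed vs. 1,000 expected).
e_norm = normalized_expected(40.0, group_observed=950.0,
                             group_expected=1000.0)
z = z_score(30.0, e_norm, patients=1000.0)
# z is negative: fewer deaths than the normalized expectation
```

A negative z-score means better-than-norm performance on these measures, which is why hospitals are ranked on these standardized differences rather than on raw counts.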
67
Why we have not calculated percent change in
specific instances
Percent change is a meaningless statistic when
the underlying quantity can be positive, negative,
or zero. The actual change may mean something,
but dividing it by a number that may be zero or of
the opposite sign does not convey any meaningful
information because the amount of change is not
proportional to its previous value.
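A quick numeric illustration of the point above, using hypothetical operating margins:

```python
def percent_change(old, new):
    """Classic percent change. Only meaningful when the base value
    (old) is strictly positive."""
    if old == 0:
        raise ZeroDivisionError("percent change undefined for a zero base")
    return (new - old) / old * 100.0

# A hospital's operating margin improves from -2.0% to +1.0% of revenue:
print(percent_change(-2.0, 1.0))  # -150.0: a nonsensical "decrease"

# The actual change (+3.0 percentage points) is meaningful;
# the percent change is not, because the base is negative.
# With a base of exactly zero, the ratio is undefined altogether.
```

This is why the study reports the actual change for such quantities instead of a percent change.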
33. DesHarnais SI, et al. The Risk-Adjusted Mortality Index: A New Measure of Hospital Performance. Medical Care. 26, no. 12 (December 1988): 1129-1148.

42. Weingart SN, et al. Use of Administrative Data to Find Substandard Care: Validation of the Complications Screening Program. Medical Care. 38, no. 8 (August 2000): 796-806.
About IBM Watson Health™
Each day, professionals make powerful progress toward a healthier future. At IBM Watson Health, we help remove obstacles, optimize their efforts, and reveal powerful new insights so they can transform health for the people they serve. Working across the landscape, from payers and providers to government and life sciences, we bring together deep health expertise, proven innovation, and the power of artificial intelligence to enable our clients to uncover, connect, and act on the insights that advance their work — and change the world.

For more information
Visit 100tophospitals.com, call 800-525-9083, option 4, or send an email to 100tophospitals@us.ibm.com.

© Copyright IBM Corporation 2019
IBM Corporation
Software Group
Route 100
Somers, NY 10589
ibm.com/watsonhealth
800-525-9083

Produced in the United States of America, February 2019

IBM, the IBM logo, ibm.com, and Watson Health are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at "Copyright and trademark information" at ibm.com/legal/copytrade.shtml.

This document is current as of the initial date of publication and may be changed by IBM at any time. Not all offerings are available in every country in which IBM operates.

The information in this document is provided "as is" without any warranty, express or implied, including without any warranties of merchantability, fitness for a particular purpose, and any warranty or condition of non-infringement. IBM products are warranted according to the terms and conditions of the agreements under which they are provided.

TOP 0219