
Chapter 1: General Principles

1.4: Quality of Care: Measurement and Improvement


Larry Allen, MD, FACC. Consulting Fees/Honoraria: Ortho-McNeil Janssen Scientific Affairs, Robert Wood Johnson Foundation, Amgen; Research Grants: American Heart Association.
John S. Rumsfeld, MD, PhD, FACC. Consulting Fees/Honoraria: United Healthcare.
Learner Objectives

Upon completion of this module, the reader will be able to:

1. Define quality of care and describe the major domains of high quality health care.
2. Explain the features of quality initiatives that are relevant to practicing clinicians, including delivery of evidence-based medicine, public reporting and reimbursement based on quality metrics, and maintenance of certification and licensure.
3. Compare and contrast the tools of quality, including clinical data standards, clinical practice guidelines, quality metrics, performance measures, and appropriate use criteria (AUC).
4. Describe the health care system features that are necessary to achieve high quality care, including measurement, feedback, system changes, engaged clinicians, and administrative support.

Introduction
Clinicians may perceive the topic of measuring and improving quality of care as one that is largely under the purview of administrators and health policy makers. Instead, clinicians should be highly motivated to understand how to measure and improve quality of care, for a variety of reasons:

Quality of care is evidence-based medicine. Quality measurement and improvement initiatives help clinicians stay current with the best available evidence for therapeutics and care delivery, thereby supporting the practice of evidence-based medicine.

Quality of care is central to lifelong learning, certification, and licensure. Maintenance of certification, as well as state licensure activities, is increasingly centered on quality measurement and improvement. This may include the demonstration of practice improvement.

Quality of care is at the center of health care reform. Consumer groups, hospitals, health care systems, payers, states, the federal government, and other stakeholders are heavily focused on quality of care. There is a particular focus on unexplained variation in care delivery, which is viewed as a marker of variation in quality, as a major contributor to health care expenditures, and as a target for health care reform.

Quality of care is increasingly about accountability. Public reporting and performance-based reimbursement are increasingly based on quality measures for both processes of care and outcomes of care.

Quality of care drives health care system improvement. Quality measurement and improvement initiatives inform changes to the health care system at the practice, hospital, and national levels. This serves to optimize health care delivery and thus provide the best possible care for patients.

Why Are There Concerns About Quality?


The rapid growth of health care spending in the United States has focused increased attention on the quality of care that results from this large commitment of resources. Unfortunately, when the quality of health care in the United States is measured, significant deficiencies are found. Moreover, there is a lack of correlation between higher expenditures and higher quality of care.1 Poor quality can result from a deficiency in any one of the properties of high-quality health care, including unsafe practices, use of ineffective therapies, application of the wrong therapy to the wrong patient, delayed delivery of care, use of resource-intensive care for marginal benefit, and differential health care delivery based strictly on age, gender, race, or ethnicity.

Deficits in the quality of health care are also framed as deriving from three types of shortcomings, each of which may constitute a form of inefficiency.2 Overuse occurs when a service is provided that may not be necessary or may expose the patient to greater potential harm than benefit (i.e., when it is not warranted on medical grounds). Underuse occurs when a service with a favorable benefit-risk ratio is not provided. Misuse includes incorrect diagnoses as well as medical errors and other sources of avoidable complications.

One of the most compelling arguments implicating the efficiency of health care in the United States derives from the marked geographic variation in per capita health care spending, without obvious correlation to measures of health care quality or patient outcomes. The substantial growth in the performance of cardiovascular testing and procedures has been characterized by increasing regional differences, as documented among Medicare beneficiaries in the Dartmouth Atlas of Cardiovascular Health Care.3 Yet those regional differences in use do not appear to translate into significant differences in the performance of well-accepted standards of care or in the health of those communities.4 Furthermore, the Institute of Medicine (IOM) and others have issued several reports documenting the extent of medical errors and their consequences.5,6 Clinicians must recognize that their actions, both errors of omission (i.e., not doing things they should) and errors of commission (i.e., doing things they should not), are under increasing scrutiny.


What Is High Quality Health Care?


The goal of health care is to help people live longer and better lives. Therefore, the extent to which health care delivery accomplishes this overall goal represents the quality of that care. The IOM report, Crossing the Quality Chasm: A New Health System for the 21st Century, defines quality as "the degree to which health care systems, services, and supplies for individuals and populations increase the likelihood for desired health outcomes in a manner consistent with current professional knowledge."7 The IOM further defined six domains of the highest quality health care; health care should be:7

Safe: avoiding harm to patients from the care that is intended to help them
Effective: providing services based on scientific knowledge to all who could benefit, and refraining from providing services to those not likely to benefit (avoiding underuse and misuse, respectively)
Patient-centered: providing care that is respectful of and responsive to individual patient preferences, needs, and values, and ensuring that patient values guide all clinical decisions
Timely: reducing waits and sometimes harmful delays for both those who receive care and those who give care
Efficient: avoiding waste, including waste of equipment, supplies, ideas, and energy
Equitable: providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status

What About Cost?


There is increasing interest in assessing quality in relation to resource use. Multiple studies have shown that higher costs of care and higher resource utilization do not translate into higher quality of care.2 Therefore, many current quality assessment and improvement efforts focus on the efficiency of care (i.e., cost per outcome). A related concept is value: the measurement of patient health outcomes, including the patient experience with care, achieved per dollar spent.9 It is the ratio that is critical. Costly interventions are not necessarily of low value if they have significant benefit; conversely, cheap interventions are not necessarily of high value if they have minimal or no benefit.
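Written as a simple ratio (our notation, not the source's), value $V$ relates an outcome measure $O$ to the cost $C$ of achieving it:

\[
V = \frac{O}{C}
\]

As a purely hypothetical illustration, a \$50{,}000 therapy yielding 2 quality-adjusted life-years provides $2/50{,}000 = 4\times10^{-5}$ QALY per dollar, four times the value of a \$1{,}000 test yielding 0.01 QALY ($0.01/1{,}000 = 1\times10^{-5}$ QALY per dollar), even though it costs 50 times more.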

How Should We Assess Quality?


The Donabedian model is frequently used to conceptualize quality assessment. It focuses on three domains: structure, process, and outcomes.8

Structure refers to the resources available to provide care. This typically includes such domains as personnel, equipment, facilities, laboratory systems, training, certification, and protocols.

Process refers to the way in which care is delivered. The ideal process is to do the right thing for the right patient at the right time. Processes are the actions performed in the delivery of patient care, including the timing and technical competency of their delivery. Process-of-care measures often focus on patient selection and administration of therapies (e.g., prescription of aspirin for patients with acute myocardial infarction).

Outcomes refer to the results of care. These are measures of the end results of health care delivery. From the patient, clinician, and societal perspectives, the primary outcome concepts are reflected in just two questions. The first relates to mortality versus survival: Did the care or therapy delivered help patients live longer? The second relates to morbidity versus quality of life: Did the care or therapy improve patient health status and/or make patients feel better? For a variety of reasons, outcomes measures have focused largely on survival, which is objective and easy to obtain, or on surrogate measures (e.g., blood pressure). However, there is growing recognition of the importance of patient-centered outcomes, including patient health status or quality-of-life measurements such as angina burden (e.g., the Seattle Angina Questionnaire).

The Donabedian model proposes that each component has a direct influence on the next. In other words, the structural attributes of the system in which care occurs (i.e., resources and administration) dictate the processes of care (i.e., delivery of therapeutics), which in turn affect the outcomes (i.e., goal achievement). Importantly, the patient is at the center, with the ultimate goal of improving outcomes that are important to patients and their families. It is also important to note that patients have different demographic and clinical profiles (e.g., comorbidities, disease severity); thus, clinicians and hospitals care for different case mixes of patients. As such, valid measures of patient outcomes, especially for comparison among hospitals or other groups, must generally be risk-adjusted (i.e., case-mix adjusted).
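To make risk adjustment concrete, below is a minimal sketch of indirect standardization, one common approach: a hospital's observed event count is divided by the count expected from a patient-level risk model, and the ratio is scaled by a national rate. The toy logistic model, its coefficients, and the patients are invented for illustration and are not any actual registry or CMS model.

```python
import math

def predicted_mortality(age, shock, intercept=-4.2, b_age=0.04, b_shock=1.9):
    """Toy logistic risk model: P(death) for one patient.
    Coefficients are made up for illustration only."""
    z = intercept + b_age * age + b_shock * shock
    return 1.0 / (1.0 + math.exp(-z))

def risk_standardized_rate(patients, national_rate):
    """Indirect standardization: (observed / expected) * national average.
    `patients` is a list of (died, age, shock) tuples for one hospital."""
    observed = sum(died for died, _, _ in patients)
    expected = sum(predicted_mortality(age, shock) for _, age, shock in patients)
    return (observed / expected) * national_rate

# A sicker case mix raises `expected`, so the same raw death count
# yields a lower risk-standardized rate.
hospital = [(1, 81, True), (0, 64, False), (0, 72, False), (0, 58, False)]
print(f"{risk_standardized_rate(hospital, national_rate=0.05):.3f}")
```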

A Systems Problem, A Systems Solution


For most clinicians, day-to-day interest in quality would seem to focus on individual decisions as they relate to the delivery of cardiovascular care to individual patients. However, quality improvement cannot rest upon individual clinicians being asked to do more or do better.10 Instead, quality should be considered largely on a systems level. As such, quality improvement is achieved through a systems approach that provides a supportive environment for the delivery of health care. Continuous quality improvement is an organized, scientific process for evaluating, planning, improving, and controlling quality. The following sections describe the pieces of continuous quality improvement and how they fit together to promote optimal care delivery at both the national and local levels.

Table 1: The Toolkit of Quality Improvement

Evidence: Data on associations between actions and outcomes; derived from a hierarchy of scientific research: unsystematic clinical observation, physiological experiments, expert opinion, case series, cross-sectional studies, case-control studies, retrospective observational cohorts, prospective observational cohorts, and randomized controlled trials.

Data Standards: Agreed-upon definitions, nomenclature, and data elements; facilitate accurate communication and fair comparison.

Clinical Practice Guidelines: Detailed summary of the body of evidence-based medicine for a given disease process or clinical content area; includes specific recommendations for standards of care, graded on level (I, IIa, IIb, III) and type of evidence (A, B, C).

Process Performance Measures: Discrete processes of care that imply that clinicians are in error if they do not care for patients according to these clinical standards; must also allow for practical identification of those patients for whom a specific action should be taken (a clear denominator), easy determination of whether or not the measure has been performed (a clear numerator), and opportunities for timely feedback.

Appropriate Use Criteria: Identify common, prototypical patient subgroups for which expert clinicians assess the benefits and risks of a test or procedure on patient outcomes (score 1-9); the primary goal is to reduce overuse, thereby improving safety and efficiency.

Outcomes Measures: Measures of health that are important to patients and are thought to be affected by processes of care; generally require risk-standardization to account for case mix.

The Tools of Quality Assessment


The success of achieving ideal quality in health care delivery requires that a quality infrastructure be in place. This quality infrastructure consists of clinical evidence, standardized definitions, clinical guidelines, performance measures and other quality metrics, and AUC (Table 1). The goal of these tools is to promote the optimal use of evidence-based medicine in care delivery, thereby maximizing efficiency by promoting diagnostic and therapeutic strategies with the highest value to patients.

The Evidence

The determination of care quality is grounded in clinical evidence. Evidence-based medicine involves two fundamental principles.11 First, a hierarchy of evidence exists from which to guide clinical decision making. While individual clinical observations can generate important hypotheses, unsystematic clinical observations are limited by sample size and by deficiencies in the ability to make accurate causal inferences. Thus, only systematic approaches to data collection and analysis are generally considered as evidence to guide clinical decisions. These systematic approaches, listed in increasing order of strength of evidence for informing clinical decision making, include: physiological experiments, case series, cross-sectional studies, case-control studies, retrospective observational cohorts, prospective observational cohorts, and randomized clinical trials. Stronger study designs minimize bias and improve power, leading to improved evidence to support clinical decision making. It is important to note that this evidence hierarchy is not absolute. For example, randomized clinical trials can suffer from studying only highly selected patients, and thus may have limited generalizability. Similarly, observational studies must be cautious of unmeasured factors that can confound the interpretation of attribution, yet they may give a broader assessment of care and outcomes in routine clinical practice. Thus, a synthesis of all available evidence, such as a systematic review and/or an evidence-based clinical practice guideline, may enhance the assessment of the benefits and risks of a given therapy; a systematic review provides guidance for care decisions above and beyond any single study. The second fundamental principle is that evidence alone is never sufficient to make a clinical decision. Decision makers must always weigh the benefits, risks, inconveniences, and costs associated with alternative management strategies, and should do so within the context of the patient's goals, values, and preferences.

Figure 1: Levels of Evidence for Clinical Practice Guidelines. The figure is a matrix crossing the size of the treatment effect (columns) against the estimate of certainty, or precision, of the treatment effect (rows). Columns: Class I (benefit >>> risk; procedure/treatment should be performed/administered); Class IIa (benefit >> risk; additional studies with focused objectives needed; it is reasonable to perform the procedure/administer the treatment); Class IIb (benefit >= risk; additional studies with broad objectives needed, and additional registry data would be helpful; procedure/treatment may be considered); Class III (no benefit, or harm; the procedure/treatment is not useful/effective and may be harmful). Rows: Level A (multiple populations evaluated; data derived from multiple randomized clinical trials or meta-analyses); Level B (limited populations evaluated; data derived from a single randomized trial or nonrandomized studies); Level C (very limited populations evaluated; only consensus opinion of experts, case studies, or standard of care).

Reproduced with permission from the Evidence-Based Medicine Working Group. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 2nd ed. Chicago: American Medical Association Press.

Data Standards

Standardized sets of definitions, nomenclature, and data elements facilitate accurate communication and fair comparison. They help avoid the "Tower of Babel" syndrome, in which differing definitions of clinical status and adverse outcomes make it impossible to accurately compare clinical trials and other outcomes assessments. The American College of Cardiology (ACC), in association with the American Heart Association (AHA), has implemented Clinical Data Standards, the lexicon needed to achieve commonality and consistency of definitions in many areas of cardiovascular disease.12 Standardized definitions allow accurate comparisons between multiple relevant clinical trials as well as clinical outcomes collected through clinical registry programs.13

Clinical Practice Guidelines

The creation of clinical guidelines is intended to summarize the body of evidence-based medicine for a given disease process or clinical content area. Preferably, these guidelines are based upon multiple large, randomized controlled trials. When substantial randomized clinical trial data are lacking, smaller clinical trials, carefully performed observational analyses, or even expert consensus opinion are utilized as the weight of evidence for a particular clinical guideline. Over the past 25 years, the ACC and the AHA have published multiple cardiovascular clinical guidelines covering many relevant areas of cardiology, with continued updating as clinical advances dictate. These include acute myocardial infarction, unstable angina, chronic stable angina, coronary revascularization, heart failure, supraventricular arrhythmias, atrial fibrillation, and implantation of pacemakers and antiarrhythmia devices.14 These practice guidelines are intended to assist health care providers in clinical decision making by describing generally acceptable approaches for the diagnosis, management, or prevention of disease states.15-17 Figure 1 provides a framework for evaluating various procedures and treatments. The framework includes both levels of evidence and types of evidence.16

Levels of Evidence: Recommendations are given one of the following indication classifications based on the evaluation of evidence by a panel of guidelines experts:
Class I: the procedure or treatment should be performed or administered; the benefit-to-risk ratio is favorable.
Class IIa: it is reasonable to perform the procedure or treatment; the benefit-to-risk ratio is probably favorable.
Class IIb: the procedure or treatment may be considered; the benefit-to-risk ratio is unknown.
Class III: the procedure or treatment should not be performed; there is no benefit, or risk outweighs benefit.

Types of Evidence: The weight of evidence supporting a given recommendation is listed as A, B, or C. The highest level of evidence, A, implies data derived from multiple randomized trials, while the lowest level of evidence, C, reflects the consensus opinion of experts, case studies, or standard of care. Although we would like guidelines to be based on the highest level of evidence in the hierarchy, multiple factors (e.g., the difficulty of conducting large randomized trials) limit the extent to which the wide array of clinical decisions can be strongly recommended. Of the 16 ACC/AHA clinical practice guidelines that reported levels of evidence as of September 2008, only 11% of 2,711 recommendations were classified as Level of Evidence A, whereas 46% were Level C.17

Performance Measures

Performance, or care accountability, measures are those process, structure, efficiency, and outcome measures that have been developed using ACC/AHA methodology. This includes the process of public comment and peer review, and the specific designation as a performance measure by the ACC/AHA Task Force on Performance Measures.18,19 This may occur in collaboration with other national practice organizations and federal agencies, such as the National Quality Forum (NQF), the Centers for Medicare and Medicaid Services (CMS), or the Joint Commission on Accreditation of Health Care Organizations. Performance measures must have a number of qualities that allow them to be used both for continuous quality improvement and for accountability and reimbursement, including: 1) face validity in routine practice; 2) practical identification of those patients for whom a specific action should be taken (a clear denominator); 3) easy determination of whether or not the measure has been performed (a clear numerator); 4) adherence to the measure resulting in meaningful improvements in clinically meaningful outcomes; and 5) opportunities for timely feedback to clinicians and institutions to promote continuous quality improvement.19

Process Performance Measures

Process performance measures are distilled from clinical guideline therapeutic recommendations, generally capturing Class I or Class III, Level of Evidence A recommendations for which the evidence is particularly strong. They describe discrete processes of care that are explicit diagnostic or therapeutic actions to be performed or not performed (e.g., the provision of aspirin for acute myocardial infarction). The implication is that clinicians are in error if they do not follow these care processes or do not document specific reasons for disregarding these recommendations.

Outcome Performance Measures

Outcomes measures are increasingly used as performance measures. Adding outcomes measures to process measures has important benefits. For example, process measures, even when reported together, capture a small fraction of the care delivered; in contrast, outcomes measures, such as mortality or health-related quality of life, should integrate the totality of care that a patient receives.20 The government website Hospital Compare reports 30-day risk-standardized mortality and rehospitalization rates for fee-for-service Medicare beneficiaries after hospitalization for heart failure, acute myocardial infarction, or pneumonia.21 These statistics are used for reimbursement purposes. Critiques of outcomes measures include, for example, that the methods for risk-standardization are not sufficiently fair to account for important differences in case mix. Also, outcomes measures do not tell clinicians and institutions specifically what they are doing correctly or incorrectly. Therefore, risk-standardized outcomes measures should be combined with detailed measures of structure and process performance, thereby providing clinicians and institutions with audit and feedback on their overall performance alongside data highlighting those areas in particular need of quality improvement activities.

Composite Measures

Composite measures have been constructed and deployed to address the proliferation of performance measures and the need to ensure that these measures comprehensively represent health care quality.22 Composite measures utilize data reduction in order to simplify presentation and interpretation. They also promote scope expansion to better integrate multiple metrics into a more comprehensive assessment of provider performance. However, these advantages come at a cost. Standard psychometric properties of composites can be more complex to determine, methods for scoring (e.g., all-or-none vs. any vs. weighting) can lead to different conclusions, and problems with missing data can be amplified.


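The numerator/denominator logic of process measures, and how the choice of composite scoring method changes the answer, can be made concrete with a small sketch. The patient records and measure names below are hypothetical, and the opportunity-based score shown is just one of the scoring schemes the text mentions:

```python
patients = [
    # eligible = meets denominator criteria (e.g., AMI, no contraindication)
    {"eligible": True,  "measures_met": {"aspirin": True,  "beta_blocker": True}},
    {"eligible": True,  "measures_met": {"aspirin": True,  "beta_blocker": False}},
    {"eligible": False, "measures_met": {}},   # excluded from the denominator
    {"eligible": True,  "measures_met": {"aspirin": False, "beta_blocker": True}},
]

def process_measure_rate(patients, measure):
    """Numerator: eligible patients who received the action.
    Denominator: all eligible patients."""
    denom = [p for p in patients if p["eligible"]]
    num = [p for p in denom if p["measures_met"].get(measure)]
    return len(num) / len(denom)

def all_or_none_score(patients):
    """Composite: a patient passes only if every applicable measure was met."""
    denom = [p for p in patients if p["eligible"]]
    passed = [p for p in denom if all(p["measures_met"].values())]
    return len(passed) / len(denom)

def opportunity_score(patients):
    """Composite: met opportunities / total opportunities, pooled across patients."""
    met = total = 0
    for p in patients:
        if p["eligible"]:
            met += sum(p["measures_met"].values())
            total += len(p["measures_met"])
    return met / total

print(process_measure_rate(patients, "aspirin"))  # 2/3 ~= 0.67
print(all_or_none_score(patients))                # 1/3 ~= 0.33
print(opportunity_score(patients))                # 4/6 ~= 0.67
```

Note how the same records yield 67% under opportunity-based scoring but only 33% under all-or-none scoring, which is exactly why the scoring method can lead to different conclusions.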

Quality Metrics
Quality metrics are those measures that have been developed to support self-assessment and quality improvement at the provider, hospital, and/or health care system level.18 These metrics are often a major focus of clinical registry and quality improvement programs.13 These metrics may not have been formally developed using the ACC/AHA performance measure methodology. However, they may be identified as preliminary, candidate, test, evolving, or quality measures, which indicates that they may be worthy of consideration for further development into performance measures. Quality metrics may not meet all specifications of formal performance measures used in public reporting and accountability, but can still represent valuable tools to aid clinicians and hospitals in improving quality of care and enhancing patient outcomes.
Figure 2: The Cycle of Quality. The figure depicts a cycle running from discovery science through early translational steps, clinical trials, outcomes, clinical practice guidelines, and performance measures, with measurement and education at the clinician level completing the loop. Twelve numbered enablers surround the cycle: 1) FDA Critical Path; 2) NIH Roadmap; 3) data standards; 4) network information; 5) priorities and processes; 6) empirical ethics; 7) inclusiveness; 8) use for feedback on priorities; 9) conflict-of-interest management; 10) evaluation of speed and fluency; 11) pay-for-performance; 12) transparency to consumers. A cycle of specific efforts is needed to create systematic approaches to translating knowledge across the continuum from discovery science to public health intervention. The cycle begins with the discovery of fundamental biological, physical, and social constructs. Once a discovery is made, it undergoes a development cycle, including extensive preclinical applied research, before it can be developed as a treatment with plausible human benefit. Evidence is then gathered in human experiments, and assessments are made about the intervention's value; these evaluations continue after the treatment is clinically available. What is learned through the cycle is often fed back to refine the science of discovery. At the clinician's level, measurement and education are central to completing the cycle.23

Reproduced with permission from Califf RM, Harrington RA, Madre LK, Peterson ED, Roth D, Schulman KA. Curbing the cardiovascular disease epidemic: aligning industry, government, payers, and academics. Health Aff (Millwood) 2007;26:62-74.
Appropriate Use Criteria


AUC are intended to be a supplement to clinical practice guidelines and performance measures, and differ from them in important ways. AUC identify common, prototypical patient subgroups for which expert clinicians, using available evidence from the medical literature and clinical practice, assess the benefits and risks of a test or procedure on patient outcomes. AUC are scored as follows: a score of 7-9 means appropriate, 4-6 means uncertain, and 1-3 means inappropriate. Ideally, AUC define what to do, when to do it, and how often to do a certain modality or procedure, with consideration for local care environments and patient goals, preferences, and values. AUC should ideally be simple, reliable, valid, and transparent. AUC offer a framework from which to examine the rationale of diagnostic and therapeutic actions to support a more efficient use of medical resources. The primary goals of AUC are to identify overuse and, in so doing, improve the safety and cost-effectiveness of care.

The ACC, in partnership with relevant specialty and subspecialty societies, has been developing an increasing portfolio of AUC in a variety of diagnostic modalities (e.g., cardiac computed tomography, cardiac magnetic resonance imaging, cardiac radionuclide imaging, transthoracic and transesophageal echocardiography, stress echocardiography) as well as procedural modalities (e.g., coronary revascularization). Ideally, such AUC would arise from high-quality research evaluating the benefits and risks of performing imaging studies for various common clinical scenarios. Additionally, a complete evaluation of appropriateness might also include a comparison of the relative marginal cost and benefits of each imaging modality. Regrettably, there is currently insufficient evidence to make such evaluations across a broad spectrum of potential clinical indications for diagnostic and procedural decisions.

Quality Improvement

The tools of quality assessment fit into a comprehensive cycle of activities that work to define, measure, and ultimately promote quality health care (Figure 2).24 Discoveries from basic science are translated into clinical diagnostics and therapies. These are then tested in clinical trials to determine efficacy and safety. This evidence is then synthesized into clinical practice guidelines, which are made available for consumption. A select group of these guidelines is condensed into performance measures, which are used for benchmarking, public reporting, and pay-for-performance. Outcomes measures provide an assessment of how well this process is achieving its ultimate goals. Any of this information can be fed back into the cycle to guide and refocus quality efforts at all steps.


Figure 3: Example of a Quality Metrics Report From the NCDR CathPCI Executive Summary (Percutaneous Coronary Intervention Quality Measures)

Proportion of STEMI patients with door-to-balloon time (DBT) <= 90 minutes. My hospital: 65% (rank: 87 of 389; rank percentile: 78). The goal is a DBT <= 90 minutes for all non-transferred STEMI patients undergoing primary PCI. Benchmark scale, lagging (worse) to leading (better): 23.9, 36.4, 50.0, 63.0, 76.9.

Risk-adjusted mortality. My hospital: 1.02% (rank: 118 of 366; rank percentile: 68). The hospital's PCI mortality rate adjusted using the ACC-NCDR risk adjustment model. Benchmark scale, lagging to leading: 1.71, 1.25, 0.94, 0.73.

Incidence of vascular complications. My hospital: 2.7% (rank: 286 of 401; rank percentile: 68). Includes procedures with at least one vascular complication. Benchmark scale, lagging to leading: 4.3, 3.0, 1.9, 1.1, 0.5.

Reproduced with permission from Rumsfeld JS, Dehmer GJ, Brindis RG. The National Cardiovascular Data Registry: Its Role in Benchmarking and Improving Quality. US Cardiology 2009;6:11-5.
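The report above locates a hospital within the distribution of its peers using a rank percentile. As a minimal, hedged sketch (not the NCDR's actual methodology; the peer values are invented), a rank percentile can be computed as the share of peer hospitals performing no better:

```python
def rank_percentile(my_rate, peer_rates, higher_is_better=True):
    """Share of peer hospitals performing no better than `my_rate`."""
    if higher_is_better:
        worse_or_equal = sum(r <= my_rate for r in peer_rates)
    else:
        worse_or_equal = sum(r >= my_rate for r in peer_rates)
    return 100 * worse_or_equal / len(peer_rates)

# Hypothetical peer distribution for a D2B-style measure (higher is better).
peers = [0.24, 0.36, 0.41, 0.50, 0.55, 0.63, 0.70, 0.77, 0.81, 0.88]
print(rank_percentile(0.65, peers))  # 60.0
```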


National Quality Improvement Registry Programs

The foundation of any quality improvement effort is measurement. Without systematic assessment and evaluation, it is difficult to know the quality of various care decisions. Participation in national clinical registries and quality improvement programs, such as the National Cardiovascular Data Registry (NCDR), offers a method for accurately assessing clinical outcomes and provides feedback on how individual hospital and clinician practices compare with their peers. This is done by benchmarking performance against aggregate national or similar-hospital outcomes following adjustment for case mix (Figure 3).13,25 Participation in these feedback systems is known to be a critical element in quality improvement. The feedback of process and outcomes data to clinicians pinpoints opportunities to improve clinical performance and quality. These registries can also be used, for example, to help state regulatory agencies oversee the quality of demonstration projects, such as percutaneous coronary intervention (PCI) without on-site surgical backup. They also offer opportunities for post-market device surveillance, particularly for low-frequency adverse events.

National Quality Initiatives

National quality initiatives can also be effective. One illustrative example is the Door-to-Balloon Alliance. PCI for acute myocardial infarction is grounded in the principle of rapid reperfusion. There is strong evidence that a shorter time from patient presentation (i.e., emergency room door) to coronary artery opening via angioplasty (i.e., balloon inflation in the catheterization laboratory) is associated with better patient outcomes, particularly when these door-to-balloon (D2B) times are less than 90 minutes. However, despite D2B quality measures being in place for years, as of 2006 only 40% of hospitals were able to consistently perform primary PCI in less than 90 minutes. A team of cardiovascular outcomes researchers evaluated hospitals with best practices and identified the key processes of care associated with shorter D2B times. Six of these strategies became the core strategies of the D2B Alliance: having emergency medicine physicians activate the catheterization laboratory; having a single call to a central page operator activate the catheterization laboratory; having the emergency department activate the catheterization laboratory while the patient is en route to the hospital; expecting staff to arrive in the catheterization laboratory within 20 minutes after being paged; having an attending cardiologist always on site; and having staff in the emergency department and the catheterization laboratory use real-time data feedback.26 The ACC thereby supported the national D2B Alliance to promote participation by hospitals, physician champions, and strategic partners committed to addressing the D2B challenge. Participating hospitals committed to implementing as many of the six strategies as possible. The goal of the D2B Alliance was to achieve D2B times of <90 minutes for at least 75% of non-transfer primary PCI patients with ST-segment elevation myocardial infarction in all participating hospitals performing primary PCI. This national initiative, guiding and supporting local implementation at more than 1,100 hospitals, led to a significant increase in adoption of the strategies, and there was significant improvement in D2B times nationally, reaching the goal of the initiative.27 National quality initiatives such as the D2B Alliance can serve as an important way of disseminating best practices and, thus, bolstering quality of care. Not surprisingly, these initiatives are often most successful when tied to national clinical registry programs so that measurement and feedback of performance can be included. They are also successful when they support the specific local quality improvement activities of hospitals and practices.
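As a minimal sketch of how the D2B measure itself might be computed from door and balloon timestamps (the data are hypothetical, and real registry definitions include detailed inclusion and exclusion rules):

```python
from datetime import datetime

# Hypothetical (door, balloon) timestamps for non-transfer primary PCI patients.
cases = [
    ("2011-03-01 08:14", "2011-03-01 09:29"),  # 75 min
    ("2011-03-02 21:40", "2011-03-02 23:35"),  # 115 min
    ("2011-03-04 02:05", "2011-03-04 03:20"),  # 75 min
    ("2011-03-05 11:50", "2011-03-05 13:05"),  # 75 min
]

fmt = "%Y-%m-%d %H:%M"
d2b_minutes = [
    (datetime.strptime(balloon, fmt) - datetime.strptime(door, fmt)).total_seconds() / 60
    for door, balloon in cases
]

share_under_90 = sum(m <= 90 for m in d2b_minutes) / len(d2b_minutes)
print(f"D2B <= 90 min: {share_under_90:.0%}")              # 75%
print("Meets D2B Alliance goal:", share_under_90 >= 0.75)  # True
```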

Figure 4: The Central Role of Data and Benchmarking in Quality Improvement. Data and benchmarking sit at the center, linked to system changes (EMR, standing orders, critical pathways, integrated care), clinical leaders, and administrative support. EMR = electronic medical record.

Adapted with permission from Rumsfeld JS, Dehmer GJ, Brindis RG. The National Cardiovascular Data Registry: Its Role in Benchmarking and Improving Quality. US Cardiology 2009;6:11-5.

Local Activities

Local quality improvement activities apply the principles outlined above, including collection of relevant data, feedback regarding processes and outcomes of care, and implementation of appropriate interventions to improve performance and efficiency. Lessons learned about successful quality improvement from the study of high- and low-quality hospitals emphasize the following key points: data must be perceived as valid; risk-adjustment and benchmarking improve the meaningfulness of data feedback; data feedback must persist in an iterative form to sustain improved performance; and clinician champions and administrative support are critical to successful performance improvement (Figure 4).10,25 Alignment of local activities with national quality assessment and improvement programs (such as national clinical registry programs) is also important, often providing a solid infrastructure for quality measurement and improvement.


Local activities for quality improvement should be iterative and involve breaking down quality efforts into small pieces. Multiple small quality cycles should be occurring in various domains of local health care delivery, involving multidisciplinary members of the team. The quality initiatives should be supported by administration and aligned with external entities (i.e., regulatory agencies and national quality improvement initiatives). For example, a hospital with a high risk-adjusted mortality rate among its patients with acute myocardial infarction cannot settle on a single course of action to fix this problem. Instead, an integrated approach is necessary, involving multiple smaller initiatives across the continuum of care. This could include community education, emergency medical services, the emergency department, the interventional catheterization laboratory, in-hospital care, transitional services, and ambulatory follow-up. Measurement within each level should target areas for improvement.

At the individual and local level, the Institute for Healthcare Improvement (IHI) promotes the Model for Improvement, developed by Associates in Process Improvement.28 The model organizes quality improvement into actionable parts:

1. Set Aims: These should be small goals targeted to a defined group of patients. They should be time-specific and measurable.
2. Establish Measures: Pick a quantitative measure that can determine whether a specific change leads to an improvement in quality.
3. Select Changes: All improvement requires making changes, but not all changes result in improvement. Clinicians and organizations must identify the changes they believe are most likely to result in improvement.
4. Test Changes: Once the first three fundamental questions have been answered, the Plan-Do-Study-Act (PDSA) cycle should be used to test and implement changes in real work settings. The PDSA cycle uses action-oriented learning to determine whether the change is an improvement: plan it, try it, observe the results, and act on what is learned (Figure 5; a quantitative sketch of the Study step follows at the end of this section).

Figure 5: The Plan-Do-Study-Act (PDSA) Model for Improvement. Plan: define the objective, questions, and predictions; plan how to answer the questions (who, what, where, when); plan data collection to answer the questions. Do: carry out the plan; collect the data; begin analysis of the data. Study: complete the analysis of the data; compare the data to predictions; summarize what was learned. Act: decide whether the change can be implemented; plan the next cycle.

Reproduced with permission from Langley GJ, Nolan KM, Norman CL, Provost LP, Nolan TW. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. New York: Jossey-Bass; 1996.

Including the right people on a process improvement team is critical to a successful improvement effort. Teams vary in size and composition, but typically involve multidisciplinary representation.
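To make the Study step concrete, the sketch below compares a process measure rate before and after a change using a two-proportion z-test. The counts are hypothetical, and this is only one reasonable choice; run charts and statistical process control are common alternatives in quality improvement work.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test for a before/after comparison of a measure rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical data: D2B <= 90 min in the quarter before vs. after a change.
before_met, before_n = 48, 80
after_met, after_n = 66, 82
p_a, p_b, z, p = two_proportion_z(before_met, before_n, after_met, after_n)
print(f"before {p_a:.0%} -> after {p_b:.0%}, z = {z:.2f}, p = {p:.3f}")
```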

Conclusion

For clinicians who want to deliver the best possible care, be graded and reimbursed appropriately, and maintain certification and licensure, understanding quality and being engaged in quality measurement and improvement must become central to clinical practice. Feedback of process and outcomes data is a critical element leading to quality improvement: if you do not measure it, you will not improve it.

Key Points
Health care quality is highly relevant to patients, clinicians, and society.
The highest quality health care is effective, safe, timely, efficient, equitable, and patient-centered.
The major domains of quality assessment are structure, process, and outcomes (Donabedian's triad).
Key tools for defining and measuring quality of care include evidence, data standards, clinical practice guidelines, quality metrics, performance measures, and appropriate use criteria.
Quality improvement requires accurate data collection; risk-adjustment and benchmarking to make performance measurement meaningful; persistent, iterative cycles of quality improvement; clinician champions; and a supportive organizational context.
National clinical registry programs such as the NCDR utilize data standards, standardized tools for data collection, risk-adjustment, and benchmarking.

References
1. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med 2003;138:273-87.
2. Orszag PR. The Overuse, Underuse, and Misuse of Health Care: Testimony before the Committee on Finance, United States Senate, July 17, 2008. Washington, DC: Congressional Budget Office; 2008.
3. The Dartmouth Institute for Health Policy and Clinical Practice. The Dartmouth Atlas of Health Care. 2011. Available at: http://www.dartmouthatlas.org. Accessed 11/30/2011.
4. Sutherland JM, Fisher ES, Skinner JS. Getting past denial: the high cost of health care in the United States. N Engl J Med 2009;361:1227-30.
5. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
6. Leape LL. Reporting of adverse events. N Engl J Med 2002;347:1633-8.
7. Institute of Medicine Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.
8. Donabedian A. Explorations in Quality Assessment and Monitoring, Volume 1: The Definition of Quality and Approaches to Its Assessment. Ann Arbor, MI: Health Administration Press; 1980.
9. Porter ME. What is value in health care? N Engl J Med 2010;363:2477-81.
10. Majumdar SR, McAlister FA, Furberg CD. From knowledge to practice in chronic cardiovascular disease: a long and winding road. J Am Coll Cardiol 2004;43:1738-42.
11. Guyatt G, Rennie D, Meade MO, Cook DJ. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 2nd ed. New York: McGraw-Hill Professional; 2008.
12. Cannon CP, Battler A, Brindis RG, et al. American College of Cardiology key data elements and definitions for measuring the clinical management and outcomes of patients with acute coronary syndromes: a report of the American College of Cardiology Task Force on Clinical Data Standards (Acute Coronary Syndromes Writing Committee). J Am Coll Cardiol 2001;38:2114-30.
13. Bufalino VJ, Masoudi FA, Stranne SK, et al. The American Heart Association's recommendations for expanding the applications of existing and future clinical registries: a policy statement from the American Heart Association. Circulation 2011;123:2167-79.
14. American College of Cardiology. CardioSource: Guidelines and Quality Standards. 2011. Available at: http://www.cardiosource.org/science-and-quality/practice-guidelines-and-quality-standards.aspx. Accessed 11/30/2011.
15. Gibbons RJ, Smith S, Antman E. American College of Cardiology/American Heart Association clinical practice guidelines: Part I: where do they come from? Circulation 2003;107:2979-86.
16. Gibbons RJ, Smith SC Jr, Antman E. American College of Cardiology/American Heart Association clinical practice guidelines: Part II: evolutionary changes in a continuous quality improvement project. Circulation 2003;107:3101-7.
17. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009;301:831-41.
18. Bonow RO, Masoudi FA, Rumsfeld JS, et al. ACC/AHA classification of care metrics: performance measures and quality metrics: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures. Circulation 2008;118:2662-6.
19. Spertus JA, Eagle KA, Krumholz HM, Mitchell KR, Normand SL. American College of Cardiology and American Heart Association methodology for the selection and creation of performance measures for quantifying the quality of cardiovascular care. Circulation 2005;111:1703-12.
20. Krumholz HM, Normand SL, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff (Millwood) 2007;26:75-85.
21. US Department of Health and Human Services. Hospital Compare. 2011. Available at: www.hospitalcompare.hhs.gov. Accessed 11/30/2011.
22. Peterson ED, DeLong ER, Masoudi FA, et al. ACCF/AHA 2010 position statement on composite measures for healthcare performance assessment: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Performance Measures (Writing Committee to Develop a Position Statement on Composite Measures). J Am Coll Cardiol 2010;55:1755-66.
23. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC 2009 appropriateness criteria for coronary revascularization: a report by the American College of Cardiology Foundation Appropriateness Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, and the American Society of Nuclear Cardiology; endorsed by the American Society of Echocardiography, the Heart Failure Society of America, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2009;53:530-53.
24. Califf RM, Harrington RA, Madre LK, Peterson ED, Roth D, Schulman KA. Curbing the cardiovascular disease epidemic: aligning industry, government, payers, and academics. Health Aff (Millwood) 2007;26:62-74.
25. Brindis RG, Dehmer GJ, Rumsfeld JS. The National Cardiovascular Data Registry: its role in benchmarking and improving quality. US Cardiology 2009;6:11-5.
26. Bradley EH, Herrin J, Wang Y, et al. Strategies for reducing the door-to-balloon time in acute myocardial infarction. N Engl J Med 2006;355:2308-20.
27. Bradley EH, Nallamothu BK, Herrin J, et al. National efforts to improve door-to-balloon time: results from the Door-to-Balloon Alliance. J Am Coll Cardiol 2009;54:2423-9.
28. Langley GJ, Nolan KM, Norman CL, Provost LP, Nolan TW. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. New York: Jossey-Bass; 1996.

