Introduction
Clinicians may perceive the topic of measuring and improving quality of care as one that is largely under the purview of administrators and health policy makers. Instead, clinicians should be highly motivated to understand how to measure and improve quality of care for a variety of reasons, including:

Quality of care is evidence-based medicine. Quality measurement and improvement initiatives help clinicians stay current with the best available evidence for therapeutics and care delivery, thereby supporting the practice of evidence-based medicine.

Quality of care is central to lifelong learning, certification, and licensure. The maintenance of certification, as well as state licensure activities, is increasingly centered on quality measurement and improvement, which may include the demonstration of practice improvement.

Quality of care is at the center of health care reform. Consumer groups, hospitals, health care systems, payers, states, the federal government, and other stakeholders are heavily focused on quality of care. There is a particular focus on unexplained variation in care delivery, which is viewed as a marker of variation in quality, as a major contributor to health care expenditures, and as a target for health care reform.

Quality of care is increasingly about accountability. Public reporting and performance-based reimbursement are increasingly based on quality measures for both processes and outcomes of care.

Quality of care drives health care system improvement. Quality measurement and improvement initiatives inform changes to the health care system at the practice, hospital, and national levels, serving to optimize health care delivery and thus provide the best care possible for patients.
One of the most compelling arguments implicating the inefficiency of health care in the United States derives from the marked geographic variation in per capita health care spending, without
1.4: Quality of Care: Measurement and Improvement 1.4.1
obvious correlation to measures of health care quality or patient outcomes. The current substantial growth in the performance of cardiovascular testing and procedures has been characterized by increasing regional differences, as documented among Medicare beneficiaries in the Dartmouth Atlas of Cardiovascular Health Care.3 Yet, those regional differences in use do not appear to translate to significant differences in the performance of well-accepted standards of care or the health of those communities.4 Furthermore, the Institute of Medicine (IOM) and others have issued several reports documenting the extent of medical errors and their consequences.5,6 Clinicians must recognize that their actions, both in terms of errors of omission (i.e., not doing things they should) and errors of commission (i.e., doing things they should not), are under increasing scrutiny.
Process refers to the way in which care is delivered. The ideal process is to do the right thing for the right patient at the right time. Processes refer to the actions performed in the delivery of patient care, including the timing and technical competency of their delivery. Process of care measures often focus on patient selection and administration of therapies (e.g., prescription of aspirin for patients with acute myocardial infarction).

Outcomes refer to the results of care. These are measures of the end results of health care delivery. From the patient, clinician, and societal perspectives, the primary outcome concepts are reflected in just two questions. The first question is related to mortality versus survival: Did the care/therapy delivered help patients live longer? The second question is related to morbidity versus quality of life: Did the care/therapy improve patient health status and/or make patients feel better? For a variety of reasons, outcome measures have focused largely on survival, which is objective and easy to obtain, or on surrogate measures (e.g., blood pressure). However, there is a growing recognition of the importance of patient-centered outcomes, including patient health status or quality-of-life measurements such as angina burden (e.g., Seattle Angina Questionnaire).

The Donabedian model proposes that each component has a direct influence on the next. In other words, the structural attributes of the system in which care occurs (i.e., resources and administration) dictate processes of care (i.e., delivery of therapeutics), which in turn affect the outcomes (i.e., goal achievement). Importantly, the patient is at the center, with the ultimate goal of improving outcomes that are important to patients and their families. It is also important to note that patients have different demographic and clinical profiles (e.g., comorbidities, disease severity). Thus, clinicians and hospitals care for different case-mixes of patients.
As such, it is generally true that valid measures of patient outcomes, especially for comparison among hospitals or other groups, must be risk-adjusted (i.e., case-mix adjusted).
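To make the idea of risk adjustment concrete, the following is a minimal sketch of an observed-to-expected comparison in Python. The patient risks, hospital counts, and the simple O/E scaling are illustrative assumptions only, not the actual risk-standardization methodology of any registry or payer.

```python
# Sketch of risk-standardized (case-mix adjusted) outcome comparison.
# All numbers and the simple observed/expected approach are illustrative
# assumptions, not an actual NCDR or CMS methodology.

def risk_standardized_rate(observed_deaths, predicted_risks, national_rate):
    """Observed/expected ratio scaled to the national rate."""
    expected_deaths = sum(predicted_risks)  # expected deaths from a risk model
    return (observed_deaths / expected_deaths) * national_rate

# Hospital A treats lower-risk patients; hospital B treats sicker patients.
hospital_a = risk_standardized_rate(
    observed_deaths=12,
    predicted_risks=[0.10] * 100,   # model expects 10 deaths in 100 patients
    national_rate=0.05,
)
hospital_b = risk_standardized_rate(
    observed_deaths=12,
    predicted_risks=[0.20] * 100,   # model expects 20 deaths in 100 patients
    national_rate=0.05,
)

# Same crude mortality (12%), but after case-mix adjustment hospital B
# performs better than expected while hospital A performs worse.
print(f"Hospital A risk-standardized rate: {hospital_a:.3f}")  # 0.060
print(f"Hospital B risk-standardized rate: {hospital_b:.3f}")  # 0.030
```

The point of the sketch is that identical crude rates can hide opposite quality signals once expected risk is taken into account.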
Process Performance Measures: Discrete processes of care that imply that clinicians are in error if they do not care for patients according to these clinical standards; must also allow for practical identification of those patients for whom a specific action should be taken (a clear denominator), easy determination of whether or not the measure has been performed (a clear numerator), and opportunities for timely feedback.

Appropriate Use Criteria: Identify common, prototypical patient subgroups for which expert clinicians assess the benefits and risks of a test or procedure on patient outcomes (score 1-9); the primary goal is to reduce overuse, thereby improving safety and efficiency.

Outcomes Measures: Measures of health that are important to patients and are thought to be affected by processes of care; generally require risk-standardization to account for case mix.
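The clear-denominator/clear-numerator logic described above can be sketched as follows. The patient records, field names, and exclusion rule are hypothetical; real measure specifications are far more detailed.

```python
# Illustrative sketch of a process performance measure: aspirin prescribed
# for acute myocardial infarction (AMI). Records and exclusion logic are
# hypothetical examples, not an actual measure specification.

patients = [
    {"id": 1, "ami": True,  "aspirin": True,  "contraindication": False},
    {"id": 2, "ami": True,  "aspirin": False, "contraindication": True},   # documented exclusion
    {"id": 3, "ami": True,  "aspirin": False, "contraindication": False},  # care gap
    {"id": 4, "ami": False, "aspirin": False, "contraindication": False},  # not eligible
]

# Denominator: eligible patients without a documented reason for exclusion.
denominator = [p for p in patients if p["ami"] and not p["contraindication"]]

# Numerator: eligible patients who actually received the indicated therapy.
numerator = [p for p in denominator if p["aspirin"]]

adherence = len(numerator) / len(denominator)
print(f"Adherence: {adherence:.0%}")  # 50% -- patient 3 is a targeted care gap
```

Note how the documented contraindication removes patient 2 from the denominator entirely, so only patient 3 counts against the hospital: this is what makes the measure actionable for timely feedback.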
Health care is delivered largely on a system level. As such, quality improvement is achieved through a systems approach that provides a supportive environment for the delivery of health care. Continuous quality improvement is an organized, scientific process for evaluating, planning, improving, and controlling quality. The following sections describe the pieces of continuous quality improvement and how they fit together to promote optimal care delivery at both the national and local levels.
The Evidence

The determination of care quality is grounded in clinical evidence. Evidence-based medicine involves two fundamental principles.11 First, a hierarchy of evidence exists from which to guide clinical decision making. While individual clinical observations can generate important hypotheses, unsystematic clinical observations are limited by sample size and deficiencies in the ability to make accurate causal inferences. Thus, only systematic approaches to data collection and analysis are generally considered evidence to guide clinical decisions. These systematic approaches, listed in increasing order of strength of evidence for informing clinical decision making, include: physiological experiments, case series, cross-sectional studies, case-control studies, retrospective observational cohorts, prospective observational cohorts, and randomized clinical trials. Stronger study designs minimize bias and improve power, leading to improved evidence to support clinical decision making. It is important to note that this evidence hierarchy is not absolute. For example, randomized clinical trials can suffer from studying only highly selected patients, and thus may have limited generalizability. Similarly, observational studies must be cautious of unmeasured factors that can confound the interpretation of attribution, yet may give a broader assessment of care and outcomes in routine clinical practice. Thus, a synthesis of all
[Figure 1: ACC/AHA Classification of Recommendations and Levels of Evidence. Classes of recommendation range from Class I (benefit >>> risk; the procedure/treatment should be performed), to Class IIa (benefit >> risk; additional studies with focused objectives needed; it is reasonable to perform the procedure/administer the treatment), to Class IIb (benefit >= risk; additional studies with broad objectives needed, and additional registry data would be helpful; the procedure/treatment may be considered), to Class III (no benefit, or risk outweighs benefit; the procedure/treatment is not useful/effective and may be harmful). Levels of evidence: Level A, multiple populations evaluated, with data derived from multiple randomized clinical trials or meta-analyses; Level B, limited populations evaluated, with data derived from a single randomized trial or nonrandomized studies; Level C, very limited populations evaluated, with only consensus opinion of experts, case studies, or standard of care.]
available evidence, such as in a systematic review and/or evidence-based clinical practice guideline, may enhance the assessment of benefits and risks of a given therapy. A systematic review provides guidance for care decisions above and beyond any single study. The second fundamental principle is that evidence alone is never sufficient to make a clinical decision. Decision makers must always integrate and weigh the benefits, risks, inconveniences, and costs associated with alternative management strategies. This should be done within the context of patients' goals, values, and preferences.
Data Standards

Standardized sets of definitions, nomenclature, and data elements facilitate accurate communication and fair comparison. They help avoid the "Tower of Babel" syndrome, in which clinical trials and other outcomes assessments cannot be accurately compared because of differing definitions of clinical status and adverse outcomes. The American College of Cardiology (ACC), in association with the American Heart Association (AHA), has implemented Clinical Data Standards, the lexicon needed to achieve commonality and consistency in definitions in many areas of cardiovascular disease.12 Standardized definitions allow accurate comparisons between multiple relevant clinical trials as well as clinical outcomes collected through clinical registry programs.13
Clinical Practice Guidelines

The ACC and AHA have developed clinical practice guidelines for many cardiovascular conditions, including acute myocardial infarction, unstable angina, chronic stable angina, coronary revascularization, heart failure, supraventricular arrhythmias, atrial fibrillation, and implantation of pacemakers and antiarrhythmia devices.14 These practice guidelines are intended to assist health care providers in clinical decision making by describing generally acceptable approaches for the diagnosis, management, or prevention of disease states.15-17 Figure 1 provides a framework for evaluating various procedures and treatments; the framework includes both classes of recommendation and levels of evidence.16

Classes of Recommendation: Recommendations are given one of the following classifications based on the evaluation of evidence by a panel of guidelines experts. Class I: the procedure or treatment should be performed or administered; the benefit-to-risk ratio is favorable. Class IIa: it is reasonable to perform the procedure or treatment; the benefit-to-risk ratio is probably favorable. Class IIb: the procedure or treatment may be considered; the benefit-to-risk ratio is unknown. Class III: the procedure or treatment should not be performed; there is no benefit, or risk outweighs benefit.
Levels of Evidence: The weight of evidence to support a given recommendation is designated A, B, or C. The highest level of evidence, A, implies data derived from multiple randomized trials, while the lowest level, C, reflects the consensus opinion of experts, case studies, or standard of care. Although we would like guidelines to be based on the highest level of evidence in the hierarchy, multiple factors (e.g., the difficulty of conducting large randomized trials) limit the extent to which the wide array of clinical decisions can be strongly recommended. Of the 16 ACC/AHA clinical practice guidelines that reported levels of evidence as of September 2008, only 11% of 2,711 recommendations were classified as Level of Evidence A, whereas 46% were Level C.17

Performance Measures

Performance, or care accountability, measures are those process, structure, efficiency, and outcome measures that have been developed using ACC/AHA methodology, including the process of public comment and peer review and the specific designation as a performance measure by the ACC/AHA Task Force on Performance Measures.18,19 This may occur in collaboration with other national practice organizations and federal agencies, such as the National Quality Forum (NQF), the Centers for Medicare and Medicaid Services (CMS), or the Joint Commission on Accreditation of Health Care Organizations. Performance measures must have a number of qualities that allow them to be used for both continuous quality improvement and accountability and reimbursement, including: 1) face validity in routine practice; 2) practical identification of those patients for whom a specific action should be taken (a clear denominator); 3) easy determination of whether or not the measure has been performed (a clear numerator); 4) adherence to the measure results in meaningful improvements in clinically meaningful outcomes; and 5) opportunities for timely feedback to clinicians and institutions to promote continuous quality improvement.19

Process Performance Measures

Process performance measures are distilled from clinical guideline therapeutic recommendations, generally capturing those Class I or Class III, Level of Evidence A recommendations for which the evidence is particularly strong. They describe discrete processes of care that are explicit diagnostic or therapeutic actions to be performed or not performed (e.g., the provision of aspirin for acute myocardial infarction). The implication is that clinicians are in error if they do not follow these care processes or do not document specific reasons for disregarding these recommendations.

Outcome Performance Measures

Outcomes are increasingly being used as performance measures. Adding outcome measures to process measures has important benefits. For example, process measures, even when reported together, capture a small fraction of the care delivered; in contrast, outcome measures, such as mortality or health-related quality of life, should integrate the totality of care that a patient receives.20 The government website Hospital Compare reports 30-day risk-standardized mortality and rehospitalization rates for fee-for-service Medicare beneficiaries after hospitalization for heart failure, acute myocardial infarction, or pneumonia.21 These statistics are used for reimbursement purposes. Critiques of outcome measures include, for example, that the methods for risk-standardization may not sufficiently account for important differences in case mix. Also, outcome measures do not tell clinicians and institutions specifically what they are doing correctly or incorrectly. Therefore, risk-standardized outcome measures should be combined with detailed measures of structure and process performance, thereby providing clinicians and institutions with audit and feedback on their overall performance alongside data highlighting those areas in particular need of quality improvement activities.

Composite Measures

Composite measures have been constructed and deployed to address the proliferation of performance measures and the need to ensure that these measures comprehensively represent health care quality.22 Composite measures use data reduction to simplify presentation and interpretation, and they expand scope to better integrate multiple metrics into a more comprehensive assessment of provider performance. However, these advantages come at a cost: the standard psychometric properties of composites can be more complex to determine, methods for scoring (e.g., all-or-none vs. any vs. weighted) can lead to different conclusions, and problems with missing data can be amplified.
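A toy example can show how the choice of composite scoring method alone can change which hospital appears better. The two hospitals, four process measures, and per-patient pass/fail patterns below are invented for illustration.

```python
# Sketch of how composite scoring choices can flip conclusions.
# Each inner list is one patient's results on 4 hypothetical process
# measures (True = measure performed). All data are invented.

hospital_a = [[True, True, True, False]] * 10                 # every patient misses one measure
hospital_b = [[True, True, True, True]] * 7 + [[False] * 4] * 3

def opportunity_score(patients):
    """Weighted/opportunity model: fraction of all measure-opportunities met."""
    met = sum(sum(p) for p in patients)
    total = sum(len(p) for p in patients)
    return met / total

def all_or_none_score(patients):
    """All-or-none model: fraction of patients receiving every indicated measure."""
    return sum(all(p) for p in patients) / len(patients)

for name, hosp in [("A", hospital_a), ("B", hospital_b)]:
    print(name, round(opportunity_score(hosp), 2), round(all_or_none_score(hosp), 2))
# Opportunity scoring:  A = 0.75, B = 0.70  (A looks better)
# All-or-none scoring:  A = 0.00, B = 0.70  (B looks better)
```

Under opportunity scoring hospital A ranks higher, but under all-or-none scoring no patient at hospital A received complete care, so the ranking reverses: exactly the scoring-method sensitivity the text describes.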
Quality Metrics
Quality metrics are those measures that have been developed to support self-assessment and quality improvement at the provider, hospital, and/or health care system level.18 These metrics are often a major focus of clinical registry and quality improvement programs.13 These metrics may not have been formally developed using the ACC/AHA performance measure methodology. However, they may be identified as preliminary, candidate, test, evolving, or quality measures, which indicates that they may be worthy of consideration for further development into performance measures. Quality metrics may not meet all specifications of formal performance measures used in public reporting and accountability, but can still represent valuable tools to aid clinicians and hospitals in improving quality of care and enhancing patient outcomes.
[Figure 2: A proposed cycle of quality for cardiovascular care, linking discovery science, clinical trials, outcomes, and performance measures, with supporting elements including the NIH Roadmap, the FDA Critical Path, network information, priorities and processes, empirical ethics, inclusiveness, conflict-of-interest management, evaluation of speed and fluency, pay-for-performance, transparency to consumers, and use of results for feedback on priorities.]
[Figure 3: Example of a Quality Metrics Report From the NCDR CathPCI Executive Summary. The report shows percutaneous coronary intervention quality measures benchmarked against other hospitals: the proportion of non-transferred primary PCI patients with ST-segment elevation myocardial infarction (STEMI) achieving a door-to-balloon time (DBT) of <=90 minutes (example hospital: 65%; rank 87 of 389; rank percentile 78; national distribution from a lagging 23.9% to a leading 76.9%), and risk-adjusted mortality, the hospital's PCI mortality rate adjusted using the ACC-NCDR risk-adjustment model (example hospital: 1.02%; rank 118 of 366; rank percentile 68). Reproduced with permission from Rumsfeld JS, Dehmer GJ, Brindis RG. The National Cardiovascular Data Registry: its role in benchmarking and improving quality. US Cardiology 2009;6:11-5.]
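The rank-percentile benchmarking shown in a report like Figure 3 might be computed as follows. The peer values and the percentile convention used here (the share of peer hospitals performing worse) are assumptions for illustration, not the NCDR's published method.

```python
# Sketch of percentile-rank benchmarking against peer hospitals.
# Peer values and the percentile convention are illustrative assumptions.

def rank_percentile(my_value, peer_values, higher_is_better=True):
    """Percent of peer hospitals this hospital outperforms."""
    if higher_is_better:
        worse = sum(v < my_value for v in peer_values)
    else:
        worse = sum(v > my_value for v in peer_values)
    return 100 * worse / len(peer_values)

# Door-to-balloon compliance (%): higher is better.
peers_dbt = [40, 50, 55, 60, 62, 63, 70, 72, 76, 80]
print(rank_percentile(65, peers_dbt))  # 60.0

# Risk-adjusted mortality (%): lower is better, so the comparison flips.
peers_mort = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.5, 2.0]
print(rank_percentile(1.02, peers_mort, higher_is_better=False))  # 62.5
```

The `higher_is_better` flag matters: a mortality rate must be ranked in the opposite direction from a compliance rate, or the benchmark would reward worse care.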
Clinical evidence is synthesized into clinical practice guidelines; these guidelines are condensed into performance measures, which are used for benchmarking, public reporting, and pay-for-performance. Outcome measures provide an assessment of how well this process is achieving its ultimate goals. Any of this information can be fed back into the cycle to guide and refocus quality efforts at all steps.
Clinical registries also permit the evaluation of real-world practice patterns, such as percutaneous coronary intervention (PCI) without onsite surgical backup. They also offer opportunities for post-market device surveillance, particularly for low-frequency adverse events.
Six strategies were associated with faster door-to-balloon (D2B) times: having emergency medicine physicians activate the catheterization laboratory, having a single call to a central page operator activate the catheterization laboratory, having the emergency department activate the catheterization laboratory while the patient is en route to the hospital, expecting staff to arrive in the catheterization laboratory within 20 minutes after being paged, having an attending cardiologist always on site, and having staff in the emergency department and the catheterization laboratory use real-time data feedback.26 The ACC thereby supported the national D2B Alliance to promote participation by hospitals, physician champions, and strategic partners committed to addressing the D2B challenge. Participating hospitals committed to implementing as many of the six strategies as possible. The goal of the D2B Alliance was to achieve D2B times of <90 minutes for at least 75% of non-transfer primary PCI patients with ST-segment elevation myocardial infarction in all participating hospitals performing primary PCI. This national initiative, guiding and supporting local implementation at more than 1,100 hospitals, led to a significant increase in adoption of the strategies, and there were significant improvements in D2B times nationally, reaching the goal of the initiative.27 National quality initiatives such as the D2B Alliance can serve as an important way of disseminating best practices and, thus, bolstering quality of care. Not surprisingly, these initiatives are often most successful if they are tied to national clinical registry programs so that measurement and feedback of performance can be included. They are also successful when they support the specific local quality improvement activities of hospitals and practices.
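The D2B Alliance goal described above reduces to a simple proportion check over eligible cases. The door-to-balloon times below are made up for illustration.

```python
# Sketch of the D2B Alliance target: at least 75% of non-transfer primary
# PCI patients with a door-to-balloon time under 90 minutes.
# The times below are invented example data.

d2b_minutes = [45, 62, 70, 75, 82, 88, 95, 110]  # non-transfer primary PCI cases

within_goal = [t for t in d2b_minutes if t < 90]
proportion = len(within_goal) / len(d2b_minutes)

print(f"{proportion:.0%} of patients treated within 90 minutes")
print("D2B Alliance goal met" if proportion >= 0.75 else "Goal not met")
```

With 6 of 8 cases under 90 minutes, this example hospital sits exactly at the 75% threshold; one more delayed case would put it below the goal, which is why continuous measurement and feedback matter.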
EMR = electronic medical record. Adapted with permission from Rumsfeld JS, Dehmer GJ, Brindis RG. The National Cardiovascular Data Registry: its role in benchmarking and improving quality. US Cardiology 2009;6:11-5.
[Figure 5: The Plan-Do-Study-Act (PDSA) Cycle. Plan: define the objective, questions, and predictions; plan to answer the questions (Who? What? Where? When?); plan data collection to answer the questions. Do: carry out the plan; collect the data; begin analysis of the data. Study: complete the analysis of the data; compare the data to the predictions; summarize what was learned. Act: decide whether the change can be implemented; plan the next cycle.]

Local Activities

Local quality improvement activities apply the principles outlined above, including: collection of relevant data, feedback regarding processes and outcomes of care, and implementation of appropriate interventions to improve performance and efficiency. Lessons learned about successful quality improvement from the study of high- and low-quality hospitals emphasize the following key points: data must be perceived as valid; risk-adjustment and benchmarking improve the meaningfulness of data feedback; data feedback must persist in an iterative form to sustain improved performance; and clinician champions and administrative support are critical to successful performance improvement (Figure 4).10,25 Alignment of local activities with national quality assessment and improvement
programs (such as national clinical registry programs) is also important, often providing a solid infrastructure for quality measurement and improvement. Local activities for quality improvement should be iterative and involve breaking down quality efforts into small pieces. Multiple small quality cycles should be occurring in various domains of local health care delivery, involving multidisciplinary members of the team. The quality initiatives should be supported by administration and aligned with external entities (i.e., regulatory agencies and national quality improvement initiatives). For example, a hospital with a high risk-adjusted mortality rate among its patients with acute myocardial infarction cannot rely on a single course of action to fix this problem. Instead, an integrated approach involving multiple smaller initiatives within the continuum of care is necessary. This could include community education, emergency medical services, the emergency department, the interventional catheterization laboratory, in-hospital care, transitional services, and ambulatory follow-up. Measurement within each level should target areas for improvement.

At the individual and local level, the Institute for Healthcare Improvement (IHI) promotes the Model for Improvement, developed by Associates in Process Improvement.28 The model organizes quality improvement into actionable parts:

1. Set Aims: These should be small goals that are targeted to a defined group of patients. They should be time-specific and measurable.

2. Establish Measures: Pick a quantitative measure that can determine whether a specific change leads to an improvement in quality.

3. Select Changes: All improvement requires making changes, but not all changes result in improvement. Clinicians and organizations must identify the changes they believe are most likely to result in improvement.

4. Test Changes: Once the first three fundamental questions have been answered, the Plan-Do-Study-Act (PDSA) cycle should be used to test and implement changes in real work settings. The PDSA cycle uses action-oriented learning to determine whether the change is an improvement: plan it, try it, observe the results, and act on what is learned (Figure 5).

Including the right people on a process improvement team is critical to a successful improvement effort. Teams vary in size and composition, but typically involve multidisciplinary representation.
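One PDSA iteration can be sketched as a small decision rule over measured data. The adherence outcomes, baseline rate, prediction, and "implement vs. revise" rule below are illustrative assumptions, not an IHI-specified algorithm.

```python
# Toy sketch of one Plan-Do-Study-Act iteration on a single quality
# measure. Data, prediction, and decision rule are illustrative only.

def pdsa_cycle(baseline_rate, post_change_outcomes, predicted_rate):
    """One PDSA iteration: Do (collect), Study (compare), Act (decide)."""
    # Do: collect data from the small-scale test of the change.
    observed_rate = sum(post_change_outcomes) / len(post_change_outcomes)
    # Study: compare observed data to the baseline and the prediction.
    improved = observed_rate > baseline_rate
    met_prediction = observed_rate >= predicted_rate
    # Act: implement the change, or plan another cycle with revisions.
    decision = "implement" if improved and met_prediction else "revise and re-test"
    return observed_rate, decision

# Plan: predict a new discharge checklist raises adherence from 60% to 80%.
outcomes = [True, True, True, False, True, True, True, False, True, True]
rate, decision = pdsa_cycle(baseline_rate=0.60,
                            post_change_outcomes=outcomes,
                            predicted_rate=0.80)
print(rate, decision)  # 0.8 implement
```

A real PDSA cycle would of course add clinical judgment and statistical caution to the Act step, but the structure (explicit prediction, small test, comparison, decision) is the same.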
Conclusion

For clinicians who want to deliver the best possible care, be graded and reimbursed appropriately, and maintain certification and licensure, understanding quality and being engaged in quality measurement and improvement must become central to clinical practice. Feedback of process and outcome measures provides the critical elements leading to quality improvement: if you do not measure it, you will not improve it.

Key Points

Health care quality is highly relevant to patients, clinicians, and society. The highest-quality health care is that which is effective, safe, timely, efficient, equitable, and patient-centered.

The major domains of quality assessment are structure, process, and outcomes (Donabedian's triad).

Key tools for defining and measuring quality of care include: evidence, data standards, clinical practice guidelines, quality metrics, performance measures, and appropriateness criteria.

Quality improvement requires accurate data collection, risk-adjustment and benchmarking to make performance measurement meaningful, persistent and iterative cycles of quality improvement, clinician champions, and a supportive organizational context.

National clinical registry programs such as the NCDR utilize data standards, standardized tools for data collection, risk-adjustment, and benchmarking.

References

1. Fisher ES, Wennberg DE, Stukel TA, et al. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med 2003;138:273-87.

2. Orszag PR. The Overuse, Underuse, and Misuse of Health Care: Testimony before the Committee on Finance, United States Senate, July 17, 2008. Washington, DC: Congressional Budget Office; 2008.

3. The Dartmouth Institute for Health Policy and Clinical Practice. The Dartmouth Atlas of Health Care. 2011. Available at: http://www.dartmouthatlas.org. Accessed 11/30/2011.

4. Sutherland JM, Fisher ES, Skinner JS. Getting past denial--the high cost of health care in the United States. N Engl J Med 2009;361:1227-30.

5. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.

6. Leape LL. Reporting of adverse events. N Engl J Med 2002;347:1633-8.

7. Institute of Medicine Committee on Quality of Health Care in America. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.

8. Donabedian A. Explorations in Quality Assessment and Monitoring, Volume 1. The Definition of Quality and Approaches to Its Assessment. Ann Arbor, MI: Health Administration Press; 1980.

9. Porter ME. What is value in health care? N Engl J Med 2010;363:2477-81.

10. Majumdar SR, McAlister FA, Furberg CD. From knowledge to practice in chronic cardiovascular disease: a long and winding road. J Am Coll Cardiol 2004;43:1738-42.

11. Guyatt G, Rennie D, Meade MO, Cook DJ. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 2nd ed. New York: McGraw-Hill Professional; 2008.

12. Cannon CP, Battler A, Brindis RG, et al. American College of Cardiology key data elements and definitions for measuring the clinical management and outcomes of patients with acute coronary syndromes. A report of the American College of Cardiology Task Force on Clinical Data Standards (Acute Coronary Syndromes Writing Committee). J Am Coll Cardiol 2001;38:2114-30.

13. Bufalino VJ, Masoudi FA, Stranne SK, et al. The American Heart Association's recommendations for expanding the applications of existing and future clinical registries: a policy statement from the American Heart Association. Circulation 2011;123:2167-79.

14. American College of Cardiology. CardioSource: Guidelines and Quality Standards. 2011. Available at: http://www.cardiosource.org/science-and-quality/practice-guidelines-and-quality-standards.aspx. Accessed 11/30/2011.

15. Gibbons RJ, Smith S, Antman E. American College of Cardiology/American Heart Association clinical practice guidelines: Part I: where do they come from? Circulation 2003;107:2979-86.

16. Gibbons RJ, Smith SC Jr, Antman E. American College of Cardiology/American Heart Association clinical practice guidelines: Part II: evolutionary changes in a continuous quality improvement project. Circulation 2003;107:3101-7.

17. Tricoci P, Allen JM, Kramer JM, Califf RM, Smith SC Jr. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA 2009;301:831-41.

18. Bonow RO, Masoudi FA, Rumsfeld JS, et al. ACC/AHA classification of care metrics: performance measures and quality metrics: a report of the American College of Cardiology/American Heart Association Task Force on Performance Measures. Circulation 2008;118:2662-6.

19. Spertus JA, Eagle KA, Krumholz HM, Mitchell KR, Normand SL. American College of Cardiology and American Heart Association methodology for the selection and creation of performance measures for quantifying the quality of cardiovascular care. Circulation 2005;111:1703-12.

20. Krumholz HM, Normand SL, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff (Millwood) 2007;26:75-85.

21. US Department of Health and Human Services. Hospital Compare. 2011. Available at: www.hospitalcompare.hhs.gov. Accessed 11/30/2011.

22. Peterson ED, Delong ER, Masoudi FA, et al. ACCF/AHA 2010 Position Statement on Composite Measures for Healthcare Performance Assessment: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Performance Measures (Writing Committee to Develop a Position Statement on Composite Measures). J Am Coll Cardiol 2010;55:1755-66.

23. Patel MR, Dehmer GJ, Hirshfeld JW, et al. ACCF/SCAI/STS/AATS/AHA/ASNC 2009 Appropriateness Criteria for Coronary Revascularization: a report by the American College of Cardiology Foundation Appropriateness Criteria Task Force, Society for Cardiovascular Angiography and Interventions, Society of Thoracic Surgeons, American Association for Thoracic Surgery, American Heart Association, and the American Society of Nuclear Cardiology: endorsed by the American Society of Echocardiography, the Heart Failure Society of America, and the Society of Cardiovascular Computed Tomography. J Am Coll Cardiol 2009;53:530-53.

24. Califf RM, Harrington RA, Madre LK, Peterson ED, Roth D, Schulman KA. Curbing the cardiovascular disease epidemic: aligning industry, government, payers, and academics. Health Aff (Millwood) 2007;26:62-74.

25. Brindis RG, Dehmer GJ, Rumsfeld JS. The National Cardiovascular Data Registry: its role in benchmarking and improving quality. US Cardiology 2009;6:11-5.

26. Bradley EH, Herrin J, Wang Y, et al. Strategies for reducing the door-to-balloon time in acute myocardial infarction. N Engl J Med 2006;355:2308-20.

27. Bradley EH, Nallamothu BK, Herrin J, et al. National efforts to improve door-to-balloon time: results from the Door-to-Balloon Alliance. J Am Coll Cardiol 2009;54:2423-9.

28. Langley GJ, Nolan KM, Norman CL, Provost LP, Nolan TW. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. New York: Jossey-Bass; 1996.