
http://www.tnc.or.th/knowledge/know03.html – Professional code of ethics
http://www.tnc.or.th/file_attach/10Sep200928-AttachFile1252569748.pdf
Competencies of professional nurses (Competency No. 3)
Webster's Revised Unabridged Dictionary, edited by Noah Porter, published
by G. & C. Merriam Co., 1913.
The public-domain Webster's 1913 Dictionary, edited for online use by
Patrick J. Cassidy.
http://www.answers.com/topic/accountability
The state of being accountable; liability to be called on to render an
account; the obligation to bear the consequences for failure to perform as
expected; accountableness.
Individual or departmental responsibility to perform a certain function.
Accountability may be dictated or implied by law, regulation, or agreement.
For example, an auditor will be held accountable to financial statement
users relying on the audited financial statements for failure to uncover
corporate fraud because of negligence in applying Generally Accepted
Auditing Standards (GAAS).

http://www.shrm.org/hrdisciplines/Diversity/diversity_mgmt_plan/Pages/measurement.aspx
What is measurement & accountability?
When businesses move in any new direction, they expect to know if their
efforts have achieved the desired results and, if not, who will be
responsible for correcting the methodology so that those desired results
can be achieved. There is no reason to believe that CEOs and other
business leaders should treat a diversity plan any differently. Measurement

is simply an activity that determines whether or not your efforts have been
successful. Accountability is the act of putting the responsibility for the
Diversity Plan’s success in the hands of a single person or small group.
Why is measurement & accountability important?
An attempt to shift the culture of an organization is a Herculean task.
Typically, the results that one expects from a diversity & inclusion change
initiative are visible increases of diverse employees at all levels within an
organization. However, it is important to remember that a) not all diversity
is visible, and, more important, b) increased numbers are a lagging
indicator of success. An organization can be doing everything right in
terms of building an inclusive culture that will allow people of all
backgrounds equal access to opportunities, advancement and success –
and the representation of women and people of color, particularly at senior
levels of the organization, might not significantly change until years later.
This doesn’t mean that the organization is not succeeding. But without
knowing what to measure and how to measure it, those working toward
and/or funding your diversity initiatives might grow impatient or hopeless if
measures of success are not tracked and communicated as the initiative is
implemented. It’s also possible that your efforts, well-planned and expertly
executed, are not resulting in the outcomes that you and your leaders
desire. If this is the case, it’s important to know this sooner rather than later, so
that your plan can be modified as needed. In this situation, it’s also
important to allow responsibility for the effort to rest with a single person
or small group, who are incentivized toward success and against failure, to
ensure the eventual achievements you seek.
What does measurement & accountability consist of?

For many who are working with diversity & inclusion for the first time, the
only measurements that occur to them are hard numbers: How many
women and people of color work for our organization, and how many
have senior leadership positions? These numbers are important, but it is
worth noting that they only partially measure diversity (as not all
diversity is visible and reportable) and do not measure inclusion at all. For
a more complete picture, many other items can be measured. For
instance, a Strategic Diversity Management Plan can measure who is
being sourced, recruited, hired and advanced within the organization. The
plan can also measure who is leaving after one or two years within the
organization, and how the turnover rates of women and people of color
compare with the turnover of the organization’s general population. The
plan should also examine employee engagement surveys, to track whether
or not employees feel as though they work in diverse and inclusive
environments, whether or not they believe they can advance or succeed in
alignment with their personal ambitions, and whether or not their
contributions are heard and valued within the organization. Your Strategic
Diversity Management Plan should state all of these measurement
activities and targets for success at the outset of your plan, so that those
who are working for and funding your efforts will know what to expect in
terms of project updates. If expectations are managed correctly, leaders
and workers will feel a sense of satisfaction and accomplishment with
regard to diversity & inclusion, even if the population’s visible diversity
does not immediately experience significant change.
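The turnover comparison the plan calls for is simple arithmetic. A minimal sketch, in which every group name and figure is a hypothetical illustration rather than real data, might look like this:

```python
# Compare annual turnover rates of demographic groups against the
# organization-wide rate, as the measurement plan above suggests.
# All figures below are invented for illustration only.

def turnover_rate(departures, avg_headcount):
    """Annual turnover as a fraction of average headcount."""
    return departures / avg_headcount

groups = {
    "women": (18, 120),            # (departures, average headcount)
    "people of color": (12, 60),
    "all employees": (45, 400),
}

overall = turnover_rate(*groups["all employees"])
for name, (departures, headcount) in groups.items():
    rate = turnover_rate(departures, headcount)
    note = " (above overall rate)" if rate > overall else ""
    print(f"{name}: {rate:.1%}{note}")
```

Tracking these rates on the same schedule as the engagement surveys lets the plan report early warning signs well before visible representation changes.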
Your plan should also explain who is responsible for tracking these
measurements and making course corrections if the organization appears
to be headed in the wrong direction. Business leaders expect such

clauses in any other major initiative they approve, and including them in your
Strategic Diversity Management Plan will greatly improve its chances of
gaining leadership commitment.

What Is Accountability in Health Care?


Ezekiel J. Emanuel, MD, PhD; and
Linda L. Emanuel, MD, PhD
http://annals.highwire.org/content/124/2/229.abstract
Accountability has become a major issue in health care. Accountability
entails the procedures and processes by which one party justifies and
takes responsibility for its activities. The concept of accountability contains
three essential components: 1) the loci of accountability—health care
consists of at least 11 different parties that can be held accountable or
hold others accountable; 2) the domains of accountability—in health care,
parties can be held accountable for as many as six activities: professional
competence, legal and ethical conduct, financial performance, adequacy of
access, public health promotion, and community benefit; and 3) the
procedures of accountability, including formal and informal procedures for
evaluating compliance with domains and for disseminating the evaluation
and responses by the accountable parties.
Different models of accountability stress different domains, evaluative
criteria, loci, and procedures. We characterize and compare three dominant
models of accountability: 1) the professional model, in which the individual
physician and patient participate in shared decision making and physicians
are held accountable to professional colleagues and to patients; 2) the
economic model, in which the market is brought to bear in health care
and accountability is mediated through consumer choice of providers; and

3) the political model, in which physicians and patients interact as citizen-
members within a community and in which physicians are accountable to
a governing board elected from the members of the community, such as
the board of a managed care plan.
We argue that no single model of accountability is appropriate to health
care. Instead, we advocate a stratified model of accountability in which the
professional model guides the physician-patient relationship, the political
model operates within managed care plans and other integrated health
delivery networks, and the economic and political models operate in the
relations between managed care plans and other groups such as
employers, government, and professional associations.
Copyright © 2004 by the American College of Physicians

accountability – explainable responsibility: the condition in which a person,
such as an official or an organizational executive, is able to carry out the
work for which he or she holds the necessary authority and responsibility,
and is able to answer the question of why he or she acted as he or she did,
rather than acting arbitrarily or for the benefit of himself or herself and
his or her associates. This is one of the key principles of good management,
or Good Governance.
Source: Witayakorn Chiengkul, Explanation of Social Science Terms for
Development. Bangkok: Saithan, 2007 (B.E. 2550).
Dr. Henry Holmes’s Accountability
http://www.siamhrm.com/report/article_report.php?max=488
Dr. Henry Holmes, a foreign expert with long experience in Cross Cultural
Management in Thailand, discusses Accountability in an interesting way in
his Cross Cultural Management course manual. I have had the opportunity to
co-teach Cross Cultural Management courses for companies many times, so I
would like to share the article he wrote on this subject for you to read,
as follows.
Related words with similar meanings: responsibility, readiness to accept
blame, dependability, trustworthiness, keeping one's promises, and not
shifting blame away from oneself.
ACCOUNTABILITY is a set of behaviors by which a person shows that he or
she has accepted a duty and carries it out with readiness to take both the
blame and the credit. A person assigned a task is given responsibility on a
case-by-case basis, with a clearly defined scope of duties, and when that
person agrees to the assignment, in Western culture this amounts to giving
a promise. From then on, when it comes time to perform, whether the work
turns out well or not falls entirely within that person's responsibility.
The British call this kind of responsibility "stewardship," meaning
responsibility for the performance and the results of the work one has been
assigned, including work one has delegated to others.
Accountability also covers communication. The accountable person must
recognize his or her responsibility and always be mindful of reporting, or
informing superiors of, what has happened at an early stage. Such reports
need not cover only good news; any problems or mistakes must be reported as
well. Accountability requires feedback on what has occurred in order for
work to be effective.
Accountability is not important only in business; Western societies apply
the same principle to social behavior as well.
Westerners generally agree that when a person promises to do something and
then fails to keep his or her word, that person's word loses its weight and
value. When that happens, the person is trusted less and is given fewer
responsibilities, both at work and socially.

In many Asian countries, accountable behavior tends to be practiced
collectively, by groups, which differs from the Western view that
accountability is an individual matter: when a person agrees to take on a
task, he or she must see that work through to completion, including
supervising his or her own subordinates.
In working or socializing with others, if the members of a group share the
same accountability behavior, the group is spared the uncertainty and
frustration that can otherwise arise.
Characteristics of Accountability, with example skills for each:

Know the scope of your responsibility.
· Discuss the scope of the work with your supervisor in specific terms,
asking him or her to state clearly what you are expected to do.
· Check that the assigned work is clear and complete.

Plan how the work will be carried out.
· Draw up a work plan that is definite and dependable.
· Explain the plan to your boss and your subordinates.

Give your word, and be bound by it, to see the work through.
· Do not say yes until you are sure you can complete the task.
· Be firmly determined to succeed.

If you are a supervisor, give your subordinates clear orders.
· Check whether the orders you give are clear.

Inform your supervisor when unexpected events may occur that could keep the
work from succeeding as hoped, whether in quantity, quality, time, or
manpower.
· Go directly to your supervisor at once.
· Avoid last-minute notification.
· If you suspect something may go wrong, keep your supervisor informed of
developments.

Be ready to take both the blame (e.g., criticism) and the credit for work
done by your subordinates.
· Do not pass the buck or blame subordinates (do not shift the fault onto
them alone).

When a problem arises, report to your supervisor immediately, with
alternatives or backup solutions prepared (remember that your boss must in
turn answer to bosses higher up the chain).
· Have an answer ready, such as: "I can't answer that right now, but I will
find the information and give you an answer by 3 p.m. today."

Keep the deadlines you have set (finishing ahead of time is even better).

Think ahead and plan for long-term responsibility (a person must be
accountable for what will happen in the future as well).
· Always keep a contingency plan in case something goes wrong.

Keep your promises and keep your word in social dealings with others.
· Accept or decline invitations promptly.
· Arrive on time for appointments, whether for dinner, golf, tennis, or
shopping.
· In a case of force majeure, make every effort to inform the other party in
advance.

Be accountable with respect to meetings.
· Prepare your material before every meeting.
· Arrive at meetings on time (this shows accountability to the other
participants).
· Present the part you have prepared to the meeting.
· When the meeting ends, commit to taking responsibility for the work you
proposed to it.

Professional Accountability
Guest Author - Colleen Moore, RN
http://www.bellaonline.com/articles/art57183.asp
Professional accountability applies to everyone involved in health care.
Accountability is a legal obligation; in health care it is also an ethical and
moral responsibility. Within the realm of professional accountability, there
are many factors.
Assuming responsibility for one’s own nursing practice is the most
important. The American Nurses Association (ANA) states in its code that
the nurse will assume accountability for nursing judgment and actions.
A professional nurse has the responsibility to practice within his/her scope
of care, calling upon his/her knowledge and skills to make decisions in the
best interest of the patient.
The level of responsibility and accountability depends on professional
levels. The Charge Nurse has more responsibility than the staff nurse, the
RN has more responsibility than the LPN, and therefore their levels of
professional judgment and practice are different. Their levels of
professional accountability are not different.
Professional nursing is based on altruism, integrity, accountability and

social justice. Judgments and practice that are based on those ethical
values will always be in the best interest of the patient, no matter what
the professional level.
The definition of altruism: individuals have the ethical obligation to serve
others without self-interest. The nurse who comes from an altruistic place
will make decisions that are in the best interest of the patient: a patient
advocate.
Case Study:
A physician has told a patient they have metastatic pancreatic cancer and
there is no cure. The physician is a general practitioner. The patient is
devastated. Does the nurse tell the patient to accept the doctor's
diagnosis? Does the nurse tell the patient and family to insist on an
Oncology consult? The nurse has the obligation to act in the best interest
of the patient. This nurse specializes in Oncology, so, using the knowledge
that she has, she makes a judgment and tells the family to insist on
getting an Oncologist on board.
The Oncologist may not offer a better outcome but he may have options
for a better quality of life. The nurse has used her integrity. She has not
given false hope but she also has not destroyed all hope.
The Oncologist suggests chemotherapy. He explains to the patient that it
is not a cure but may extend her life. The patient has agreed to the
treatment.
The nurse has never initiated chemotherapy. Does she give it? If she
does, she will have to be accountable for her actions. She uses her
judgment and seeks out the Charge Nurse (CN), who is chemotherapy
certified, to hang the medicine. If the nurse had hung the medicine, she
would have been practicing outside her scope. Her action to involve the

charge nurse was the most responsible decision for the patient.
Nine months later, the patient comes back into the hospital. The
chemotherapy is no longer working. The patient is in terrible pain. She is
very thin and can no longer eat.
The physician wants to put in a feeding tube.
The family tells the nurse they are not ready to lose their mother yet.
They have talked her into consenting for a feeding tube placement.
After the family leaves, the patient tells the nurse that she is tired and
always in pain. She wants to die peacefully. The patient says her family
will not listen to her when she speaks of dying. She is frustrated because
she wants to spend the last moments at peace with her family.
Does the nurse discuss Hospice with the patient?
She knows the physician is not a big supporter of hospice and the family
is not ready. The obligation is to the patient. The nurse tells the patient
about Hospice. The patient tells the nurse to please get the order so she
can speak to them. She does and Hospice helps them all get to the point
where they can say good-bye. The patient can now die with dignity. The
nurse called upon her knowledge and experience to do what was in the
best interest of the patient. Ethically and legally she made the right
decisions.
What if the patient was a pediatric patient?
The child is tired and the family is not ready to let go.
Where does the moral and ethical responsibility lie, with the patient, the
family or to both?
It now becomes more difficult.
With all the regulatory agencies involved in health care today, the nurse
needs to be very careful about the decisions he/she makes. If the nurse had

gone directly to the child and discussed Hospice without the parents being
present, the parents could have notified the state.
If the adult oncology patient asked not to have the family involved but the
nurse ignored the patient’s wishes, it would be a Health Insurance
Portability and Accountability Act (HIPAA) violation. Nurses are at the
forefront of solving the problems which can occur due to the Health
Insurance Portability and Accountability Act.
The Joint Commission has many standards, from patient safety issues to
nursing competencies. Nurses are responsible for knowing the Patient
Safety Goals and Core Measures.
The Patient Bill of Rights affects the way we care for patients. Patients
today know their “rights” and we give them the phone number of the
state if they think these rights have been violated. This is just one of the
ways in which the future will impact nurses’ ethical and legal
responsibilities.
The nursing shortage will also impact the future of health care.
As time goes forward, non-licensed individuals will play a bigger part in
actual patient care. This will put even more legal and ethical
responsibility on the RN. He or she will be responsible for overseeing more
patients and skilled support staff, e.g., LPNs and CNAs. The RN will serve
as a leader in the health care industry.
In the future, continuing education will play a big part in the development
of the professional RN. Many facilities have clinical ladders in place for
their registered nurses and offer incentives for the nurse to further their
education. The future of the professional nurse is uncertain at this time
but the possibilities are incredible.

Professional accountability for diabetes care in Taiwan, 08 August 2005
Fen-Yu Tseng, Mei-Shu Lai, Ci-Yong Syu, Cheng-Ching Lin
Diabetes Research and Clinical Practice
February 2006 (Vol. 71, Issue 2, Pages 192-201)
Abstract
This study examined the performance of diabetes care measures in
Taiwan and evaluated the factors influencing professional accountability.
We analyzed the year 2001 claims data from the National Health Insurance
(NHI) program, Taipei Branch. Professional accountability for diabetes
care was measured by adherence to laboratory monitoring, from either the
patient or the hospital viewpoint. Identifying the major care unit for each
patient, a multiple logistic regression model was used to further assess the
mixed effects of patient and hospital characteristics. The percentages of
patients who ever received measures during the year for plasma glucose, A1C,
urinalysis, renal function test, lipid profile, liver function test, and eye
ground examination were 76.3, 42.7, 40.2, 59.7, 59.2, 53.2, and 16.8%,
respectively. About 19.2% of patients never received any of the measures.
Patients on hypoglycemic, anti-hypertensive or anti-hyperlipidemic agents,
and those with hospitalization, emergency service visits or frequent visits,
were more likely to receive exams. Hospitals with different levels,
ownerships, locales or qualifications as diabetes care institutions presented
different accountability for diabetes care measures. After regression, counts
of visits and levels of hospitals had persistent effects on all the measures.
Our analysis revealed sub-optimal diabetes care in Taiwan and underscored
the importance of enhancing care quality in primary settings.
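The abstract's headline figures, the share of patients who ever received each laboratory measure and the share who received none, are simple set computations over claims records. A minimal sketch follows; the field names and sample records are invented for illustration and are not the NHI claims schema:

```python
# Share of patients who received each monitoring measure at least once
# during the year, computed from per-claim records (invented sample data).
claims = [
    {"patient": "P1", "measure": "plasma glucose"},
    {"patient": "P1", "measure": "A1C"},
    {"patient": "P2", "measure": "plasma glucose"},
    {"patient": "P3", "measure": "lipid profile"},
]
patients = {"P1", "P2", "P3", "P4"}  # P4 appears in no claims

def coverage(claims, patients, measure):
    """Fraction of patients with at least one claim for the given measure."""
    received = {c["patient"] for c in claims if c["measure"] == measure}
    return len(received) / len(patients)

print(f"plasma glucose: {coverage(claims, patients, 'plasma glucose'):.1%}")

# Share of patients who never received any measure (19.2% in the study):
monitored = {c["patient"] for c in claims}
never = len(patients - monitored) / len(patients)
print(f"never monitored: {never:.1%}")
```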
Evaluation using the Results Accountability strategy

http://www.ahelp.org/Evaluation/RA.aspx
Mark Friedman wrote a book (Trying Hard is Not Good Enough) in which
he presents the Results Accountability method. It involves program
evaluation, community status assessments, and management strategies.
This page presents ideas in the book, from the www.raguide.org website,
and from the workshop workbook.
Results Accountability manual: Results Accountability Decision-Making and
Budgeting Workshop Workbook, version 1.7 (1/1/2007 Word document)
Results Accountability website: http://www.raguide.org
Key Concepts in the Results Accountability strategy
Accountability means being responsible to somebody for something. Mark
Friedman, who developed the Results Accountability strategy, notes that
managers are responsible for their programs; they have performance
accountability. The connection between program performance and
community well-being may not be direct; population accountability, or work
to improve the well-being of the entire community, is shared among many
individuals and programs.
Planning an evaluation using Results Accountability works backward from
where you would be if you achieved your goals:
• A result or outcome is a desired condition.
• An indicator or benchmark is a way to measure progress toward a
result.
• Baselines are (a) the values of indicators at the beginning of a
program and (b) the predicted changes in the indicators that would
happen without the program.

• Strategies are what works to change indicator values and improve
well-being.
Turning the curve is a way to define success. It happens if, during or
after a strategy is used, an indicator's status is different from the
baseline and closer to the result.
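"Turning the curve" can be checked mechanically by comparing observed indicator values against the baseline forecast. A minimal sketch, with invented numbers, assuming a lower indicator value is closer to the desired result:

```python
def turned_the_curve(forecast, observed, lower_is_better=True):
    """True if the latest observed indicator value beats the
    no-program baseline forecast for the same period."""
    diff = forecast[-1] - observed[-1]
    return diff > 0 if lower_is_better else diff < 0

# Hypothetical indicator: teen smoking rate (%) over four years.
forecast = [22.0, 22.5, 23.0, 23.5]   # baseline: predicted without the program
observed = [22.0, 22.3, 22.1, 21.4]   # measured while the strategy ran

print(turned_the_curve(forecast, observed))  # True: observed fell below forecast
```

Comparing the latest point is the simplest check; a fuller analysis would compare the whole observed series against the forecast trend.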
Mark Friedman suggests that Results Accountability is only worthwhile if it
produces useful information. Results Accountability only works if it
supports action to improve the well-being of a group of people.
For all involved, the experience of using Results Accountability should
be simple and logical; they should find that it employs common sense,
uses plain language, and involves a minimal use of paper.
Step-by-step explanations of Results Accountability components
Using the list below, click on the topic you would like to read about.
Each component has several parts, which are listed at the top of their
pages.
Population Accountability: How is a specific population in a defined
geographic area, including individuals who did and did not receive certain
services, doing on the most important indicators? Responsibility for
population-level change is shared among many individuals and programs.
[Tools]
Performance Accountability: What has a program, agency or service
system accomplished for the people it has served? [Tools]
The population and performance accountability approaches use parallel
processes; they differ in scope. Performance accountability assesses
change in the well-being of a program’s customers, while population
accountability looks at the well-being of an entire community. Generally,
the steps are:

• Define the focus of the assessment (decide on a target population or
customer group)
• Decide what you want to achieve (for the target population or your
customers) and select evaluation criteria (performance measures or
indicators)
• Collect data
• Check to be sure that all of your important partners are engaged
• Review results, collect and review strategies to improve progress
• Make an action plan
• Review progress
Example
See the linked single-indicator example of an evaluation using the
performance accountability part of this strategy. [example]
Results Accountability
Program evaluation outcome example (Results Accountability style) –
CDPHP Worksite health promotion collaboration
In early 2006, the Alaska Section of Chronic Disease Prevention and
Health Promotion (CDPHP) started working with three small businesses to
help them develop comprehensive worksite health promotion programs.
The purpose of this three-year CDPHP pilot project is to learn about
components essential to the success of small business worksite health
promotion programs. CDPHP is providing technical assistance to these
companies in partnership with a health insurance company
that started including disease management services in its small business
health insurance plan that year.
The customers for this pilot project evaluation are the business leaders
and worksite wellness team leaders of the three companies involved in the

project and the health insurance company. Secondary customers include
the Centers for Disease Control and Prevention (CDC) and others
interested in worksite health promotion.
Friedman’s Results Accountability method uses a matrix to group key
evaluation questions:

                   Quantity (numbers)          Quality (proportions)
Effort (process)   HOW MUCH DID WE DO?         HOW WELL DID WE DO IT?
Effect (product)   IS ANYONE BETTER OFF?
                   How much change or effect   What proportion of the total
                   did we produce?             was involved in the change or
                                               effect?

Questions in capitals are key.
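The four quadrants of the matrix reduce to two counts and two proportions. A minimal sketch with invented figures for a hypothetical cessation program:

```python
# Friedman's four quadrant measures, applied to hypothetical program data.
served = 200          # quantity of effort: clients served
served_on_time = 180  # of those, served within the target waiting time
better_off = 60       # quantity of effect: clients whose status improved

print(f"How much did we do?    {served} clients served")
print(f"How well did we do it? {served_on_time / served:.0%} within target")
print(f"Is anyone better off?  {better_off} clients "
      f"({better_off / served:.0%} of those served)")
```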
How much did CDPHP do?

Before starting the pilot project, a group of CDPHP staff designed an
evaluation process which included carefully recording the numbers of
contacts and amount of time that CDPHP spent providing technical
assistance to the worksites. The worksites were asked to keep similar
records, but that proved to be very difficult for them to do, because
worksite health promotion tasks were so frequently scattered throughout a
worker’s day and/or were shared by several individuals.

The frequency of contact between the lead CDPHP pilot project staff and
the companies, and the resources offered to them, varied depending on company
circumstances and the time that the company wellness team leaders had
available for this work. CDPHP pilot project staff spent about four hours
during quarterly visits with two worksites, reviewing progress, discussing

problems and supporting action on the next steps forward. The third
company had more internal resources; CDPHP had an initial three-hour
visit there, with an additional hour spent on follow-up during the
subsequent year.
How well did CDPHP do it?
Friedman recommends compiling a data development agenda as part of
the process of identifying performance measures. Drafting this sample
identified several questions that CDPHP needs to ask representatives of
the pilot project worksites, including:
• Did they get the help they needed when they needed it?
• Were CDPHP’s expectations of the worksites clear? If not, how
could they have been clarified?
• Was the technical assistance that CDPHP provided helpful?
• What did CDPHP do that was most helpful? Least?
• What do they want in the future?
Is anyone better off?
Worksite health promotion programs can operate on a number of levels,
including:
• Efforts to increase employee awareness of healthy behaviors – such
as posters, newsletters and lunch-and-learn seminars,
• Activities that encourage employees to adopt healthier behaviors –
such as sponsoring an employee team, a weight-loss challenge, and
offering fitness club memberships, and
• Changing the work environment or implementing policies that make it
easier for employees to choose healthier behaviors – such as
offering flex time to encourage employees to exercise, establishing a

tobacco-free workplace, and changing meeting-fare from donuts and
soft-drinks to fruits, vegetables and water.
Comprehensive worksite health promotion programs work on all three
levels. To some degree, companies need to make a greater up-front
investment in environmental or policy change than would be required for
employee awareness efforts or sponsoring healthy behavior activities.
Consequently, CDPHP decided to use the number of environmental
changes or policies adopted as a measure of the success of this pilot
project.
After one year, six environmental change policies had been adopted. Two
of the three pilot project worksites had made at least one policy change.
Comment
The pilot project generated useful information during its first year.
• The need for commitment to and investment in worksite wellness
was underscored by the significant strides made by one of the
worksites, which was already moving toward worksite health
promotion when it joined the project.
• The value of worksite wellness teams was demonstrated when the
individual responsible for health promotion at one of the worksites
spent months in the midst of a response to a company crisis.
Although health promotion activity would have been very unlikely
during the crisis, teamwork might have helped maintain progress that
had been made before the crisis happened.
• The merit of strategic planning for implementing worksite health
promotion programs was also evident. While business planning is central to a
company’s success, the pilot worksites needed assistance when they
applied planning strategies to employee health promotion.

Another important and unanticipated change during pilot project year 1
was the identification of a partner that sponsored worksite health
promotion training in September 2006 and follow-up in the community
where two of the three pilot project worksites are located. The pilot
project sites in this community have benefited from the development of a
group of small businesses that are working together and supporting one
another in developing comprehensive worksite wellness programs.
Action Plan

Friedman’s final step in performance accountability is the statement of an
action plan.

CDPHP will continue to support the pilot project worksites, although one of
them has since changed health insurance providers. A fourth worksite
was added in year 2.
CDPHP is developing tools for small businesses to use in their worksite
health promotion programs. These tools are integral parts of current and
planned replications of the small business worksite wellness training and
follow-up meetings that started in September 2006. CDPHP is also
seeking small business worksite health promotion champions in other
Alaska communities. As these champions are identified, plans to replicate
the training and follow-up in those places will be developed.
Performance Accountability
Performance accountability highlights the impact that a program, service or
activity may have had. To work well, it must:
• Make sense to users
• Provide useful information to managers
• Focus on the most important measures of customer well-being

• Not waste paper or depend on heavy reports, and
• Help you move from talk to action to improve performance
Programs exist to improve the well-being of a specific group of people, or
target population. Most programs can only help a small portion of their
target population. Although program managers are accountable for
changes directly connected to their programs, they share responsibility for
the well-being of the entire target population with other programs and
resources.
The performance accountability process is presented using Mark
Friedman’s seven questions, with background information added at three
stages. The step-by-step discussion follows this path:
Background stages and their performance accountability questions:
• Getting Started: Who are your customers?
• Introduction to performance accountability measures: How can you
measure if your customers are better off? How can you measure if your
program is delivering services well?
• Select the most important measures: How are you doing on the most
important of these measures? Who are the partners that have a role to
play in doing better? What works to do better, including no-cost and
low-cost ideas? What do you propose to do?
Getting Started
Before you can answer the first question, you need to describe exactly
what you are going to evaluate. You could focus on a program, function
or part of an organization. Since this process is designed to hold
managers accountable, you might use an organizational chart as you
define the scope of your evaluation.
Who are your customers?
Your customers are the people who could be made better or worse off by
your program or activity.
If your program causes change, it will affect some people directly and
then, as their lives change, it will affect a larger group as well. For
example, if your program helps people stop smoking cigarettes, it will help
both the people who quit (your customers) and those who breathe less
second hand smoke because those people quit. Your program will have
helped both groups become healthier.
Many programs have more than one group of customers, and the groups
may have different interests. For example, a clean indoor air coalition
may include advocates who want to reduce exposure to second-hand
smoke and advocates who want to help people with asthma reduce dust,
mold and mildew in their homes and worksites.
Introduction to Performance Accountability Measures
Performance measures are used to define your program's success. In the
Results Accountability strategy, indicators are different from performance
measures because they are used to assess the well-being of the entire
population, which includes your customers as a sub-group.
• Performance measure data are collected from a group of people that
you know: your customers.
• You are more likely to have access to high-quality data about your
customers, so you may be able to produce precise information for your
performance measures.
• An indicator may be very similar to a performance measure, but you
may not know most of the people who give you data because you
collect these data about an entire population.
• You are more likely to have not-so-good data about your community,
so you may have to depend on substitute measures and/or estimated
information for your indicators.

The Results Accountability method is built on four concepts: quantity,
quality, effort and effect. When made into a table, these concepts support
five questions.

                 Quantity (numbers)           Quality
Effort           How much did we do?          How well did we do it?
Effect           How much change did we       Did the change we caused
                 produce?                     make a difference?
                                              Is anyone better off?

Of these questions, the most important is at the bottom - Is anyone better
off?
The questions on top are the easiest to answer and have the best data -
How much did we do? How well did we do it?
By itself, the least important question for an evaluation is - How much
change did we produce? The problem is that this information has no
context. You and your users need more than raw numbers to understand
if the change that was produced was important. This question is
necessary, though, because it gives you part of the answer to - Did the
change we caused make a difference?
As with population accountability indicators, performance measures have
two definitions:
• The lay definition is one that anyone could understand.
• The technical definition is very specific. It defines who or which
activities are counted for the measure (the numerator, if it is a
percentage). If it is a percentage, the technical definition also
defines who or which activities are counted in the denominator.
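As a minimal sketch, the technical definition of a percentage measure boils down to naming a numerator and a denominator. The measure and the counts below are invented for illustration, not taken from the program data above:

```python
# Sketch of a percentage performance measure built from a technical
# definition. All names and counts here are hypothetical illustrations.

def percentage_measure(numerator: int, denominator: int) -> float:
    """Return the measure as a percentage; the technical definition
    decides exactly who or what is counted in each argument."""
    if denominator == 0:
        raise ValueError("denominator must be greater than zero")
    return 100.0 * numerator / denominator

# Lay definition: "the share of coalition members who used their survey results"
# Technical definition (assumed): numerator = members who used employee
# interest survey data in their programming; denominator = members who
# conducted an employee interest survey.
print(percentage_measure(9, 12))  # 75.0
```

Writing the two counting rules down explicitly is what makes the measure reproducible from one reporting period to the next.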

How can you measure if your customers are better off?
Nearly all of the measures of the effects of the program will focus on
changes in skills or knowledge, attitude, behavior or circumstances, with
the latter including changes in the environment or policy.
You could start thinking about measures using two questions:
• If your program does a really good job, how are your customers'
lives better?
• If your program does a really poor job, how would your customers'
lives be worse?

Examples for measures of effect - Is anyone better off?
All of the examples below are effect measures (what happened because
you did it?).

Skills/knowledge measure
• How much change did we produce? # of worksite wellness coalition
members that conduct an employee interest survey
• Did the change we caused make a difference? % of worksite wellness
coalition members that use employee interest survey information in their
worksite health promotion programming (using acquired skills and
knowledge)

Attitude measure
• How much change did we produce? # of teens who say they do not want
to ride in a car if the driver is drunk
• Did the change we caused make a difference? % of teens who report
being passengers in cars with drunk drivers

Opinion measure
• How much change did we produce? # of clients who say that the
services they received were helpful
• Did the change we caused make a difference? % of clients who say they
were helped by the services received

Behavior measure
• How much change did we produce? # of Living Well Alaska participants
who say they know how to use personal action plans at the end of a class
• Did the change we caused make a difference? % of Living Well Alaska
participants who report that they used a personal action plan in the
previous two weeks, when called six months after the class ended

Environment or policy measure
• How much change did we produce? # of communities that require
motorcyclists to wear a helmet
• Did the change we caused make a difference? rate of hospital
admissions for motorcycle-related brain injuries in these communities
(hospital admissions for motorcycle-related brain injuries ÷ number of
hospital admissions)

Measures of circumstance
• How much change did we produce? # of tobacco-free worksites
• Did the change we caused make a difference? % of workers reporting
exposure to second hand smoke at work

At this stage, your task is to come up with lots of measures. This is a
good opportunity to work with others to make sure you consider all of the
options. You could include program managers and executives, staff,
customers, and community leaders.
Some will be more important than others; you will decide later which
measures you will use.
How can you measure if your program is delivering services well?
The measures used in this part focus on program activities and the quality
of the services provided or materials developed. Program managers and
staff may be particularly helpful in creating these lists.
Measures for - How much did we do?
These measures are counts - of your customers or the activities, materials
or services your program produced.
• When you think about your customers, ask if you need to count
them in groups, such as by age, sex or health problem.
• You can turn your activities and services into counts, too. For
example, a food pantry that collects and distributes meals could
count the pounds of food collected and/or the number of meals of
food distributed.
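The counting idea above can be sketched in a few lines. The customer records and the age bands are invented for illustration only:

```python
# Turning raw customer records into "How much did we do?" counts,
# broken out by group (here, an invented age band).
from collections import Counter

customers = [
    {"name": "A", "age": 34},
    {"name": "B", "age": 67},
    {"name": "C", "age": 41},
    {"name": "D", "age": 72},
]

def age_band(age: int) -> str:
    """Group customers the way a report might: under 65 vs. 65 and over."""
    return "65 and over" if age >= 65 else "under 65"

counts = Counter(age_band(c["age"]) for c in customers)
print(counts["under 65"], counts["65 and over"])  # 2 2
```

The same pattern works for counting activities or services instead of people, such as pounds of food collected or meals distributed.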
Measures for - How well did we do it?

You may already use some measures of quality because of regulations or
other program requirements.
• Are there customer service measures that are often used by
programs like yours?
• When you think about your program's activities, materials or
services, how do you usually talk about their quality? Some
common strategies include timeliness, accuracy, ease-of-
understanding, or completeness.

Examples of measures of effort (what did you do?)

Measures using numbers (#) served
• How much did we do? # of people with diabetes listed in a diabetes
registry
  How well did we do it? % of Hispanic patients with diabetes receiving
services in Spanish
• How much did we do? # of businesses involved in a small business
worksite health promotion coalition
  How well did we do it? % of members of the small business worksite
health coalition that were highly satisfied with it
• How much did we do? # of schools that received the School Wellness
Toolkit
  How well did we do it? % of schools with the School Wellness Toolkit
that report using it to write a school wellness policy

Measures using activities or products
• How much did we do? # of patients with diabetes that had an annual flu
shot
  How well did we do it? % of patients with diabetes that had an annual
flu shot
• How much did we do? # of people enrolled in a Living Well Alaska class
  How well did we do it? % of the people that started a Living Well
Alaska class and were at five or more meetings of the class
• How much did we do? # of publications produced
  How well did we do it? % of publications for customers with limited
reading skills

At this stage, your task is to come up with lots of measures. Some will
be more important than others; you will decide later which measures you
will use.
Glossary of evaluation terms and concepts:
Evaluate: 1. To determine or fix the value, 2. To examine carefully.
(Webster's II New Riverside University Dictionary ©1988; Houghton Mifflin
Co.)
Types of evaluation:
Program evaluation has different names, depending on its purpose
and context. Different people may use different terms with similar
meanings. Formative and process evaluations have a roughly parallel
purpose, as do summative and outcome evaluations.
Formative evaluation assesses the worth of a program while the program
activities are forming or happening. Formative evaluation focuses on the
process.
(http://www.sil.org/linguaLinks/literacy/ReferenceMaterials/GlossaryOfLiteracyT
erms/WhatIsFormativeEvaluation.htm, printed 9/19/07)
Outcome evaluation looks at impacts/benefits/changes to your clients (as a
result of your program(s) efforts) during and/or after their participation in
your programs. Outcome evaluation may examine these changes in the

short-, intermediate- and/or long-term. (United Way;
http://www.managementhelp.org/evaluatn/outcomes.htm, printed 9/19/07)
Process evaluation considers the extent to which your treatment service or
system is serving the people for whom it was intended, as well as
program operation and delivery. (WHO;
http://whqlibdoc.who.int/hq/2000/WHO_MSD_MSB_00.2e.pdf, printed 9/19/07)
Summative evaluation determines the worth of a program at the end of
the program activities. The focus is on the outcome.
(http://www.sil.org/linguaLinks/literacy/ReferenceMaterials/GlossaryOfLiteracyT
erms/WhatIsSummativeEvaluation.htm, printed 9/19/07)
Robert Stake: "When the cook tastes the soup, that’s formative; when the
guests taste the soup, that’s summative."
(http://jan.ucc.nau.edu/edtech/etc667/proposal/evaluation/summative_vs._form
ative.htm, printed 9/18/07)
Trochim WMK. Research Methods Knowledge Base
http://www.socialresearchmethods.net/kb/intreval.php - a nice discourse on
evaluation types and the kinds of assessment activities that fall under the
formative and summative evaluation headings.
Measure: n. ... 5. A device, as a marked tape or a graduated container,
used for measuring; 6. An act of measurement; 7. A basis of comparison:
CRITERION ....
v. 1. To determine the dimensions, quantity or capacity of.... (Webster’s II
New Riverside University Dictionary ©1988; Houghton Mifflin Co.)
Types of measures:
There are also lots of terms for the key criteria used in program
evaluation. Although there may be some variation, these concepts may

be used in all kinds of evaluations, regardless of the purpose or context
of the assessment.
Activity for evaluation purposes means a measurable component that is
produced by the program being assessed, such as a meeting. Products
and services may be other measurable components.
Benchmark is a comparison value used to assess progress on a certain
measure. It is often the starting point, or baseline. For example, one
Healthy Alaskans 2010 objective is to increase the proportion of adults
aged 18 or over with diabetes that have at least an annual foot
examination; its 1999 baseline is 79%.
Goal is a broad statement about what a program hopes to achieve. A
well-written goal describes the change that will have happened because
of the program’s success. For example, one of the goals of the Alaska
Tobacco Prevention and Control Program is to eliminate exposure to
environmental tobacco smoke. (A good guide for developing goals and
objectives is:
http://www.cdc.gov/dhdsp/state_program/evaluation_guides/smart_objectives.ht
m, printed 9/19/07)
Impact describes the anticipated long-term change that happens in a
community because a program has been successful. For example, if a
program successfully increases the proportion of adults that exercise
regularly from 25% to 50%, one long term impact might be that more
children and teens exercise regularly as physically active family events
become more common. In logic models, impact describes the furthest
ripples that correspond to a program’s outputs.
Incidence is a measure used in public health. It is the number of newly
identified individuals with a certain characteristic divided by the total

number of people that could have had that characteristic. For example,
630 of the 10,372 babies born in Alaska in 2005 were low birthweight, for
a low birthweight incidence rate of 6.0%.
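The incidence arithmetic can be run directly. Note that a crude calculation on the quoted figures gives 6.1%, a shade above the published 6.0%, so the source presumably applied its own rounding or denominator conventions:

```python
# Incidence as defined above: newly identified cases divided by the
# number of people who could have had the characteristic.

def incidence(new_cases: int, population_at_risk: int) -> float:
    return 100.0 * new_cases / population_at_risk

# Low birthweight among Alaska births in 2005, using the figures in the text.
print(round(incidence(630, 10372), 1))  # 6.1
```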
Indicator is a specific characteristic or event that is used to assess
change. Well-defined indicators have a self-evident connection to the
program objectives, measure an important aspect of program success,
focus on something that can be changed, and are easily counted. For
example, one indicator for the severity of diabetes complications is the
number of reported lower extremity amputations in a certain period of
time.
Input refers to the specific investments made by a program or agency to
generate the activities, products or services that will cause a result.
Inputs are often personnel time, financial support or raw materials. Logic
models often use inputs, activities, outputs, outcomes and impact to
organize and specify assumptions about a program.
Measure refers to the specific characteristics or events that are used to
describe a program. Some measures are indicators, but others might be
used to identify a program’s target population or the community in which it
takes place. One of the measures monitored by the AK Diabetes
Program is the percentage of adult Alaskans with diabetes who report
having had a flu shot during the previous year.
Logic models link program investment (inputs) to products (outputs,
including activities, services or materials) to anticipated change in the
target population (outcomes); models may extend to anticipated change in
the larger community (impact). Logic modeling can be a very useful tool
for clarifying assumptions about what would happen between steps, for
developing evaluation questions, and for identifying indicators. There are

several websites with good information on how to develop and use a logic
model ([logic model tools]).
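One way to see the chain a logic model describes is as a simple ordered mapping. Every entry below is an invented placeholder, not a real program's model:

```python
# A logic model as an ordered mapping from inputs through impact.
# All entries are hypothetical illustrations.

logic_model = {
    "inputs":     ["staff time", "grant funding", "training materials"],
    "activities": ["deliver worksite wellness trainings"],
    "outputs":    ["12 trainings held", "150 employees trained"],
    "outcomes":   ["trained employees adopt personal action plans"],
    "impact":     ["healthier behavior across the wider community"],
}

# Reading the model top to bottom surfaces the assumptions between steps,
# which is where evaluation questions and indicators come from.
for step, items in logic_model.items():
    print(f"{step}: {'; '.join(items)}")
```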
Objectives are the stepping stones toward a goal. Well-written objectives
are very specific statements about who will be involved, how much of
what will happen and when it will be completed.
SMART objectives are specific, measurable, achievable, relevant and
time-limited.
Generally speaking, SMART objectives make program evaluation much
easier.
(see:
http://www.cdc.gov/dhdsp/state_program/evaluation_guides/smart_objectives.ht
m, printed 9/19/07)
Outcome is an anticipated result of program activities. Most outcomes are
short- (less than six months) or intermediate-term (one to five years).
Generally, logic models use outcomes to describe anticipated changes in a
program’s target population. Long-term outcomes and impacts may refer
to changes anticipated in a similar period of time (more than five years).
Outputs are the actions taken by a program that are intended to cause a
desired change. In logic models, an output may be an activity, service,
or product, such as patient education materials.
Prevalence is another measure used in public health. It refers to the total
number of people who have a certain characteristic as a proportion of the
total population, regardless of when they started having it. For example,
731 of 2778 Alaskans said in 2005 that they had ever been told by a
doctor that they had arthritis for a prevalence of 23%. (Health risks in
Alaska among adults.

http://www.hss.state.ak.us/dph/chronic/hsl/brfss/pubs/BRFSS0405.pdf, p 27
(printed 9/21/07))
Product for evaluation purposes means a measurable component that is
produced by the program being assessed. One of the products of the
Alaska Arthritis Program is the Alaska Arthritis Resource Guide. Activities
and services may be other measurable components.
Qualitative data are rich in detail and description, usually in a textual or
narrative format. Examples include data from open-ended survey
questions (e.g., “How should we solve this problem?”) or collected during
conversations. (http://www.uwex.edu/ces/tobaccoeval/glossary.html (printed
9/21/07))
Quantitative data are counted and used in calculations. Examples include
data collected from yes/no questions on surveys or reports on the
numbers of people served.
Rates are used in public health to compare populations; they are important
because they use denominators. For example, the impact of giving four
people an influenza vaccine is very different when the group that could
have been immunized includes 40 individuals (10%) or 200 (2%).
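The vaccine example is just two divisions; a quick sketch makes the role of the denominator concrete:

```python
# Rates compare the same count of events against different denominators,
# reproducing the influenza-vaccine example from the text.

def rate(events: int, denominator: int) -> float:
    return 100.0 * events / denominator

print(rate(4, 40))   # 10.0 -> four vaccinations in a group of 40
print(rate(4, 200))  # 2.0  -> the same four vaccinations in a group of 200
```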
Sample frame is the group from which a sample is taken. All of the
characteristics used to define this group should be included in the strategy
used to select the sample.
Sampling refers to the selection process. There are many, many good
discussions about how samples might be selected, including Probability
Sampling: http://www.dsf.health.state.pa.us/health/cwp/view.asp?
a=175&q=202058

Service for evaluation purposes means a measurable component that is
produced by the program being assessed. Activities and products may be
other measurable components.
Target is another comparison value used to assess progress on a certain
measure.
Value is the quantity of an evaluation measure at a particular time. It
may be a percentage, a number or even a date.
Other online glossaries of evaluation terms:
Local Program Evaluation in Tobacco Control Glossary of Evaluation
Terms. University of Wisconsin Cooperative Extension Program
Development and Evaluation
http://www.uwex.edu/ces/tobaccoeval/glossary.html (printed 9/21/07)
This webpage uses straightforward definitions and examples.
The Evaluation Center. List of Evaluation Glossary Terms. Western
Michigan University. http://ec.wmich.edu/glossary/glossaryList.htm (printed
9/21/07)
There are 571 terms in this one, which seems to be intended mostly
for educators.
Program Evaluation Glossary. Environmental Protection Agency.
http://www.epa.gov/evaluate/glossary.htm (printed 9/21/07)
The reading level on this webpage is pretty high, but it may contain
terms not found on the others.
Center for Program Evaluation. Glossary. US Bureau of Justice
Assistance. http://www.ojp.usdoj.gov/BJA/evaluation/glossary/index.htm
(printed 9/21/07)

Quality Measurement and Accountability for Substance Abuse and Mental
Health Services in Managed Care Organizations
Levy Merrick, Elizabeth PhD, MSW; Garnick, Deborah W. ScD; Horgan,
Constance M. ScD; Hodgkin, Dominic PhD
Medical Care:
December 2002 - Volume 40 - Issue 12 - pp 1238-1248
Original Articles
Abstract
Objectives. To analyze managed care organizations' (MCOs') use of
behavioral health quality management activities using nationally
representative survey data.
Materials and Methods. The primary data source is the Brandeis Survey
on Alcohol, Drug Abuse, and Mental Health Services in MCOs. Using a
sampling strategy designed for national estimates, we surveyed 434 MCOs
in 60 market areas (response rate = 92%) regarding their commercial
products' behavioral health services in 1999. Of these, 417 MCOs reported
clinically oriented information for 752 products. We investigated the use of
four behavioral health quality management activities: patient satisfaction
surveys, clinical outcomes assessment, performance indicators, and
practice guidelines. χ² tests and logistic regression were used to determine
effects of product type (HMO, PPO, point-of-service) and behavioral health
contracting arrangement (specialty contract, comprehensive contract
including general medical and behavioral health, internal provision).
Results. Three-quarters of products used patient satisfaction surveys
(70.1%), performance indicators (72.7%), and practice guidelines (73.8%)
for behavioral health. Under half (48.9%) assessed clinical outcomes. HMO
products were most likely, and PPOs least likely, to conduct activities.

Quality activities were significantly more common among specialty-contract
products. Logistic regression showed significant negative effects on quality
activity use for PPO and POS products compared with HMOs. For clinical
outcomes, specialty- and comprehensive-contract arrangements had
significant positive effects. There were interactions between product type
and contract arrangement.
Conclusions. Most commercial managed care products use patient
satisfaction surveys, performance indicators, and practice guidelines for
behavioral health, whereas clinical outcomes assessment is less common.
Product type and contracting arrangements significantly affect use of these
activities.
