
LET Review

Professional
Education

ASSESSMENT OF LEARNING
Mr. Angelo Unay
*BEED, PNU-Manila (Cum Laude)
*PGDE-Math & English NTU-NIE, Singapore
BASIC CONCEPTS
Test
• An instrument designed to measure any quality, ability, skill, or knowledge.
• Comprised of test items in the area it is designed to measure.
Measurement
• A process of quantifying the degree to which someone/something possesses a given trait (i.e., a quality, characteristic, or feature).
• A process by which traits, characteristics, and behaviors are differentiated.
BASIC CONCEPTS
Assessment
• A process of gathering and organizing data into an interpretable form to have a basis for decision-making.
• A prerequisite to evaluation: it provides the information that enables evaluation to take place.
BASIC CONCEPTS
Evaluation
• A process of systematic analysis of both qualitative and quantitative data in order to make a sound judgment or decision.
• Involves judgment about the desirability of changes in students.
MODES OF ASSESSMENT
MODE: TRADITIONAL
Description: An objective paper-and-pen test, which usually assesses low-level thinking skills
Examples: Standardized tests; teacher-made tests
Advantages: Scoring is objective; administration is easy because students can take the test at the same time
Disadvantages: Preparation of the instrument is time-consuming; prone to cheating
Question:
Which is an advantage of teacher-made tests
over standardized tests?
Teacher-made tests are:
a. highly reliable
b. better adapted to the needs of the
pupils
c. more objectively scored
d. highly valid
MODES OF ASSESSMENT

MODE: PERFORMANCE
Description: A mode of assessment that requires actual demonstration of skills or creation of products of learning
Examples: Practical tests; oral and aural tests; projects
Advantages: Preparation of the instrument is relatively easy; measures behaviors that cannot be faked
Disadvantages: Scoring tends to be subjective without rubrics; administration is time-consuming
MODES OF ASSESSMENT
MODE: PORTFOLIO
Description: A process of gathering multiple indicators of student progress to support course goals in a dynamic, ongoing, and collaborative process
Examples: Working portfolios; show portfolios; documentary portfolios
Advantages: Measures students' growth and development; intelligence-fair
Disadvantages: Development is time-consuming; rating tends to be subjective without rubrics
Question:
Which is the least authentic mode of
assessment?
a. Paper-and-pencil test in vocabulary
b. Oral performance to assess students’
spoken communication skills
c. Experiments in science to assess skill
in the use of scientific methods
d. Artistic production for music or art
subject
A COMPARISON OF THE FOUR EVALUATION
PROCEDURES
Placement Evaluation
• done before instruction
• determines mastery of prerequisite skills
• not graded

Summative Evaluation
• done after instruction
• certifies mastery of the intended learning outcomes
• graded

Uses:
• determine the extent of what the pupils have achieved or mastered in the objectives of the intended instruction
• determine the students' strengths and weaknesses
• place the students in specific learning groups to facilitate teaching and learning
• serve as a pretest for the next unit
• serve as a basis in planning for relevant instruction
A COMPARISON OF THE FOUR EVALUATION
PROCEDURES
Formative Evaluation
• reinforces successful learning
• provides continuous feedback to both students and teachers concerning learning successes and failures
• modifies the teaching and learning process
• administered during instruction
• not graded

Diagnostic Evaluation
• determines recurring or persistent difficulties
• searches for the underlying causes of problems that do not respond to first-aid treatment
• helps formulate a plan for detailed remedial instruction
• administered during instruction
• not graded
PRINCIPLES OF HIGH QUALITY
ASSESSMENT
1. Clarity of Learning Targets
• Clear and appropriate learning targets include (1) what students know and can do and (2) the criteria for judging student performance.

2. Appropriateness of Assessment Methods
• The method of assessment to be used should match the learning targets.
PRINCIPLES OF HIGH QUALITY
ASSESSMENT
3. Validity
• This refers to the degree to which a score-based inference is appropriate, reasonable, and useful.

4. Reliability
• This refers to the degree of consistency when several items in a test measure the same thing, and to stability when the same measures are given across time.
PRINCIPLES OF HIGH QUALITY
ASSESSMENT
5. Fairness
• Fair assessment is unbiased and provides students with opportunities to demonstrate what they have learned.

6. Positive Consequences
• The overall quality of assessment is enhanced when it has a positive effect on student motivation and study habits. For teachers, high-quality assessments lead to better information and decision-making about students.
PRINCIPLES OF HIGH QUALITY
ASSESSMENT
7. Practicality and Efficiency
• Assessments should consider the teacher's familiarity with the method, the time required, the complexity of administration, the ease of scoring and interpretation, and cost.
TAXONOMY OF EDUCATIONAL
OBJECTIVES
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
KNOWLEDGE
• Remembering previously learned material
• Recall of a wide range of material; all that is required is bringing to mind the appropriate information
• Represents the lowest level of learning outcomes in the cognitive domain
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
COMPREHENSION
• Ability to grasp the meaning of material
• Shown by translating material from one form to another, by interpreting material, and by estimating future trends
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
APPLICATION
• Ability to use learned material in new and concrete situations
• Application of rules, methods, concepts, principles, laws, and theories
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
ANALYSIS
• Ability to break down material into its component parts so that its organizational structure may be understood
• Includes identification of parts, analysis of the relationships between parts, and recognition of the organizational principles involved
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
SYNTHESIS
• Ability to put parts together to form a new whole
• Stresses creative behaviors, with major emphasis on the formulation of new patterns or structures
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
EVALUATION
• Ability to judge the value of material for a given purpose
• Judgments are to be based on definite criteria, internal (organization) or external (relevance to the purpose)
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
READING
K: Knows vocabulary
U: Reads with comprehension
Ap: Reads to obtain information to solve a problem
An: Analyzes text and outlines arguments
S: Integrates the main ideas across two or more passages
E: Critiques the conclusions in a text and offers alternatives
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
MATHEMATICS
K: Knows the number system and basic operations
U: Understands math concepts and processes
Ap: Uses mathematics to solve problems
An: Shows how to solve multistep problems
S: Derives proofs
E: Critiques proofs in geometry
TAXONOMY OF EDUCATIONAL
OBJECTIVES
COGNITIVE DOMAIN
(Bloom, 1956)
SCIENCE
K: Knows terms and facts
U: Understands scientific principles
Ap: Applies principles to new situations
An: Analyzes chemical reactions
S: Conducts and reports experiments
E: Critiques scientific reports
Question:
With SMART lesson objectives at the
synthesis level in mind, which one does
NOT belong to the group?
a. Formulate
b. Judge
c. Organize
d. Build
Question:
Which test item is in the highest level of
Bloom’s taxonomy of objectives?
a. Explain how a tree functions in relation to the ecosystem.
b. Explain how trees receive nutrients.
c. Rate three different methods of
controlling tree growth.
d. List the parts of a tree.
Question:
Which behavioral term describes a
lesson outcome in the highest level
of Bloom’s taxonomy?
a. Analyze
b. Create
c. Infer
d. Evaluate
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Purpose

Psychological
• Aims to measure students' intelligence or mental ability to a large degree, without reference to what the student has learned
• Measures the intangible characteristics of an individual (e.g., aptitude tests, personality tests, intelligence tests)

Educational
• Aims to measure the results of instruction and learning (e.g., achievement tests, performance tests)
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Scope of Content

Survey
• Covers a broad range of objectives
• Measures general achievement in certain subjects

Mastery
• Covers a specific objective
• Measures fundamental skills and abilities
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Language Mode

Verbal
• Words are used by students in attaching meaning to or in responding to test items

Non-Verbal
• Students do not use words in attaching meaning to or in responding to test items (e.g., graphs, numbers, 3-D objects)
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Construction

Standardized
• Constructed by a professional item writer
• Covers a broad range of content within a subject area
• Uses mainly multiple choice
• Items written are screened, and the best items are chosen for the final instrument

Informal
• Constructed by a classroom teacher
• Covers a narrow range of content
• Various types of items are used
• The teacher picks or writes items as needed for the test
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Scoring and Interpretation

Standardized
• Can be scored by a machine
• Interpretation of results is usually norm-referenced

Informal
• Scored manually by the teacher
• Interpretation is usually criterion-referenced
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Manner of Administration

Individual
• Mostly given orally or requires actual demonstration of skill
• One-on-one situation, thus many opportunities for clinical observation
• Chance to follow up the examinee's responses in order to clarify or comprehend them more clearly

Group
• A paper-and-pen test
• Loss of rapport, insight, and knowledge about each examinee
• The same amount of time yields information from a whole group rather than from one student
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Effect of Biases

Objective
• Scorer's personal judgment does not affect the scoring
• Worded so that only one answer is acceptable
• Little or no disagreement on what is the correct answer

Subjective
• Affected by the scorer's personal opinions, biases, and judgments
• Several answers are possible
• Possible disagreement on what is the correct answer
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Time Limit and Level of Difficulty

Power
• Consists of a series of items arranged in ascending order of difficulty
• Measures the student's ability to answer more and more difficult items

Speed
• Consists of items approximately equal in difficulty
• Measures the student's speed or rate and accuracy in responding
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Format

Selective
• Multiple choice, true or false, matching type
• There are choices for the answer
• Can be answered quickly
• Prone to guessing
• Time-consuming to construct

Supply
• Short answer, completion, restricted or extended essay
• There are no choices for the answer
• May require a longer time to answer
• Less chance of guessing, but prone to bluffing
• Time-consuming to answer and score
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Nature of Assessment

Maximum Performance
• Determines what individuals can do when performing at their best
• Examples: aptitude tests, achievement tests

Typical Performance
• Determines what individuals will do under natural conditions
• Examples: attitude, interest, and personality inventories; observation techniques; peer appraisal
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Interpretation

Norm-Referenced
• Result is interpreted by comparing one student's performance with other students' performance
• Some will really pass
• Constructed by trained professionals

Criterion-Referenced
• Result is interpreted by comparing a student's performance against a predefined standard/criterion
• All or none may pass
• Typically constructed by the teacher
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Interpretation (continued)

Norm-Referenced
• There is competition for a limited percentage of high scores
• Typically covers a large domain of learning tasks
• Emphasizes discrimination among individuals in terms of level of learning

Criterion-Referenced
• There is no competition for a limited percentage of high scores
• Typically focuses on a delimited domain of learning tasks
• Emphasizes description of what learning tasks individuals can and cannot perform
TYPES OF TESTS
MAIN POINT FOR COMPARISON: Interpretation (continued)

Norm-Referenced
• Favors items of average difficulty and typically omits very easy and very hard items
• Interpretation requires a clearly defined group

Criterion-Referenced
• Matches item difficulty to learning tasks, without altering item difficulty or omitting easy or hard items
• Interpretation requires a clearly defined and delimited achievement domain
Similarities Between NRTs and CRTs
1. Both require specification of the achievement domain to be measured.
2. Both require a relevant and representative sample of test items.
3. Both use the same types of test items.
Similarities Between NRTs and CRTs
4. Both use the same rules for item writing (except for item difficulty).
5. Both are judged by the same qualities of goodness (validity and reliability).
6. Both are useful in educational assessment.
Question:
A test consists of a graph showing the
relationship between age and population.
Following it is a series of true-false items
based on the graph. Which type of test does
this illustrate?
a. Laboratory exercise
b. Problem solving
c. Performance
d. Interpretive
Types of Test According to FORMAT
Selective Type – provides choices for the answer
a. Multiple Choice – consists of a stem which
describes the problem and 3 or more alternatives which
give the suggested solutions. The incorrect alternatives
are the distractors.
b. True-False or Alternative Response – consists of a declarative statement that one has to mark true or false, right or wrong, correct or incorrect, yes or no, fact or opinion, and the like.
c. Matching Type – consists of two parallel columns:
Column A, the column of premises from which a match is
sought; Column B, the column of responses from which
the selection is made.
Types of Test According to FORMAT
Supply Test
a. Short Answer – uses a direct question that can be
answered by a word, phrase, a number, or a symbol
b. Completion Test – consists of an incomplete statement

Essay Test
c. Restricted Response – limits the content of the
response by restricting the scope of the topic
d. Extended Response – allows the students to select
any factual information that they think is pertinent, to
organize their answers in accordance with their best
judgment
Question:
Which assessment tool will be most
authentic?
a. Short answer test
b. Alternate-response test
c. Essay test
d. Portfolio
Question:
Which does NOT belong to the
group?
a. Short Answer
b. Completion
c. Multiple Choice
d. Restricted-response essay
ALTERNATIVE ASSESSMENT
PERFORMANCE & AUTHENTIC ASSESSMENTS
When To Use:
• Specific behaviors are to be observed
• There is a possibility of judging the appropriateness of students' actions
• A process or outcome cannot be directly measured by a paper-and-pencil test
ALTERNATIVE ASSESSMENT
PERFORMANCE & AUTHENTIC ASSESSMENTS
Advantages:
• Allow evaluation of complex skills which are difficult to assess using written tests
• Positive effect on instruction and learning
• Can be used to evaluate both the process and the product
ALTERNATIVE ASSESSMENT
PERFORMANCE & AUTHENTIC ASSESSMENTS
Limitations:
• Time-consuming to administer, develop, and score
• Subjectivity in scoring
• Inconsistencies in performance on alternative skills
ALTERNATIVE ASSESSMENT
PORTFOLIO ASSESSMENT
CHARACTERISTICS:
1) Adaptable to individualized instructional goals
2) Focus on assessment of products
3) Identify students’ strengths rather than
weaknesses
4) Actively involve students in the evaluation
process
5) Communicate student achievement to others
6) Time-consuming
7) Need of a scoring plan to increase reliability
ALTERNATIVE ASSESSMENT
RUBRICS – scoring guides, consisting of specific
pre-established performance criteria, used in
evaluating student work on performance
assessments
Types:
1) Holistic Rubric – requires the teacher to score the overall process or product as a whole, without judging the component parts separately
2) Analytic Rubric – requires the teacher to score individual components of the product or performance first and then sum the individual scores to obtain a total score
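To make the analytic scoring concrete, here is a minimal Python sketch of how component ratings might be combined into a total. The criteria names, weights, and 4-point scale are illustrative assumptions, not part of the review material.

```python
# Sketch of analytic rubric scoring. The criteria names, weights,
# and the 4-point scale are hypothetical, for illustration only.

def analytic_score(ratings, weights=None):
    """Sum per-criterion ratings, optionally weighting each criterion."""
    if weights is None:
        weights = {criterion: 1 for criterion in ratings}
    return sum(ratings[c] * weights[c] for c in ratings)

# One student's ratings on a hypothetical 4-point scale per criterion.
ratings = {"content": 4, "organization": 3, "mechanics": 2}
print(analytic_score(ratings))  # 9: the plain analytic total
# A holistic rubric would instead assign one overall rating directly.
```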
Types of NON-COGNITIVE TEST
1. Closed-Item or Forced-choice Instruments –
ask for one or specific answer

a. Checklist – measures students preferences,


hobbies, attitudes, feelings, beliefs, interests, etc.
by marking a set of possible responses

b. Scales – these instruments that indicate the


extent or degree of one’s response

1) Rating Scale – measures the degree or


extent of one’s attitudes, feelings, and
perception about ideas, objects and people
by marking a point along 3- or 5- point scale
Types of NON-COGNITIVE TEST
2.) Semantic Differential Scale – measures the
degree of one’s attitudes, feelings and perceptions
about ideas, objects and people by marking a point
along 5- or 7- or 11- point scale of semantic
adjectives

Ex:
Math is
easy __ __ __ __ __ __ __ difficult
important __ __ __ __ __ __ __ trivial
useful __ __ __ __ __ __ __ useless
Types of NON-COGNITIVE TEST
3) Likert Scale – measures the degree of one’s
agreement or disagreement on positive or
negative statements about objects and people
Ex:
Use the scale below to rate how much you agree or
disagree about the following statements.
5 – Strongly Agree
4 – Agree
3 – Undecided
2 – Disagree
1 – Strongly Disagree
1. Science is interesting.
2. Doing science experiments is a waste of time.
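Scoring a Likert scale is usually a matter of summing item scores after reverse-coding the negatively worded statements. The slide does not show this step, so the sketch below, including the function name, is an assumption about common practice.

```python
# Sketch of Likert scale scoring on the 5-point scale above.
# Reverse-coding the negatively worded item is assumed common practice;
# the review material itself does not show the scoring step.

def score_item(response, positive=True, points=5):
    """Return the item score, reverse-coding negatively worded items."""
    return response if positive else points + 1 - response

# Responses (1 = Strongly Disagree ... 5 = Strongly Agree):
responses = [
    (4, True),   # "Science is interesting."                       -> 4
    (2, False),  # "Doing science experiments is a waste of time." -> 4
]
total = sum(score_item(r, pos) for r, pos in responses)
print(total)  # 8: both answers reflect a positive attitude toward science
```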
Types of NON-COGNITIVE TEST
c. Alternative Response – measures students' preferences, hobbies, attitudes, feelings, beliefs, interests, etc. by choosing between two possible responses
Ex:
T F 1. Reading is the best way of spending leisure time.

d. Ranking – measures students' preferences or priorities by ranking a set of responses
Ex: Rank the following subjects according to their importance.
___ Science ____ Social Studies
___ Math ____ Arts
___ English
Types of NON-COGNITIVE TEST
2. Open-Ended Instruments
– open to more than one answer
Sentence Completion – measures students' preferences over a variety of attitudes and allows students to answer by completing an unfinished statement, which may vary in length
Surveys – measure the values held by an individual by writing one or many responses to a given question
Essays – allow the students to reveal and clarify their preferences, hobbies, attitudes, feelings, beliefs, and interests by writing their reactions or opinions to a given question
Question:
To evaluate teaching skills, which is
the most authentic tool?
a. Observation
b. Non-restricted essay test
c. Short answer test
d. Essay test
GENERAL SUGGESTIONS
IN WRITING TESTS
1. Use your test specifications as guide to
item writing.
2. Write more test items than needed.
3. Write the test items well in advance of the
testing date.
4. Write each test item so that the task to be
performed is clearly defined.
5. Write each test item at the appropriate reading level.
GENERAL SUGGESTIONS
IN WRITING TESTS
6. Write each test item so that it does not
provide help in answering other items in
the test.
7. Write each test item so that the answer is
one that would be agreed upon by experts.
8. Write test items so that they are at the proper level of difficulty.
9. Whenever a test is revised, recheck its
relevance.
SPECIFIC SUGGESTIONS
Supply Type
1. Word the item/s so that the required
answer is both brief and specific.
2. Do not take statements directly from
textbooks to use as a basis for short
answer items.
3. A direct question is generally more
desirable than an incomplete statement.
4. If the item is to be expressed in numerical
units, indicate the type of answer needed.
SPECIFIC SUGGESTIONS
Supply Type
5. Blanks should be equal in length.
6. Answers should be written before the item number for easy checking.
7. When completion items are to be used, do not have too many blanks. Blanks should be at the center of the sentence, not at the beginning.
SPECIFIC SUGGESTIONS
Selective Type
Alternative-Response
1. Avoid broad statements.
2. Avoid trivial statements.
3. Avoid the use of negative statements
especially double negatives.
4. Avoid long and complex sentences.
5. Avoid including two ideas in one sentence
unless cause and effect relationship is
being measured.
SPECIFIC SUGGESTIONS
Selective Type
Alternative-Response
6. If opinion is used, attribute it to some source
unless the ability to identify opinion is being
specifically measured.
7. True statements and false statements should be
approximately equal in length.
8. The number of true statements and false
statements should be approximately equal.
9. Start with a false statement since it is a common
observation that the first statement in this type is
always positive.
SPECIFIC SUGGESTIONS
Selective Type
Matching Type
1. Use only homogeneous materials in a
single matching exercise.
2. Include an unequal number of responses
and premises, and instruct the pupils that
response may be used once, more than
once, or not at all.
3. Keep the list of items to be matched brief,
and place the shorter responses at the
right.
SPECIFIC SUGGESTIONS
Selective Type
Matching Type
4. Arrange the list of responses in logical order.
5. Indicate in the directions the basis for matching the responses and premises.
6. Place all the items for one matching exercise on the same page.
SPECIFIC SUGGESTIONS
Selective Type
Multiple Choice
1. The stem of the item should be meaningful
by itself and should present a definite
problem.
2. The stem should include as much of the item as possible and should be free of irrelevant information.
3. Use a negatively stated item stem only
when a significant learning outcome
requires it.
SPECIFIC SUGGESTIONS
Selective Type
Multiple Choice
4. Highlight negative words in the stem for
emphasis.
5. All the alternatives should be grammatically
consistent with the stem of the item.
6. An item should only have one correct or
clearly best answer.
7. Items used to measure understanding
should contain novelty, but beware of too
much.
SPECIFIC SUGGESTIONS
Selective Type
Multiple Choice
8. All distractors should be plausible.
9. Verbal association between the stem and the
correct answer should be avoided.
10. The relative length of the alternatives should not
provide a clue to the answer.
11. The alternatives should be arranged logically.
12. The correct answer should appear in each of the alternative positions approximately an equal number of times, but in random order.
SPECIFIC SUGGESTIONS
Selective Type
Multiple Choice
13. Use of special alternatives (e.g. None of
the above; all of the above) should be done
sparingly.
14. Do not use multiple choice items when
other types are more appropriate.
15. Always have the stem and alternatives on
the same page.
16. Break any of these rules when you have a
good reason for doing so.
Question:
In preparing a multiple-choice test,
how many options would be ideal?
a. Five
b. Three
c. Any
d. Four
SPECIFIC SUGGESTIONS
Essay Type
1. Restrict the use of essay questions to
those learning outcomes that cannot be
satisfactorily measured by objective items.
2. Formulate questions that will bring forth the
behavior specified in the learning outcome.
3. Phrase each question so that the pupils’
task is clearly defined.
4. Indicate an approximate time limit for each
question.
5. Avoid the use of optional questions.
Question:
What should a teacher do before
constructing items for a particular test?
a. Prepare the table of specifications.
b. Review the previous lessons.
c. Determine the length of time for
answering it.
d. Announce to students the scope of
the test.
CRITERIA TO CONSIDER IN
CONSTRUCTING GOOD TESTS
VALIDITY - the degree to which a test measures what it is intended to measure. It is the usefulness of the test for a given purpose and the most important criterion of a good examination.

FACTORS influencing the validity of tests in general:
Appropriateness of test
Directions
Reading Vocabulary and Sentence Structure
Difficulty of Items
Construction of Items
Length of Test
Arrangement of Items
Patterns of Answers
WAYS of Establishing Validity
Face Validity – is done by examining the physical appearance of the test

Content Validity – is done through a careful and critical examination of the objectives of the test so that it reflects the curricular objectives
WAYS of Establishing Validity
Criterion-related Validity – is established statistically such that a set of scores revealed by a test is correlated with scores obtained from another external predictor or measure.

It has two purposes:

a. Concurrent Validity – describes the present status of the individual by correlating the sets of scores obtained from two measures given concurrently

b. Predictive Validity – describes the future performance of an individual by correlating the sets of scores obtained from two measures given at a longer time interval
WAYS of Establishing Validity
Construct Validity – is established statistically by comparing psychological traits or factors that influence scores in a test, e.g., verbal, numerical, spatial, etc.

a. Convergent Validity – is established if the instrument correlates with another measure of a similar trait (e.g., a Critical Thinking Test may be correlated with a Creative Thinking Test)

b. Divergent Validity – is established if the instrument describes only the intended trait and not other traits (e.g., a Critical Thinking Test may not be correlated with a Reading Comprehension Test)
RELIABILITY - refers to the consistency of scores obtained by the same person when retested using the same instrument or one that is parallel to it.

FACTORS affecting reliability:
Length of the test
Difficulty of the test
Objectivity
Administrability
Scorability
Economy
Adequacy
METHOD / TYPE OF RELIABILITY MEASURE / PROCEDURE / STATISTICAL MEASURE

Test-Retest
• Measure of stability
• Give a test twice to the same group, with any time interval between sets, from several minutes to several years
• Pearson r

Equivalent Forms
• Measure of equivalence
• Give parallel forms of the test at the same time
• Pearson r

Test-Retest with Equivalent Forms
• Measure of stability and equivalence
• Give parallel forms of the test with increased time intervals between forms
• Pearson r

Split-Half
• Measure of internal consistency
• Give a test once; score equivalent halves of the test (e.g., odd- and even-numbered items)
• Pearson r and the Spearman-Brown formula

Kuder-Richardson
• Measure of internal consistency
• Give the test once, then correlate the proportion/percentage of students passing and not passing a given item
• Kuder-Richardson Formulas 20 and 21
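As a concrete illustration of the Split-Half row, here is a minimal Python sketch that correlates odd- and even-half scores and steps the result up with the Spearman-Brown formula. The five students' half scores and the function names are illustrative assumptions.

```python
# Sketch of the Split-Half procedure: correlate odd- and even-numbered
# half scores with Pearson r, then apply the Spearman-Brown formula.
# The five students' half scores below are made up for illustration.
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def spearman_brown(r_half):
    """Estimate full-test reliability from the half-test correlation."""
    return 2 * r_half / (1 + r_half)

odd_half  = [10, 8, 9, 6, 4]   # each student's score on odd-numbered items
even_half = [9, 8, 8, 5, 5]    # each student's score on even-numbered items
r = pearson_r(odd_half, even_half)
print(round(r, 2), round(spearman_brown(r), 2))  # 0.94 0.97
```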
Question:
Setting up criteria for scoring essay
tests is meant to increase their:
a. Objectivity
b. Reliability
c. Validity
d. Usability
Question:
The same test is administered to different
groups at different places at different
times. This process is done in testing
the:
a. Objectivity
b. Validity
c. Reliability
d. Comprehensiveness
ITEM ANALYSIS
STEPS:
1. Score the test. Arrange the scores from lowest to highest.
2. Get the top 27% (T27) and bottom 27% (B27) of the examinees.
3. Get the number of examinees in the top group and in the bottom group who got each item correct: (PT) and (PB).
4. Compute the Difficulty Index: Df = (PT + PB) / N, where N is the total number of examinees in the two groups.
5. Compute the Discrimination Index: Ds = (PT - PB) / n, where n is the number of examinees in each group.
ITEM ANALYSIS
INTERPRETATION
Difficulty Index (Df)
0.76 – 1.00 = easy (revise)
0.25 – 0.75 = average (accept)
0.00 – 0.24 = very difficult (reject)

Discrimination Index (Ds)
0.40 and above = very good (accept)
0.30 – 0.39 = good (accept)
0.20 – 0.29 = moderate (revise)
0.19 and below = poor (reject)
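The steps and cut-offs above translate directly into code. This is a minimal Python sketch under the slide's formulas, with PT and PB taken as counts of correct answers in each group; the function names are mine.

```python
# Sketch of the item-analysis computation: Df = (PT + PB) / N and
# Ds = (PT - PB) / n, with PT/PB as counts of correct answers in the
# top and bottom groups and n as the size of one group (so N = 2n).

def item_indices(pt, pb, n_per_group):
    df = (pt + pb) / (2 * n_per_group)  # difficulty index
    ds = (pt - pb) / n_per_group        # discrimination index
    return df, ds

def interpret(df, ds):
    """Map the indices to the verbal labels in the table above."""
    if df >= 0.76:   difficulty = "easy (revise)"
    elif df >= 0.25: difficulty = "average (accept)"
    else:            difficulty = "very difficult (reject)"
    if ds >= 0.40:   discrimination = "very good (accept)"
    elif ds >= 0.30: discrimination = "good (accept)"
    elif ds >= 0.20: discrimination = "moderate (revise)"
    else:            discrimination = "poor (reject)"
    return difficulty, discrimination

# Question 1 of the 10-student example further below: PT = 4, PB = 4.
df, ds = item_indices(pt=4, pb=4, n_per_group=5)
print(df, ds, interpret(df, ds))
# 0.8 0.0 ('easy (revise)', 'poor (reject)')
```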
ITEM ANALYSIS
Example:

Question   A    B    C    D    Df
1          0    3    24*  3    0.80
2          12*  13   3    2    0.40

Number of students: 30 (* marks the correct answer)

To compute this Df, divide the number of students who chose the correct answer by the total number of students.
ITEM ANALYSIS
Example:
Student Score (%) Q1 Q2 Q3
Joe 90 1 0 1
Dave 90 1 0 1
Sujie 80 0 0 1
Darrell 80 1 0 1
Eliza 70 1 0 1
Zoe 60 1 0 0
Grace 60 1 0 1
Hannah 50 1 1 0
Ricky 50 1 1 0
Anita 40 0 1 0
* "1" – correct; "0" – incorrect
ITEM ANALYSIS
Example:

Question   PT   PB   Df     Ds
1          4    4    0.80   0
2          0    3    0.30   -0.6
3          5    1    0.60   0.8
1. Which question was the easiest?
2. Which question was the most difficult?
3. Which item has the poorest discrimination?
4. Which question would you eliminate (if any)?
Why?
Question:
A negative discrimination index means that:
a. More from the lower group answered the test item correctly.
b. The items could not discriminate
between the lower and upper group.
c. More from the upper group answered
the test item correctly.
d. Less from the lower group got the test
item correctly.
Question:
A test item has a difficulty index of 0.89
and a discrimination index of 0.44. What
should the teacher do?
a. Reject the item.
b. Retain the item.
c. Make it a bonus item.
d. Make it a bonus item and reject it.
SCORING ERRORS AND BIASES
Leniency error: Faculty tends to judge work as better than it really is.
Generosity error: Faculty tends to use high end of scale only.
Severity error: Faculty tends to use low end of scale only.
Central tendency error:
Faculty avoids both extremes of the scale.
Bias:
Letting other factors influence score (e.g., handwriting,
typos)
Halo effect:
Letting general impression of student influence rating of
specific criteria (e.g., student’s prior work)
Contamination effect:
Judgment is influenced by irrelevant knowledge about the
student or other factors that have no bearing on
performance level (e.g., student appearance)
SCORING ERRORS AND BIASES
Similar-to-me effect:
Judging more favorably those students whom faculty see
as similar to themselves (e.g., expressing similar interests
or point of view)
First-impression effect:
Judgment is based on early opinions rather than on a
complete picture (e.g., opening paragraph)
Contrast effect:
Judging by comparing student against other students
instead of established criteria and standards
Rater drift:
Unintentionally redefining criteria and standards over time
or across a series of scorings (e.g., getting tired and
cranky and therefore more severe, getting tired and
reading more quickly/leniently to get the job done)
SCALES OF MEASUREMENT
• NOMINAL
• ORDINAL
• INTERVAL
• RATIO
TYPES OF DISTRIBUTION
(Figures omitted: each plots frequency against scores, from low to high.)

• Normal Distribution – symmetrical; the bell curve
• Rectangular Distribution
• Unimodal, Bimodal, and Multimodal (Polymodal) Distributions
• Positively Skewed Distribution – skewed to the right (the tail points toward the high scores)
• Negatively Skewed Distribution – skewed to the left (the tail points toward the low scores)
KURTOSIS
Leptokurtic
distributions are tall and
peaked. Because the scores
are clustered around the
mean, the standard deviation
will be smaller.

Mesokurtic
distributions are the ideal example of the normal distribution, somewhere between the leptokurtic and platykurtic.

Platykurtic
distributions are broad
and flat.
Question:
Which statement applies when score
distribution is negatively skewed?
a. The scores are evenly distributed from left to right.
b. Most pupils are underachievers.
c. Most of the scores are high.
d. Most of the scores are low.
Question:
If the scores of your test follow a
positively skewed score distribution,
what should you do? Find out _______.

a. why your items are easy
b. why most of the scores are high
c. why some pupils scored low
d. why most of the scores are low
APPROPRIATE STATISTICAL TOOLS

MEASURES OF CENTRAL TENDENCY describe the representative value of a set of data; MEASURES OF VARIABILITY describe the degree of spread or dispersion of a set of data.

When the frequency distribution is regular or symmetrical (normal), usually with numeric data (interval or ratio):
• Central tendency: Mean – the arithmetic average
• Variability: Standard Deviation – the root-mean-square of the deviations from the mean

When the frequency distribution is irregular or skewed, usually with ordinal data:
• Central tendency: Median – the middle score in a group of ranked scores
• Variability: Quartile Deviation – the average deviation of the 1st and 3rd quartiles from the median

When the distribution of scores is normal and a quick answer is needed, usually with nominal data:
• Central tendency: Mode – the most frequent score
• Variability: Range – the difference between the highest and the lowest score in the distribution
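All of the tools in this table are one-liners in Python's standard statistics module; a quick sketch follows, with an illustrative score list.

```python
# Quick sketch of the statistical tools in the table, using the
# standard library. The score list is illustrative.
from statistics import mean, median, mode, pstdev, quantiles

scores = [14, 15, 17, 16, 19, 20, 16, 14, 16]

print(mean(scores))               # arithmetic average
print(median(scores))             # middle score of the ranked list
print(mode(scores))               # most frequent score
print(pstdev(scores))             # population standard deviation
q1, _, q3 = quantiles(scores, n=4)
print((q3 - q1) / 2)              # quartile deviation
print(max(scores) - min(scores))  # range
```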
Question:
Teacher B is researching on a family
income distribution which is quite
symmetrical. Which measure/s of
central tendency will be most
informative and appropriate?
a. Mode
b. Mean
c. Median
d. Mean and median
Question:
What measure/s of central tendency does
the number 16 represent in the following
score distribution?
14, 15, 17, 16, 19, 20, 16, 14, 16
a. Mode only
b. Median only
c. Mode and median
d. Mean and mode
INTERPRETING MEASURES OF VARIABILITY
STANDARD DEVIATION (SD)
• The result will help you determine whether the group is homogeneous or not.
• The result will also help you determine the number of students that fall below and above the average performance.

Main points to remember:
• Scores above Mean + 1SD are above average.
• Scores between Mean - 1SD and Mean + 1SD are within the limits of average ability.
• Scores below Mean - 1SD are below average.
Example:
A class of 25 students was given a 75-item test. The mean score of the class is 61. The SD is 6. Lisa, a student in the class, got a score of 63. Describe the performance of Lisa.

Mean = 61, SD = 6, X = 63

Mean + SD = 61 + 6 = 67
Mean - SD = 61 - 6 = 55

All scores between 55 and 67 are average.
All scores above 67 (68 and above) are above average.
All scores below 55 (54 and below) are below average.

Therefore, Lisa's score of 63 is average.
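The mean ± 1 SD rule is easy to express in code. This minimal sketch reproduces the Lisa example; the function name is mine, and the same logic applies later with median ± 1 QD.

```python
# Sketch of the mean ± 1 SD rule; reproduces the Lisa example above.
# The same logic works for the median ± 1 QD rule shown later.

def describe(score, center, spread):
    if score > center + spread:
        return "above average"
    if score < center - spread:
        return "below average"
    return "average"

print(describe(63, center=61, spread=6))  # 'average' (55-67 is the average band)
```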


Question:
Zero standard deviation means that:

a. The students' scores are the same.
b. 50% of the scores obtained is zero.
c. More than 50% of the scores
obtained is zero.
d. Less than 50% of the scores
obtained is zero.
Question:
Nellie's score is within the mean ± 1 SD. To which
of the following groups does she belong?
a. Below Average
b. Average
c. Needs Improvement
d. Above Average
Question:
The score distribution of Set A and Set B have
equal mean but with different SDs. Set A has an
SD of 1.7 while Set B has an SD of 3.2. Which
statement is TRUE of the score distributions?
a. The scores of Set B have less variability than the scores in Set A.
b. Scores in Set A are more widely scattered.
c. The majority of the scores in Set A are clustered around the mean.
d. The majority of the scores in Set B are clustered around the mean.
INTERPRETING MEASURES OF VARIABILITY
QUARTILE DEVIATION (QD)
• The result will help you determine whether the group is homogeneous or not.
• The result will also help you determine the number of students that fall below and above the average performance.

Main points to remember:
• Scores above Median + 1QD are above average.
• Scores between Median - 1QD and Median + 1QD are within the limits of average ability.
• Scores below Median - 1QD are below average.
Example:
A class of 30 students was given a 50-item test. The median score of the class is 29. The QD is 3. Miguel, a student in the class, got a score of 33. Describe the performance of Miguel.

Median = 29, QD = 3, X = 33

Median + QD = 29 + 3 = 32
Median - QD = 29 - 3 = 26

All scores between 26 and 32 are average.
All scores above 32 (33 and above) are above average.
All scores below 26 (25 and below) are below average.

Therefore, Miguel's score of 33 is above average.


INTERPRETATION of Correlation Values

+1.0 ----- Perfect Positive Correlation
(high positive correlation)
+0.5 ----- Positive Correlation
(low positive correlation)
 0.0 ----- Zero Correlation
(low negative correlation)
-0.5 ----- Negative Correlation
(high negative correlation)
-1.0 ----- Perfect Negative Correlation

Magnitude of the computed r:
.81 – 1.0 = very high correlation
.61 – .80 = high correlation
.41 – .60 = moderate correlation
.21 – .40 = low correlation
0 – .20 = negligible correlation

For validity, the computed r should be at least 0.75 to be significant; for reliability, it should be at least 0.85.
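A computed r can be mapped to these verbal labels mechanically. Here is a minimal Python sketch; the band edges follow the slide, and the function name is mine.

```python
# Sketch mapping a computed r to the verbal interpretation above.
# Band edges follow the slide's table; the function name is hypothetical.

def interpret_r(r):
    magnitude = abs(r)
    if magnitude >= 0.81:   label = "very high"
    elif magnitude >= 0.61: label = "high"
    elif magnitude >= 0.41: label = "moderate"
    elif magnitude >= 0.21: label = "low"
    else:                   label = "negligible"
    direction = "positive" if r > 0 else "negative" if r < 0 else "zero"
    return f"{label} {direction} correlation"

print(interpret_r(0.92))   # 'very high positive correlation'
print(interpret_r(-0.35))  # 'low negative correlation'
```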
Question:
The computed r for scores in Math and
Science is 0.92. What does this mean?

a. Math score is positively related to Science score.
b. Science score is slightly related to Math
score.
c. Math score is not in any way related to
Science score.
d. The higher the Math score, the lower
the Science score.
STANDARD SCORES
• Indicate the pupil's relative position by showing how far his raw score is above or below average
• Express the pupil's performance in terms of standard units from the mean
• Represented by the normal probability curve, or what is commonly called the normal curve
• Used to have a common unit for comparing raw scores from different tests
Corresponding Standard Scores and Percentiles in a Normal Distribution

Z-Scores:      -3    -2    -1     0    +1    +2    +3
T-Scores:      20    30    40    50    60    70    80
Percentiles:    1     2    16    50    84    98    99.9
PERCENTILE
Tells the percentage of examinees that lies below one's score.

Example:
Jose's score in the LET is 70 and his percentile rank is 85.

P85 = 70 (This means that Jose, who scored 70, performed better than 85% of all the examinees.)
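A percentile rank can be computed directly from a list of scores; here is a minimal sketch with made-up data.

```python
# Sketch of a percentile rank: the percentage of examinees whose scores
# lie below a given score. The score list is made up for illustration.

def percentile_rank(scores, score):
    below = sum(1 for s in scores if s < score)
    return 100 * below / len(scores)

scores = [55, 60, 62, 64, 66, 68, 68, 69, 70, 72]
print(percentile_rank(scores, 70))  # 80.0 -> a score of 70 beats 80% of examinees
```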
Z-Score
Tells the number of standard deviations equivalent to a given raw score.

Formula:
Z = (X - X̄) / SD
where:
X – individual's raw score
X̄ – mean of the normative group
SD – standard deviation of the normative group

Example:
Jenny got a score of 75 in a 100-item test. The mean score of the class is 65 and the SD is 5.

Z = (75 - 65) / 5 = 2 (Jenny is 2 standard deviations above the mean.)
Example:
Mean of a group in a test: X̄ = 26, SD = 2

Joseph's score: X = 27
Z = (X - X̄) / SD = (27 - 26) / 2 = 0.5

John's score: X = 25
Z = (X - X̄) / SD = (25 - 26) / 2 = -0.5
T-Score
• Refers to any set of normally distributed standard scores with a mean of 50 and a standard deviation of 10
• Computed after converting raw scores to z-scores, to get rid of negative values

Formula:
T-score = 50 + 10(Z)

Example:
Joseph's T-score = 50 + 10(0.5) = 50 + 5 = 55
John's T-score = 50 + 10(-0.5) = 50 - 5 = 45
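The two formulas chain together neatly in code. This minimal sketch reproduces the Joseph and John figures; the function names are mine.

```python
# Sketch of Z = (X - mean) / SD and T = 50 + 10Z; reproduces the
# Joseph/John example above. Function names are hypothetical.

def z_score(x, mean, sd):
    return (x - mean) / sd

def t_score(z):
    return 50 + 10 * z

for name, x in [("Joseph", 27), ("John", 25)]:
    z = z_score(x, mean=26, sd=2)
    print(name, z, t_score(z))
# Joseph 0.5 55.0
# John -0.5 45.0
```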
ASSIGNING GRADES / MARKS / RATINGS
Marking or grading is the process of assigning value to a performance.

Marks / Grades / Rating SYMBOLS could be in:
1. percent, such as 70%, 88%, or 92%
2. letters, such as A, B, C, D, or F
3. numbers, such as 1.0, 1.5, 2.75, 5
4. descriptive expressions, such as Outstanding (O), Very Satisfactory (VS), Satisfactory (S), Moderately Satisfactory (MS), Needs Improvement (NI)
ASSIGNING GRADES / MARKS / RATINGS
Could represent:
1. how a student is performing in relation
to other students (norm-referenced
grading)
2. the extent to which a student has
mastered a particular body of knowledge
(criterion-referenced grading)
3. how a student is performing in relation
to a teacher’s judgment of his or her
potential
ASSIGNING GRADES / MARKS / RATINGS
Could be for:
• Certification – gives assurance that a student has mastered a specific content or achieved a certain level of accomplishment
• Selection – provides a basis for identifying or grouping students for certain educational paths or programs
• Direction – provides information for diagnosis and planning
• Motivation – emphasizes specific material or skills to be learned and helps students understand and improve their performance
ASSIGNING GRADES / MARKS / RATINGS
Could be assigned by using:

Criterion-Referenced Grading – grading based on fixed or absolute standards, where a grade is assigned based on how a student has met the criteria or the well-defined objectives of a course that were spelled out in advance. It is then up to the student to earn the grade he or she wants to receive, regardless of how other students in the class have performed. This is done by transmuting test scores into marks or ratings.
ASSIGNING GRADES / MARKS / RATINGS
Norm-Referenced Grading – grading based on relative standards, where a student's grade reflects his or her level of achievement relative to the performance of other students in the class. In this system, the grade is assigned based on the average of test scores.

Point or Percentage Grading System – the teacher identifies points or percentages for various tests and class activities depending on their importance. The total of these points is the basis for the grade assigned to the student.

Contract Grading System – each student agrees to work for a particular grade according to agreed-upon standards.
Question:
Marking on a normative basis means that
__________.
a. the normal curve of distribution should be followed
b. the symbols used in grading indicate how a student achieved relative to other students
c. some get high marks
d. some are expected to fail
Here is a set of scores for a class of 24 students:
Student PT Student PT
A 78 M 65
B 67 N 92
C 88 O 53
D 74 P 65
E 97 Q 83
F 84 R 79
G 57 S 45
H 65 T 95
I 81 U 62
J 58 V 74
K 70 W 85
L 81 X 76
