
What is a Test?

• A test is a deliberate attempt by people to acquire information about themselves or others.
• Tests serve three functions:
 They provide information useful for the improvement of instruction;
 They inform administrative decisions; and
 They serve guidance purposes.
Three Basic Concepts in Understanding What a Test Is
1. A Test Focuses on a Particular Domain.
What is a Domain?
 A test is designed to measure a particular body of knowledge, skills, abilities, or performances that are of interest to the test user.
• A construct is a theoretical idea developed to describe and organize some aspect of existing knowledge.
• The name of a domain carries powerful cultural meaning. When people use a test, they often interpret performance in terms of the context, meaning, and cultural sensibilities they associate with the test's name, which does not mean the same thing to all people.
2. A Test Is a Sample of Behaviors, Products, Answers, or Performances from the Domain.
What is Sampling?
 A test is a sample of behaviors, products, or performances from a larger domain of interest.
3. A Test Is Made Up of Items.
• There are two main types of items:
1) Selection-type items require the student to select the correct or best answer from the given options.
2) Supply-type items require the student to supply the answer, such as fill-in-the-blank or essay items.
Qualities of Good Test Instruments
1) Objectivity
 Objectivity represents the agreement of two or more competent judges, scorers, or test administrators concerning a measurement; in essence, it is the reliability of test scores between or among more than one evaluator.
2) A good test should also be relatively reliable. This means that, as long as the quality being measured has not changed, any person should get about the same score each time they take the test.
Reliability
• Reliability and validity are two concepts that
are important for defining and measuring bias
and distortion.
• Reliability refers to the extent to which
assessments are consistent.
• Another measure of reliability is the internal
consistency of the items.
Types of Reliability
a) Test-retest reliability is a measure of reliability obtained by administering the same test twice, over a period of time, to a group of individuals. The scores from the first and second administrations can then be correlated in order to evaluate the test's stability over time.
b) Parallel forms reliability is a measure of reliability obtained by administering different versions of an assessment tool (both versions must contain items that probe the same construct, skill, or knowledge base) to the same group of individuals. The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.
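Computationally, both test-retest and parallel forms reliability reduce to correlating two columns of scores. A minimal sketch in Python, assuming NumPy is available; the score arrays are hypothetical, not from the text:

```python
# Minimal sketch: estimating test-retest or parallel-forms reliability
# with Pearson's r. The score arrays are hypothetical examples.
import numpy as np

first_admin = np.array([78, 85, 62, 90, 71, 88, 67, 94])   # time 1 / form A
second_admin = np.array([80, 83, 65, 92, 69, 85, 70, 91])  # time 2 / form B

# np.corrcoef returns the 2x2 correlation matrix; the off-diagonal
# entry is the reliability coefficient.
reliability = np.corrcoef(first_admin, second_admin)[0, 1]
print(f"Reliability estimate: {reliability:.2f}")
```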
c) Inter-rater reliability is a measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions. Inter-rater reliability is useful because human observers will not necessarily interpret answers the same way; raters may disagree as to how well certain responses or material demonstrate knowledge of the construct or skill being assessed.
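The text above does not name a specific agreement statistic; one common choice is Cohen's kappa, sketched here with scikit-learn (assumed available) and hypothetical ratings:

```python
# Minimal sketch: inter-rater agreement with Cohen's kappa.
# The ratings below are hypothetical; scikit-learn is assumed available.
from sklearn.metrics import cohen_kappa_score

rater_1 = ["pass", "fail", "pass", "pass", "fail", "pass"]
rater_2 = ["pass", "fail", "fail", "pass", "fail", "pass"]

# Kappa corrects raw percent agreement for agreement expected by chance.
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa: {kappa:.2f}")
```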
d) Internal consistency reliability is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.
• Average inter-item correlation is a subtype of internal consistency reliability. It is obtained by taking all of the items on a test that probe the same construct (e.g., reading comprehension), determining the correlation coefficient for each pair of items, and finally taking the average of these inter-item correlations.
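A minimal sketch of the average inter-item correlation, using a hypothetical examinee-by-item score matrix and NumPy:

```python
# Minimal sketch: average inter-item correlation.
# Rows are examinees, columns are items probing the same construct;
# the matrix is hypothetical.
import numpy as np

responses = np.array([
    [4, 5, 4, 3],
    [2, 1, 2, 2],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
])

# Correlate items (columns) pairwise, then average the off-diagonal entries.
corr = np.corrcoef(responses, rowvar=False)
n_items = corr.shape[0]
off_diagonal = corr[np.triu_indices(n_items, k=1)]
print(f"Average inter-item correlation: {off_diagonal.mean():.2f}")
```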
• Split-half reliability is another subtype of internal consistency reliability. The process of obtaining split-half reliability begins by splitting in half all items of a test that are intended to probe the same area of knowledge, in order to form two sets of items. The entire test is administered to a group of individuals, the total score for each set is computed, and finally the split-half reliability is obtained by determining the correlation between the two total set scores.
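A minimal sketch of the split-half procedure, using an odd/even split of hypothetical item responses; the final Spearman-Brown step, which corrects for the halved test length, is a common refinement not spelled out above:

```python
# Minimal sketch: split-half reliability with an odd/even item split,
# plus the Spearman-Brown correction for the halved test length.
# The response matrix (rows = examinees, columns = items) is hypothetical.
import numpy as np

responses = np.array([
    [1, 0, 1, 1, 0, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0, 1],
    [0, 1, 1, 0, 1, 1],
])

odd_total = responses[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
even_total = responses[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...

r_half = np.corrcoef(odd_total, even_total)[0, 1]
r_full = (2 * r_half) / (1 + r_half)  # Spearman-Brown correction
print(f"Split-half r: {r_half:.2f}, corrected: {r_full:.2f}")
```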
3) Validity. For a test to be valid, it must be reliable; a test is valid if it measures what it purports to measure.
Types of Validity
1) Face validity ascertains that the measure appears to be assessing the intended construct under study.
2) Construct validity is used to ensure that the measure actually measures what it is intended to measure (the construct), and not other variables. Using a panel of experts who are familiar with the construct is one way in which this type of validity can be assessed. The experts can examine the items and decide what each specific item is intended to measure. Students can be involved in this process to obtain their feedback.
What is a construct?
 Constructs are attributes that exist in the theoretical sense; they do not exist in either the literal or physical sense. Despite this, we can observe and measure behaviors that provide evidence of these constructs.
Three steps, referred to as construct explication, outline the process of defining a construct:
1) Identify the behaviors that relate to the construct. The more you can generate, the better able you are to define the construct.
2) Identify other constructs that may be related or unrelated to the construct being explicated. This helps determine the boundaries of the construct.
3) Identify behaviors related to these similar and dissimilar constructs and determine whether those behaviors are related to the construct currently being measured.
Two methods of establishing a test's construct validity are convergent/divergent validation and factor analysis.
• Convergent/divergent validation. A test has convergent validity if it has a high correlation with another test that measures the same construct. By contrast, a test's divergent validity is demonstrated through a low correlation with a test that measures a different construct.
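A minimal sketch of convergent/divergent validation with hypothetical scores; the new test should correlate highly with a same-construct measure and weakly with a different-construct one:

```python
# Minimal sketch: convergent vs. divergent validity via correlations.
# All score arrays are hypothetical.
import numpy as np

new_test = np.array([55, 72, 61, 80, 49, 68])
same_construct_test = np.array([58, 70, 64, 78, 52, 66])       # expect high r
different_construct_test = np.array([30, 25, 41, 28, 39, 33])  # expect low r

convergent_r = np.corrcoef(new_test, same_construct_test)[0, 1]
divergent_r = np.corrcoef(new_test, different_construct_test)[0, 1]
print(f"Convergent r: {convergent_r:.2f}, divergent r: {divergent_r:.2f}")
```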
• Factor analysis. Factor analysis is a complex statistical procedure that is conducted for a variety of purposes, one of which is to assess the construct validity of a test or a number of tests.
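As a hedged illustration only, the sketch below runs scikit-learn's FactorAnalysis on placeholder data; in practice the matrix would hold real examinee-by-item scores, and items that load on the same factor are taken as evidence of a shared construct:

```python
# Minimal sketch: exploratory factor analysis with scikit-learn
# (assumed available). `item_scores` stands in for a real
# examinees x items matrix; the random data here are placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
item_scores = rng.normal(size=(100, 6))  # placeholder data only

fa = FactorAnalysis(n_components=2)
fa.fit(item_scores)
print(fa.components_)  # factor loadings: one row per factor
```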
Other Methods of Assessing Construct
Validity
• Item analysis. Item analysis is used to help build reliability and validity into the test from the start. Item analysis can be both quantitative and qualitative.
• Item difficulty. An item's difficulty level is usually measured in terms of the percentage of examinees who answer the item correctly. This percentage is referred to as the item difficulty index, or p.
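A minimal sketch: with items scored 1 for correct and 0 for incorrect, p is simply the mean of the responses (the data here are hypothetical):

```python
# Minimal sketch: item difficulty index p as the proportion of
# examinees answering the item correctly. Responses are hypothetical,
# scored 1 = correct, 0 = incorrect.
import numpy as np

item_responses = np.array([1, 0, 1, 1, 1, 0, 1, 1, 0, 1])
p = item_responses.mean()
print(f"Item difficulty index p = {p:.2f}")  # 0.70 for these data
```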
• Item discrimination. This refers to the degree to which items differentiate among examinees in terms of the characteristic being measured. It can be measured in many ways. One method is to correlate item responses with the total test score; items with the highest correlation with the total score are retained for the final version of the test. Another way is a discrimination index (D).
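A minimal sketch of the discrimination index: D is the proportion correct in an upper-scoring group minus the proportion correct in a lower-scoring group. The 27% group size used below is a common convention, not something specified in the text:

```python
# Minimal sketch: discrimination index D = p_upper - p_lower, using
# upper and lower scoring groups (the 27% split is a convention, not
# given in the text). Data are hypothetical: rows are examinees,
# columns are items scored 1/0.
import numpy as np

responses = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1],
])
total = responses.sum(axis=1)
order = np.argsort(total)[::-1]            # examinees sorted high to low
k = max(1, int(round(0.27 * len(total))))  # size of each group

item = 0  # examine the first item
p_upper = responses[order[:k], item].mean()
p_lower = responses[order[-k:], item].mean()
print(f"D = {p_upper - p_lower:.2f}")
```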
• Developmental changes. A test measuring certain constructs can be shown to have construct validity if the scores on the test show predictable developmental changes over time.
• Experimental intervention. If a test has construct validity, scores should change following an experimental manipulation, in the direction predicted by the theory underlying the construct.
3) Criterion-related validity. This is used to predict future or current performance; it correlates test results with another criterion of interest.
4) Formative validity. When applied to outcomes assessment, this is used to assess how well a measure is able to provide information to help improve the program under study.
• Sampling validity. This is similar to content validity. It ensures that the measure covers a broad range of areas within the concept under study. Not everything can be covered, so items need to be sampled from all of the domains. This may need to be completed using a panel of experts to ensure that the content area is adequately sampled.
Ways to Improve Validity
1) Make sure your goals and objectives are clearly defined and achievable. Expectations of students should be written down.
2) Match your assessment measure to your goals and objectives. Additionally, have the test reviewed by faculty at other schools to obtain feedback from an outside party.
3) Get students involved; have the students look over the assessment for troublesome wording or other difficulties.
4) If possible, compare your measure with other measures, or with data that may be available.
What are Non-tests?
 Good instruction involves observing and analyzing student performance, and the most valuable assessment activities should be learning experiences as well. The following are examples of non-tests.
• Oral and written reports. Students research a topic and then present it either orally or in written form.
• Teacher observation. The teacher observes students while they work, to make certain the students understand the assignment and are on task.
• Portfolio of student work. The teacher collects samples of students' work and saves them for a determined amount of time.
• Slates or hand signals. Students use slates or hand signals as a means of signaling answers to the teacher.
• Games. The teacher utilizes fun activities to have students practice and review concepts.
• Projects. Students research a topic and present it in a creative way.
• Debates. Students take opposing positions on a topic and defend their positions.
• Checklist. The teacher makes a list of objectives that students need to master and then checks off each skill as the student masters it.
• Cartooning. Students use drawings to depict situations and ideas.
• Models. Students produce a miniature replica of a given topic.
• Notes. Students write a summary of a lesson.
• Daily assignments. Students complete work assigned on a daily basis, to be completed at school or at home.
• Panel. A group of students verbally presents information.
• Learning centers. Students use teacher-provided activities for hands-on learning.
• Demonstration. Students present a visual enactment of a particular skill or activity.
• Problem solving. Students follow a step-by-step solution of a problem.
• Discussion. Students in a group verbally interact on a given topic.
• Organized note sheets and study guides. Students collect information to help them pass a test.
Other Non-test Instruments
1) Anecdotal record. An anecdotal record is a written record, kept in a positive tone, of a child's progress based on milestones particular to that child's social, emotional, physical, aesthetic, and cognitive development.
2) Observation checklist. An observation checklist is a listing of specific concepts, skills, processes, or attitudes. It is designed to allow the observer to quickly record the presence or absence of specific qualities or understandings.
• Observation Report
The following are questions and answers about observation.
1. What is an observation?
 An observation is an informal visual
assessment of student learning.
2. What is an observation’s objective?
 To help the teacher see how a student is
learning in order to check on the
effectiveness of instruction, and/or to assess
student learning.
3. What does a good observation accomplish?
 Provides immediate feedback about student learning.
4. What is good observation design?
 A rigorous observation is a structured model for the visual assessment of every student over time, so that the student learning experience can be carefully documented.
5. Do you have to observe every student?
 No. An observation can be focused on one student, or on one student over time. An observation could also use a subset, or sample, of students in a structured observation.
7. What is a valid and reliable observation?
 Validity is established when the instrument measures what it is supposed to measure. Reliability is when the instrument measures that content, skill, or knowledge accurately across students, classes, schools, etc.
8. What types of results do you get with observation?
 Observations answer questions of immediate worth: What does the student experience look like right now? What did one student do? What classroom outcome did you observe? What are differences in opinion among students about…? What do most audience members think about…?
9. What is a good observation report?
 A short, concise document that both reveals and shows the most important results.
Guidelines for Practice
• Most importantly, an observer should have a sense of purpose and a question or two that she is looking to answer in the observation.
• When observing, make sure to take notes as best you can during the session, and then flesh them out immediately afterward, or as close to immediately as you can manage. The more time that passes, the less data you will recollect.
• Remember: descriptions should be factual, accurate, and thorough when taking notes. Avoid judging the participants and instead rely on what can be seen and known. Do not worry if you feel you are missing something; it is not possible to observe everything.
• Remember to observe periods of informal interaction and unplanned activity (breaks, free time, arrivals, departures), as well as what "does not happen".
• Practice humility and non-judgment when observing and reporting. Whenever possible, assume no malice. If you both observe and are observed, you are more likely to be generous in your heart, less paranoid in your head, and more effective overall.
Observation Process
• Preparation. Prior to observing, meet with the teaching staff to discuss the residency and goals. Do not show up unannounced to observe a class. Particularly if you are the supervisor or employer of the teacher being observed, be explicit that the goal of the observation is not punitive or job-threatening.
• Gather as much of the demographic information as possible:
a) Name of observer
b) Name of teacher observed
c) School/Center/Organization where the class took place
2. Sample Observation Prompts. When observing a classroom, it can be helpful to have a list of behaviors as a reference. These prompts can be generated based on a specific residency and are often applicable to multiple situations. The following are examples of behaviors associated with student engagement; depending on the objectives of the lesson, the list might be prioritized or shortened.
The student seems to:
a) Follow directions
b) Sustain focus on task
c) Join groups to create and integrate ideas
d) Respect giving and receiving ideas
e) Respect other students and their ideas
f) Participate in large group work
g) Participate in small group work
h) Practice what they are learning
i) Willingly volunteer for activities
j) Ask relevant questions and prompt thoughtful discussion
k) Make connections with previous learning
l) Use theatre vocabulary
m) Produce quality outcomes and complete tasks
n) Participate in reflection
3) Inventory. An inventory of students' learning styles can build self-esteem by helping them discover their strengths, learn about areas in which they might need to make more effort, and appreciate the differences among themselves.
4) Portfolio. A collection of student-produced materials gathered over an extended period of time that allows a teacher to evaluate student growth and overall learning progress during that period.
