
REGINA CLAIRE G. ABAC, RSW
UNIT EARNER

1. In both reliability and validity analysis, the correlation coefficient was used. Do further research on
the correlation coefficient and answer the following.
A. Define correlation coefficient when used as a validity coefficient

 A correlation coefficient is a statistical index used to report evidence of validity for the
intended interpretations of test scores; as a validity coefficient, it is defined as the
magnitude of the correlation between test scores and a criterion variable.
 Validity tells you how useful your experimental results are, and a validity coefficient
is a gauge of how strong (or weak) that "usefulness" factor is. For example, suppose
your research shows that a student with a high GPA should perform well on the SAT
and in college. A validity coefficient tells you more about the strength of the
relationship between the test results and your criterion variables.
 In statistics, the correlation coefficient measures how strong the relationship between
two variables is. When it is used as a validity coefficient, it measures the validity of a
teacher-made test: it determines whether the scores on the test relate to scores
obtained on a previously established criterion, and therefore whether the test really
measures what it was intended to measure. A positive or near-perfect positive result
indicates a high degree of agreement between the two sets of scores and serves as
evidence of test validity.
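The idea above can be sketched in code: a validity coefficient is simply the Pearson correlation between test scores and a criterion. The student scores and criterion values below are invented for illustration only.

```python
# Hypothetical scores on a teacher-made test (X) and an external criterion
# measure of the same competencies (Y). All numbers are made up.
test_scores = [78, 85, 62, 90, 71, 88, 66, 94]
criterion   = [74, 88, 60, 93, 70, 85, 64, 96]

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

validity = pearson_r(test_scores, criterion)
print(round(validity, 3))  # a value near +1 indicates strong agreement with the criterion
```

A coefficient close to +1 would be read as strong criterion-related evidence of validity; one near 0 would suggest the test does not track the criterion.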

B. Define correlation coefficient when used as a reliability coefficient

 The reliability coefficient is represented by the term rxx, the correlation of a test
with itself. Reliability coefficients are variance estimates, meaning that the
coefficient denotes the proportion of the score variance that is attributable to true
scores.
 When used as a reliability coefficient, the correlation coefficient refers to the
consistency of the results obtained when a group of students takes the same test
twice. That is to say, the students' results should be similar across the two
administrations.
 Test-retest reliability (sometimes called retest reliability) measures test
consistency, i.e., the reliability of a test measured over time. In other words, give
the same test twice to the same people at different times to see whether the scores
are the same; for example, test on a Friday, then again the following Friday.
The two sets of scores are then correlated.

C. Enumerate the factors that affect correlation coefficient in both reliability and validity
analysis.

a. Factors affecting the correlation coefficient in reliability analysis:

1. The number of items in the test (the more items, the higher the reliability)

2. The length of the test

3. Random error, which is a source of distortion in test scores



b. There are nine factors that affect the correlation coefficient in validity analysis:

1. Arrangements of the test items

2. Level of difficulty of the test items

3. Inappropriateness of the test items

4. Ambiguity

5. Directions of the test items

6. Poorly constructed test items

7. Pattern of correct answers

8. Reading vocabulary and sentence structure

9. Length of the test items

2. Suppose you are already a teacher, and a fellow teacher handling the same subject tells you that
he has already prepared a set of tests. Discuss the effect on the validity of the tests in your class if
you were to adopt them. Identify the particular types of validity that would be most affected and
discuss why.

1. Face Validity - refers to the outward appearance of the test and is concerned with the
likelihood that a question will be misunderstood. My students may not be familiar with my
fellow teacher's method of examination.

2. Content Validity - refers to the extent to which the test or assessment method
provides adequate coverage of the topics taught. The content of our class discussion may
not match my fellow teacher's class, so some topics may be missing from (or extraneous
to) the exam, which will affect its validity.

3. Construct Validity - refers to the extent to which test performance can be interpreted
in terms of one or more psychological constructs. The constructs targeted for my students
and for my co-teacher's students may differ, so the interpretations may not transfer.

4. Concurrent Validity - refers to the degree to which scores obtained from an
instrument correlate with scores obtained from a criterion, such as a test of known
validity. The scores my students would get on my co-teacher's exam may not match my
own established criterion, which would undermine it.

5. Predictive Validity - refers to the degree of accuracy of a test in relation to
performance at some subsequent time. My fellow teacher's target examinees are
different from mine, and if I use his examination, my own students' capability may not
be measured properly.

3. Suppose you have administered a 50-item test in the previous school year. Now your students
took a 100-item standardized test assessing similar competencies. Based on the available
data below, identify the type of validity that can be established. Compute the validity coefficient
and provide an interpretation.

THEREFORE:

The relationship is positive. It can be concluded that there is a high degree of relationship
between the 50-item test administered in the previous school year and the 100-item
standardized test covering similar competencies. Because the earlier teacher-made test is being
correlated with performance on a criterion measure taken at a subsequent time, the type of
validity established is criterion-related validity, specifically predictive validity.

4. Compare and contrast the three ways of establishing reliability using a Venn diagram.

Test-Retest Method - used to assess the consistency of a measure/score from one time to
another (the same test given twice).

Equivalent or Parallel Forms Method - used to assess the consistency of the results of two
tests constructed in the same way from the same content domain.

Internal Consistency Method - used to assess the consistency of results across items
within a single test.

Similarities and differences:
- ALL three are used to assess consistency.
- Test-retest and parallel forms both assess the consistency of two test administrations,
but they assess different consistencies (stability over time versus equivalence of forms).
- Parallel forms and internal consistency both assess the consistency of results, but they
assess different numbers of tests (one deals with two different tests, while the other deals
with only one test).
- Test-retest and internal consistency likewise differ in the number of tests involved (two
administrations versus a single test).
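Of the three methods, internal consistency is the only one computable from a single administration. One common estimate is Cronbach's alpha; the small item-score matrix below (rows = students, columns = items) is invented for illustration.

```python
# Internal-consistency sketch: Cronbach's alpha from a hypothetical score
# matrix. Split-half correlation (with a Spearman-Brown correction) is
# another common internal-consistency estimate.
scores = [
    [1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
]

def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

k = len(scores[0])                                              # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])              # variance of total scores
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

Alpha rises when items covary (measure the same thing) and, as question 5 below notes, when the number of items k grows.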

5. Discuss how “composing a long test” helps improve the reliability of a test.

a) Longer tests can produce higher reliability.
b) The teacher can also use different types of test items covering all the topics.
c) The teacher may opt to include more items or more types of test.
d) More topics can be covered.
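Point (a) can be made quantitative with the Spearman-Brown prophecy formula, which predicts the reliability of a test lengthened by a factor n from its current reliability r. The starting reliability of 0.60 below is an invented example.

```python
# Spearman-Brown prophecy formula: predicted reliability of a test made
# n times as long, assuming the added items are comparable in quality.
def spearman_brown(r, n):
    return (n * r) / (1 + (n - 1) * r)

r = 0.60  # hypothetical current reliability
for n in (1, 2, 3):
    print(n, round(spearman_brown(r, n), 2))
# Doubling the test (n = 2) raises 0.60 to 0.75, and tripling it (n = 3)
# raises it to about 0.82 -- illustrating why longer tests tend to be
# more reliable.
```

The gain shrinks as n grows, so lengthening a test helps most when the starting reliability is modest.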
