Course: Educational Assessment and Evaluation
Code: 8602 Level: B.Ed SPRING 2018
Assignment No: 01
Q No: 1 Explain the principles of classroom assessment by providing examples. Also differentiate
between formative and summative assessment in the light of their importance in teaching learning
process.
Classroom Assessment
After the instructional objectives are formulated, educational experiences can be developed that encompass
the teaching materials and instructional opportunities that will be provided to students. Also during this
planning stage, teachers must consider how they will determine if students have attained the instructional
objectives. Indeed, good objectives are those that clearly define the type of activity the students will
accomplish to indicate the degree to which the students have attained the objective. After students
experience the learning opportunities provided by the teacher and after assessment has occurred, the
teacher's task is to examine the assessment results and decide whether students have sufficiently reached the
objectives. If they have not, the teacher can revise the educational experiences until attainment has occurred.
Thus, Tyler's model of testing emphasized the formative role of classroom assessment.
The aim of theory and practice in educational measurement is typically to measure abilities and levels of
attainment by students in areas such as reading, writing, mathematics, science and so forth. Traditionally,
attention focuses on whether assessments are reliable and valid. In practice, educational measurement is
largely concerned with the analysis of data from educational assessments or tests. Typically, this means
using total scores on assessments, whether they are multiple choice or open ended and marked using
marking rubrics or guides.
In technical terms, the pattern of scores by individual students to individual items is used to infer so-called
scale locations of students, the "measurements". This process is one form of scaling. Essentially, higher total
scores give higher scale locations, consistent with the traditional and everyday use of total scores. If certain
theory is used, though, there is not a strict correspondence between the ordering of total scores and the
ordering of scale locations. The Rasch model provides a strict correspondence provided all students attempt
the same test items, or their performances are marked using the same marking rubrics. In terms of the broad
body of purely mathematical theory drawn on, there is substantial overlap between educational measurement
and psychometrics. However, certain approaches considered to be a part of psychometrics, including
Classical test theory, Item Response Theory and the Rasch model, were originally developed more
specifically for the analysis of data from educational assessments.
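As a concrete, minimal sketch of this correspondence, the Python fragment below implements the dichotomous Rasch model with hypothetical ability and difficulty values (the numbers are assumptions chosen only for illustration):

```python
import math

def rasch_p_correct(theta: float, b: float) -> float:
    """Rasch model: probability that a student at ability location
    `theta` answers an item of difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical item difficulties and ability locations, for illustration:
item_difficulties = [-1.0, 0.0, 1.5]          # easy, medium, hard
for theta in (-0.5, 0.5, 1.5):
    probs = [rasch_p_correct(theta, b) for b in item_difficulties]
    # Higher ability locations imply higher expected total scores,
    # mirroring the correspondence described above.
    print(f"theta={theta:+.1f}  expected total score={sum(probs):.2f}")
```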
Classroom assessment is the process, usually conducted by teachers, of designing, collecting, interpreting, and applying information about student learning and attainment to make educational decisions. There are four interrelated steps to the classroom assessment process. The first step is to define the purposes for the
information. During this period, the teacher considers how the information will be used and how the
assessment fits in the students' educational program. The teacher must consider if the primary purpose of the
assessment is diagnostic, formative, or summative. Gathering information to detect student learning impediments, difficulties, or gaps in prerequisite skills is diagnostic assessment. Collecting information on a frequent basis to provide student feedback and guide either student learning or instruction serves formative purposes, and collecting information to gauge student attainment at some point in time, such as at the end of the school year or grading period, is summative assessment.
The next step in the assessment process is to measure student learning or attainment. Measurement involves using tests, surveys, observation, or interviews to produce either numeric or verbal descriptions of the degree to which a student has achieved academic goals. The third step is to evaluate the measurement data, which entails making judgments about the information. During this stage, the teacher interprets the measurement data to determine if students have certain strengths or limitations or whether the student has sufficiently attained the learning goals. In the last stage, the teacher applies the interpretations to fulfill the aims of assessment that were defined in the first stage. The teacher uses the data to guide instruction, render grades, or help students with any particular learning deficiencies or barriers.
The following general principles should guide both policies and practices for the classroom assessment of
young children:
• Classroom assessment should bring about benefits for children. Gathering accurate information from
young children is difficult and potentially stressful. Classroom assessments must have a clear
benefit—either in direct services to the child or in improved quality of educational programs.
• Classroom assessment should be tailored to a specific purpose and should be reliable, valid, and fair
for that purpose. Classroom assessments designed for one purpose are not necessarily valid if used
for other purposes. In the past, many abuses of testing with young children have occurred because assessments designed for one purpose were misused for another.
• Classroom assessment policies should be designed recognizing that reliability and validity of
assessments increase with children’s age. The younger the child, the more difficult it is to obtain
reliable and valid assessment data. It is particularly difficult to assess children’s cognitive abilities
accurately before age six. Because of problems with reliability and validity, some types of
assessment should be postponed until children are older, while other types of assessment can be
pursued, but only with necessary safeguards.
• Classroom assessment should be age appropriate in both content and the method of data collection.
Assessments of young children should address the full range of early learning and development,
including physical well-being and motor development; social and emotional development;
approaches toward learning; language development; and cognition and general knowledge. Methods
of Classroom assessment should recognize that children need familiar contexts to be able to
demonstrate their abilities. Abstract paper-and-pencil tasks may make it especially difficult for
young children to show what they know.
• Classroom assessment should be linguistically appropriate, recognizing that to some extent all
assessments are measures of language. Regardless of whether an assessment is intended to measure
early reading skills, knowledge of color names, or learning potential, assessment results are easily
confounded by language proficiency, especially for children who come from home backgrounds with
limited exposure to English, for whom the assessment would essentially be an assessment of their
English proficiency. Each child’s first- and second-language development should be taken into
account when determining appropriate assessment methods and in interpreting the meaning of
assessment results.
• Parents should be a valued source of assessment information, as well as an audience for classroom
assessment. Because of the fallibility of direct measures of young children, assessments should
include multiple sources of evidence, especially reports from parents and teachers. Assessment
results should be shared with parents as part of an ongoing process that involves parents in their
child’s education.
Formative Assessment
Formative assessment, including diagnostic testing, is a range of formal and informal assessment procedures
conducted by teachers during the learning process in order to modify teaching and learning activities to
improve student attainment. It typically involves qualitative feedback (rather than scores) for both student and teacher that focuses on the details of content and performance. It is commonly contrasted with summative
assessment, which seeks to monitor educational outcomes, often for purposes of external accountability.

When incorporated into classroom practice, the formative assessment process provides information needed
to adjust teaching and learning while they are still happening. The process serves as practice for the student
and a check for understanding during the learning process. The formative assessment process guides
teachers in making decisions about future instruction. Questioning, observation, exit slips, and short ungraded quizzes are a few examples that may be used in the classroom during the formative assessment process to collect evidence of student learning.
Michael Scriven coined the terms formative and summative evaluation in 1967, and emphasized their
differences both in terms of the goals of the information they seek and how the information is used. For
Scriven, formative evaluation gathered information to assess the effectiveness of a curriculum and guide
school system choices as to which curriculum to adopt and how to improve it. Benjamin Bloom took up the
term in 1968 in the book Learning for Mastery to consider formative assessment as a tool for improving the
teaching-learning process for students. His subsequent 1971 book, Handbook on Formative and Summative Evaluation of Student Learning, written with Thomas Hastings and George Madaus, showed how formative assessments could be
linked to instructional units in a variety of content areas. It is this approach that reflects the generally
accepted meaning of the term today. For both Scriven and Bloom, an assessment, whatever its other uses, is
only formative if it is used to alter subsequent educational decisions. Subsequently, however, Black and
Wiliam have suggested this definition is too restrictive, since formative assessments may be used to provide
evidence that the intended course of action was indeed appropriate. They propose that:
Practice in a classroom is formative to the extent that evidence about student achievement is elicited,
interpreted, and used by teachers, learners, or their peers, to make decisions about the next steps in
instruction that are likely to be better, or better founded, than the decisions they would have taken in the
absence of the evidence that was elicited.
Summative Assessment
Summative assessments are used to evaluate student learning, skill acquisition, and academic achievement at
the conclusion of a defined instructional period—typically at the end of a project, unit, course, semester,
program, or school year. Generally speaking, summative assessments are defined by three major criteria:
• The tests, assignments, or projects are used to determine whether students have learned what they were
expected to learn. In other words, what makes an assessment “summative” is not the design of the test,
assignment, or self-evaluation, per se, but the way it is used—i.e., to determine whether and to what
degree students have learned the material they have been taught.
• Summative assessments are given at the conclusion of a specific instructional period, and therefore they
are generally evaluative, rather than diagnostic—i.e., they are more appropriately used to determine
learning progress and achievement, evaluate the effectiveness of educational programs, measure
progress toward improvement goals, or make course-placement decisions, among other possible
applications.
• Summative-assessment results are often recorded as scores or grades that are then factored into a
student’s permanent academic record, whether they end up as letter grades on a report card or test
scores used in the college-admissions process. While summative assessments are typically a major
component of the grading process in most districts, schools, and courses, not all assessments considered
to be summative are graded.
The Assessment component of B-SLIM requires both assessment for learning (formative) and assessment of
learning (summative). While it is crucial that students’ work, abilities and progress be tracked and assessed
throughout the entire learning process, it is also imperative that teachers have proof of what the students
have learned during that process. It is the summative assessment that is used to determine grades and future
directions for students. This type of assessment is the culmination of a unit/section/chapter of study and
comes during the Proving It stage of B-SLIM. Summative assessment tells both the teacher and the student
what areas are clear to the student, and which will require more work. For summative assessment to be
effective and useful the results of a summative assessment need to be compared with some sort of a
standard; this could be within the class, city-wide, province/state-wide, national standards, etc.

Rubrics and checklists are valuable and effective tools for summative assessment. They allow the teacher to
set out the criteria before the task is even given to students, thereby allowing everyone concerned to be
aware of what is expected. While they are useful, they can be challenging to create, especially for new
teachers. The key to an effective rubric or checklist is to ensure all of the necessary and appropriate criteria
are included, and sufficient distinction is made between different levels of performance so that student work
can more easily be assessed.
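As a rough sketch of how such a rubric can be laid out and scored before the task is even given, the Python fragment below defines illustrative criteria and performance levels; the criterion names, descriptors, and the 1-4 scale are assumptions rather than a prescribed rubric:

```python
# Illustrative rubric: three assumed criteria, each scored on the same
# 1-4 performance levels. The names and descriptors are examples only.
rubric = {
    "content accuracy": {1: "major errors", 2: "frequent errors",
                         3: "minor errors", 4: "accurate throughout"},
    "organization":     {1: "no structure", 2: "partial structure",
                         3: "mostly clear", 4: "clear and logical"},
    "use of evidence":  {1: "none", 2: "limited", 3: "adequate", 4: "strong"},
}

def score_work(levels_awarded: dict) -> tuple:
    """Sum the level awarded for each criterion; report the maximum too."""
    total = sum(levels_awarded[criterion] for criterion in rubric)
    return total, 4 * len(rubric)

total, maximum = score_work({"content accuracy": 3,
                             "organization": 4,
                             "use of evidence": 2})
print(f"{total}/{maximum}")   # -> 9/12
```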
Q No: 2 Write down the important characteristics of educational objectives. Also describe the SOLO
taxonomy of educational objectives and its role in test development.
Assessment is one of education's new four-letter words, but it shouldn't be, because it's not assessment's fault
that some adults misuse it. Assessment is supposed to guide learning. It creates a dynamic where teachers
and students can work together to progress their own understanding of a subject or topic. Assessment should
be about authentic growth. Testing in the U.S. is very different from assessment. I know that sounds absurd
but tests have more finality here. When it comes to testing, we have a love affair with multiple choice or true
and false. We test whether they know the right answer or not. Lots of tests are made of hard questions and
easy ones. How deeply they know the answer doesn't matter, just as long as they know it. State tests focus
less on what students know, and more on what teachers supposedly taught. When it comes to assessing
student learning, most educators know about Bloom's Taxonomy. They use it in their practices, and feel as
though they have a good handle on how to use it in their instructional practices and assessment of student
learning. In our educational conversations we bring up Bloom's Taxonomy and debate whether students have knowledge of a topic, and if they can apply it to their daily life. Interestingly enough, Bloom's handbook has been described as "one of the most widely cited yet least read books in American education". We are guilty of doing that from time to time. It's human nature to tout a philosophy that we may only have surface-level knowledge of, which is kind of ironic when we're talking about Bloom's Taxonomy. For a more in-depth understanding of Bloom's, the Center for Teaching at Vanderbilt University website says, "Here are the authors' brief explanations of these main categories from the appendix of Taxonomy of Educational Objectives:
• Knowledge - "involves the recall of specifics and universals, the recall of methods and processes, or
the recall of a pattern, structure, or setting."
• Comprehension - "refers to a type of understanding or apprehension such that the individual knows
what is being communicated and can make use of the material or idea being communicated without
necessarily relating it to other material or seeing its fullest implications."
• Application - refers to the "use of abstractions in particular and concrete situations."
• Analysis - represents the "breakdown of a communication into its constituent elements or parts such
that the relative hierarchy of ideas is made clear and/or the relations between ideas expressed are
made explicit."
• Synthesis - involves the "putting together of elements and parts so as to form a whole."
• Evaluation - engenders "judgments about the value of material and methods for given purposes."
Later, the metacognitive knowledge dimension was added, the nouns were changed to verbs, and the last two cognitive processes were switched in order.
The criticism with Bloom's is that it seems to focus on regurgitating information, and that anything goes. A
student can provide a surface-level answer to a difficult question, or a deep answer to a surface-level
question. It may show a student has an answer, but does it allow for teachers and students to go deeper with
their learning, or do they just move on? According to Pam Hook, "There is no necessary progression in the manner of teaching or learning in Bloom's taxonomy." If we want students to take control over their own
learning, can they use Bloom's Taxonomy, or is there a better method to help them understand where to go
next?
Going SOLO
A much less known taxonomy of assessing student learning is SOLO, which was created by John Biggs and
Kevin Collis in 1982. According to Biggs, "SOLO, which stands for the Structure of
the Observed Learning Outcome, is a means of classifying learning outcomes in terms of their complexity,
enabling us to assess students' work in terms of its quality not of how many bits of this and of that they got
right." According to Biggs and Collis (1982), there are five stages of "ascending structural complexity."
Those five stages are:
• Prestructural - incompetence (they miss the point).
• Unistructural - one relevant aspect
• Multistructural - several relevant and independent aspects
• Relational - integrated into a structure
• Extended Abstract - generalized to new domain
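As a toy illustration of how these levels might be operationalized when classifying responses or constructing test items, the sketch below maps coarse response features to a SOLO level; the features and cut-offs are assumptions for illustration, not Biggs and Collis's formal criteria:

```python
def solo_level(relevant_ideas: int, ideas_linked: bool,
               generalizes_beyond_task: bool) -> str:
    """Toy classifier mapping coarse features of a student response to a
    SOLO level. The features and cut-offs are illustrative assumptions,
    not part of Biggs and Collis's formal definition."""
    if relevant_ideas == 0:
        return "Prestructural"
    if relevant_ideas == 1:
        return "Unistructural"
    if not ideas_linked:
        return "Multistructural"
    if not generalizes_beyond_task:
        return "Relational"
    return "Extended Abstract"

# A response with three relevant, connected ideas that stays on-task:
print(solo_level(3, ideas_linked=True, generalizes_beyond_task=False))
# -> Relational
```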
If we are going to spend so much time in the learning process, we need to do more than accept that students
"get" something at "their level" and move on. Using SOLO taxonomy really presents teachers and students
with the opportunity to go deeper into learning whatever topic or subject they are involved in, and assess
learning as they travel through that learning experience. Through reading blogs and research, one of the
positive sides to SOLO is that it makes it easier for teachers to identify the levels, and therefore helps guide students through the learning process. And for my unschooling friends, this has implications for all students, whether they are within the four walls of a school or outside of them. John Hattie is a proponent of SOLO Taxonomy, and has broken it down into a simpler form that students can understand, giving them the ability to assess their own learning. Hattie suggests that teachers can use:
• No Idea - equivalent to the prestructural level
• One Idea - equivalent to the unistructural level
• Many Ideas - equivalent to the multistructural level
• Relate - equivalent to the relational level
• Extend - equivalent to the extended abstract level
Lastly, Hook goes on to say, "there are some real advantages to SOLO Taxonomy.
• These advantages concern not only item construction and scoring, but incorporate features of the
process of evaluation that pay attention to how students learn, and how teachers devise instructional
procedures to help students use progressively more complex cognitive processes.
• Both teachers and students often progress from surface to deeper constructs and this is
mirrored in the four levels of the SOLO taxonomy.
• The levels can be interpreted relative to the proficiency of the students. Six-year-old students can be
taught to derive general principles and suggest hypotheses, though obviously to a different level of
abstraction and detail than their older peers. Using the SOLO method, it is relatively easy to
construct items to assess such abstractions.
• Unlike the experience of some with the Bloom taxonomy, it is relatively easy to identify and
categorize the SOLO levels.
Similarly, teachers could be encouraged to use the 'plus one' principle when choosing appropriate learning
material for students. That is, the teacher can aim to move the student one level higher in the taxonomy by
appropriate choice of learning material and instructional sequencing.
Q No: 3 Describe the role of test and other assessment techniques for improving teaching learning
process.
Data from standardized testing gives district leaders, school administrators and teachers a global view of
their students. As 2002 Maryland State Teacher of the Year Linda Eberhardt notes in her discussion of
student data, this information gives teachers high-level insight into what students know. This allows teachers
to understand the major gaps students might have in their learning before they’re in front of the classroom.

While administrators use that information to address curricular or teaching insufficiency, teachers can use
knowledge gap information to identify subjects where they might need to devote additional teaching time,
and to create individual assessments. Analytical assessment software helps teachers anticipate student skill
gaps, helping them to sort their classes in ways that enhance individualized student learning.
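As a rough sketch of the kind of grouping such software might perform (the student names, scores, and mastery threshold below are all hypothetical):

```python
# Flag each student's weak skills against a mastery threshold and group
# students who share a gap. Names, scores and the 0.70 threshold are
# hypothetical.
THRESHOLD = 0.70
scores = {
    "Amina": {"fractions": 0.55, "decimals": 0.82},
    "Bilal": {"fractions": 0.90, "decimals": 0.60},
    "Sara":  {"fractions": 0.65, "decimals": 0.58},
}

groups: dict = {}
for student, skills in scores.items():
    for skill, score in skills.items():
        if score < THRESHOLD:
            groups.setdefault(skill, []).append(student)

print(groups)
# -> {'fractions': ['Amina', 'Sara'], 'decimals': ['Bilal', 'Sara']}
```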
This trend has translated into increased assessment of learning using a variety of methods and instruments.
These fall most generally into three categories.
• Large-scale, sample-based assessments are conducted internationally, regionally, nationally, and
sometimes sub-nationally with the main aim of appraising the efficacy and equity of an education
system. These typically measure learning achievement around Literacy, Mathematics, and sometimes
Science.
• Large-scale, census-based assessments are normally conducted system-wide. These commonly
serve to make or influence decisions by the system, a school, and, in the case of examinations, the
child and her/his parents about their continued education or training and of their post education
professional and social roles. They may also serve to hold teachers, schools, districts and other
responsible actors accountable for their students’ learning outcomes.
• School-level, continuous assessments are usually performed by teachers in the classroom to identify
the level of learning of individual students (and sometimes of a class or some other grouping) on
different aspects of the curriculum. These occur either for summative or formative purposes, and
often both.
It is the last of these three, school-level, continuous assessment, which is the object of the present study.
Two aims are of particular concern. One is to elucidate what continuous assessment is and why it is
important. The other is to identify a range of issues which are fundamental to the effective implementation
and usefulness of continuous assessment in the classroom. The analysis focuses on classrooms in low-income countries, where, for a variety of reasons, conditions such as very large class sizes, few resources, poorly trained teachers, and a severe triage of students from the formal system (i.e., testing that pushes students out of school) often pose particularly problematic challenges. Finally, to repeat from the definition
above, the concept and practice of continuous assessment comprise two distinct elements: summative and
formative. While closely related and even able to occur using the same instruments, these often suffer from
facile conflation, usually to the detriment of formative assessment. The following sections are devoted
largely to elucidating these differences and aim especially to assert and protect the critical importance of
formative continuous assessment. Fundamentally, the difference lies in an assessment's purpose, which is generally evident in its form. The ‘summative’ purpose, sometimes also referred to as ‘assessment of learning’, is
essentially to determine the level of a student’s cumulative attainment of a set of learning objectives. Among
the common methods used for such assessments are tests, quizzes, substantive homework assignments, and
projects.
These methods serve typically to calculate a grade, or score, at the end of a learning block (e.g., a chapter, a module, a trimester or a year), which represents the accumulated learning of a particular topic. Conversely, ‘formative’ assessments, also referred to as ‘assessment for learning,’ serve typically to provide signals
concerning the level of attainment of specific learning aims, the results of which serve to inform and
stimulate actions. These actions may occur at the classroom, school, and system levels and can also involve
quizzes and homework along with a variety of discrete tasks and other checks performed during a lesson,
and simple keen observation. These are designed mainly to improve the related learning, whether of the

students who took the assessment or of other students who will follow them. The reality is, however, that
virtually all modes of assessment may serve both purposes.
Formative assessment practices can serve all of these purposes. There is a great deal of confusion about the term
formative assessment. Many people think of it as another test given to students—in the same way an interim
or summative test is given—and that it is separate from classroom instruction. In the way NWEA uses the
term, formative assessment is a planned process wherein both students and teachers continually gather
evidence of learning and use it to change and adapt what happens in the classroom minute-to-minute and
day-by-day. Used during instruction, this process permits educators and students to collect critical
information about student and classroom progress and to uncover opportunities for review, to provide
feedback and to suggest adjustments to the teacher’s approach to instruction and the student’s approach to
learning.

Assessment Techniques
Assessment instruments on any level serve at least one of two purposes. One purpose can be to give an
individual some indication of actual achievement. The other purpose is to identify trends among groups. The
information compiled from standardized tests tells districts how their students are doing in comparison to
students in similar situations around the state or nation. From this information, districts can make decisions
about the delivery of their educational program. In the classroom, assessments can inform the teacher about
the progress of students as a lesson proceeds and of their achievement when instruction has concluded. In all
cases, it boils down to gathering information for making decisions. There’s nothing mystical or magical
about this, though it seems that assessment folks often try to make it so. Just read carefully and you’ll see
how it all works.
Evaluation
Since assessment is just the gathering of information, that’s not the part that really bothers test takers.
Rather, it’s the next step when the assessment information (data) is compared to some value structure.
Evaluation is when value is placed on accumulated assessment data. When a teacher places a value (a grade)
on test results, or a tax assessor places value (in dollars) on the assessment of a house, then evaluation has
occurred. So you can see that all evaluations include assessments, but not all assessments necessarily include
evaluations.
So, assessing and evaluating are two different activities with different guidelines. For assessment the key
point is to gather the appropriate data for the decision that needs to be made. For evaluation the key point is
in establishing an appropriate value structure to represent the data.
The keys to good evaluation of your students’ progress:
1. Gather the appropriate data (assessment)
2. Establish an appropriate value structure to represent the results (evaluation)
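A minimal sketch of this two-step separation, with assumed grade boundaries standing in for the value structure:

```python
# The assessment data (a raw score) is kept separate from the value
# structure (grade bands) used to evaluate it. The boundaries below are
# illustrative assumptions, not a prescribed grading scheme.
GRADE_BANDS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (0, "F")]

def evaluate(raw_score: float, max_score: float) -> str:
    """Apply the value structure to the assessment data to yield a grade."""
    percent = 100 * raw_score / max_score
    for cutoff, grade in GRADE_BANDS:
        if percent >= cutoff:
            return grade
    return "F"   # unreachable given the 0 cutoff, kept for clarity

print(evaluate(42, 50))   # 84% -> "B" under these bands
```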
Formative and Summative Assessments: Two of the Most Important Tools in the Box
Though we will also discuss standardized testing, our emphasis is going to be on classroom assessments that
a teacher uses. Rather than shifting back and forth between assessment and evaluation, we will make a
useful distinction between two types of assessments: formative and summative.
Formative assessments are those data collection activities that a teacher uses to make instructional
decisions. Don’t let the “form” in formative confuse this with meaning a “formal” test (whatever that is). In
this case we are using data to help formulate our course of instruction. This could be as structured as a
lengthy written test given before a lesson or unit begins to find out what your students know. It could be that
same test given to the students midway through the lesson to see how things are going—or even at the end
but before the final test just to see whether the students have progressed as you desired. But it could also be
a pop test here and there or even just the questions that you ask in class to see how things are going with the
lesson. It could also be the case that you use a checklist as you monitor student work. In all of these cases
you are using the information to make decisions about what you need to do. No grades are assigned, no
stickers distributed, no smiley faces or frowns on the student’s lab report. Formative assessments are the
means you use to find out how things are going so that you can decide how to proceed. The robust use of
formative assessments, if you pay attention to the data you collect, will be the key to providing effective
learning experiences for your students.
So, an important aspect of the assessment component of an effective teacher’s strategy will be the consistent
use of formative assessments. Rather than plowing through some unit of study and simply having a test
(summative assessment) at the end, a teacher who uses formative assessments throughout instruction can
monitor the progress of the students and adjust instruction accordingly. This is the purpose of formative
assessments.
It is important for you to understand that assessment techniques represent skills that a teacher must develop.
Simply asking a class, “Does everybody understand?” will not suffice for a viable formative assessment.
Students who do understand will likely answer affirmatively while students who don’t understand may
prefer not to make that point known. No one likes to look foolish in front of one’s peers; thus formative
assessments must be conducted in a manner that protects the student’s self-concept.

Does classroom assessment really matter when it comes to student achievement?

Based on research spanning several decades, classroom achievement has been found to improve when
students, particularly low-achieving students, are actively engaged and receive feedback on their
performance during an instructional event. According to the Assessment Reform Group (1999), based on the assessment research of Black and Wiliam (1998), students can achieve at high levels when key instructional and assessment practices are followed in the classroom.
The main assessment types or approaches that are used in the classroom include formative assessment, self-
assessment, and summative assessment. Formative assessment involves the teacher providing constructive
review, confirmation and/or correction to students in order to promote their learning without any formal cost
(e.g., losing points, being graded) connected to the learning event. Self-assessment is a relatively new skill
expectation for students. As a process, self-assessment involves students selecting and/or prioritizing
individual learning goals or outcomes, monitoring their progress toward those learning outcomes as well as
determining what individual adjustments, if any, are needed throughout an instructional experience.
Summative assessment is the most recognized form of classroom assessment. This type of assessment is
used to officially confirm and document a student’s performance usually in the recognized form of a grade
or mark. The most recognized summative assessment measure is the classroom test. However, other forms
of student work (e.g., project, rubric, portfolio) can and do serve as useful summative assessments.
In order for summative assessment to be truly effective, formative assessment and self-assessment must be
utilized and directly connected to any summative product. In fact, all need to be part of the instructional
process. Although designed for different purposes, collectively they provide the opportunity for academic
success to be maximized for every learner in the classroom; and all are necessary when constructing and
utilizing any classroom assessment system.
How are assessment and teaching connected?
Assessment exists as the essential complement to teaching. With an effective classroom assessment system
in place, a valid demonstration of student learning and progress connected to classroom instruction and
experience can be confirmed. Moreover, if the classroom assessment system is aligned with the intended
academic content standards, then direct evidence that students have acquired expected knowledge and skills
mandated by district, state, or national standards can be provided. By making assessment a part of the
teaching process, it becomes an essential element of every educational experience that is provided in the
classroom. Classroom assessment is, by design, a continuous process where specific student product
information (e.g., pages read and review questions answered, two-digit multiplication problems solved,
Shakespeare project completed, questions answered in class, etc.) is examined and reviewed to make sure

appropriate and genuine progress toward an identified learning goal or target (i.e. what students are expected to know and be able to do once the instruction is complete) is being made.
Formative and summative assessments are indispensable aspects of effective instruction, but clearly the aims
are different. The former is used to modify or plan instruction, the latter for recognizing the level of
academic achievement a student has reached. And as you will see in upcoming sections of this unit,
assessments are not necessarily a matter of responding in writing to questions on a page. There are many
formats that we can use, and it will always be the case that we want to choose a format that is suited to the
task at hand and developmentally appropriate for the student.
Formative and Summative Assessment
Formative Assessment:
• Teacher might ask questions, use observations, or a written test
• Responses tell the teacher whether students are ready to move on or if students need more instruction
Summative Assessment:
• Teacher might ask questions, use observations, or a written test
• Responses used to assign a grade; there will be no re-teaching

Q No: 4 Elaborate the different techniques for the measurement of attitude of the learners by
providing examples. Why attitude measurement is important for the teachers in teaching learning
process?

Perhaps the most straightforward way of finding out about someone’s attitudes would be to ask them.
However, attitudes are related to self-image and social acceptance (i.e. attitude functions). In order to
preserve a positive self-image, people’s responses may be affected by social desirability. They may not reveal their true attitudes, but instead answer in a way that they feel is socially acceptable.
Given this problem, various methods of measuring attitudes have been developed. However, all of them
have limitations. In particular the different measures focus on different components of attitudes – cognitive,
affective and behavioral – and as we know, these components do not necessarily coincide.
Attitude measurement can be divided into two basic categories:
o Direct Measurement (Likert scale and semantic differential)
o Indirect Measurement (projective techniques)
Semantic Differential
The semantic differential technique of Osgood et al. (1957) asks a person to rate an issue or topic on a
standard set of bipolar adjectives (i.e. with opposite meanings), each representing a seven point scale. To
prepare a semantic differential scale, you must first think of a number of words with opposite meanings that
are applicable to describing the subject of the test. For example, participants are given a word, for example
'car', and presented with a variety of adjectives to describe it. Respondents tick to indicate how they feel
about what is being measured.
Osgood mapped the average ratings of the word 'polite' given by two groups of 20 people on ten of his bipolar scales.
The semantic differential technique reveals information on three basic dimensions of attitudes: evaluation,
potency (i.e. strength) and activity.
• Evaluation is concerned with whether a person thinks positively or negatively about the attitude topic (e.g.
dirty – clean, and ugly - beautiful).
• Potency is concerned with how powerful the topic is for the person (e.g. cruel – kind, and strong - weak).
• Activity is concerned with whether the topic is seen as active or passive (e.g. active – passive).
Using this information we can see if a person’s feeling (evaluation) towards an object is consistent with their
behavior. For example, a person might like the taste of chocolate (evaluative) but not eat it often (activity).
The evaluation dimension has been most used by social psychologists as a measure of a person’s attitude,
because this dimension reflects the affective aspect of an attitude.
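A minimal sketch of how semantic differential ratings might be reduced to these three dimension scores; the adjective pairs and their grouping below are illustrative assumptions:

```python
# Each bipolar adjective pair is rated 1-7 and grouped under one of
# Osgood's three dimensions. The pairs and their grouping here are
# illustrative assumptions.
DIMENSIONS = {
    "evaluation": ["dirty-clean", "ugly-beautiful"],
    "potency":    ["weak-strong", "small-large"],
    "activity":   ["passive-active", "slow-fast"],
}

def dimension_scores(ratings: dict) -> dict:
    """Average the 1-7 ratings of the scales belonging to each dimension."""
    return {dim: sum(ratings[scale] for scale in scales) / len(scales)
            for dim, scales in DIMENSIONS.items()}

ratings = {"dirty-clean": 6, "ugly-beautiful": 5, "weak-strong": 3,
           "small-large": 4, "passive-active": 2, "slow-fast": 3}
print(dimension_scores(ratings))
# -> {'evaluation': 5.5, 'potency': 3.5, 'activity': 2.5}
```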
Evaluation of Direct Methods


An attitude scale is designed to provide a valid, or accurate, measure of an individual’s social attitude. However, as anyone who has ever “faked” an attitude scale knows, there are shortcomings in these self-report scales of attitudes. There are various problems that affect the validity of attitude scales. However, the most common problem is that of social desirability.
Social desirability refers to the tendency for people to give “socially desirable” responses to questionnaire items. People are often motivated to give replies that make them appear “well adjusted”, unprejudiced, open-minded and democratic. Self-report scales that measure attitudes towards race, religion, sex etc. are heavily affected by social desirability bias.
Respondents who harbor a negative attitude towards a particular group may not wish to admit to the
experimenter (or to themselves) that they have these feelings. Consequently, responses on attitude scales are
not always 100% valid.
Projective Techniques
To avoid the problem of social desirability, various indirect measures of attitudes have been used. Either
people are unaware of what is being measured (which has ethical problems) or they are unable consciously
to affect what is being measured.
Indirect methods typically involve the use of a projective test. A projective test involves presenting a person with an ambiguous (i.e. unclear) or incomplete stimulus (e.g. a picture or words). The stimulus requires interpretation from the person. Therefore, the person’s attitude is inferred from their interpretation of the ambiguous or incomplete stimulus.
The assumption behind these measures of attitudes is that the person will “project” his or her views, opinions or attitudes into the ambiguous situation, thus revealing the attitudes the person holds. However, indirect methods only provide general information and do not offer a precise measurement of attitude strength, since the information gathered is qualitative rather than quantitative. A major criticism is that this method of attitude measurement is not objective or scientific.

Thematic Apperception Test


Here a person is presented with an ambiguous picture which they have to interpret. The Thematic
Apperception Test (TAT) taps into a person’s unconscious mind to reveal the repressed aspects of their
personality. Although the picture, illustration, drawing or cartoon that is used must be interesting enough to
encourage discussion, it should be vague enough not to immediately give away what the project is about.
TAT can be used in a variety of ways, from eliciting qualities associated with different products to
perceptions about the kind of people that might use certain products or services.
The person must look at the picture(s) and tell a story. For example:
• What has led up to the event shown
• What is happening at the moment
• What the characters are thinking and feeling, and
• What the outcome of the story was
Evaluation of Indirect Methods
The major criticism of indirect methods is their lack of objectivity. Such methods are unscientific and do not
objectively measure attitudes in the same way as a Likert scale.
There is also the ethical problem of deception as often the person does not know that their attitude is actually
being studied when using indirect methods.
The advantages of such indirect techniques of attitude measurement are that they are less likely to produce
socially desirable responses, the person is unlikely to guess what is being measured and behavior should be
natural and reliable.

Q No: 5 a) Why selection type test item and supply test item are used in a test side by side?

b) Develop an achievement test by including selection type test item and supply type test item, or use
one of them. Then highlight why you include both type of test item or why you include only one type
of test item.

(Part A)
Why selection type test item and supply test item are used in a test side by side?

Supply-type items require students to produce the answer in anywhere from a single word to a several-page response. They are typically broken into three categories: short answer, restricted response essay, and extended response essay. Short answer requires the examinee to supply the appropriate words, numbers, or symbols to answer a question or complete a statement. Restricted response questions place strict limits on the answer to be given; they are narrowly defined by the problem, and the specific form of the answer is commonly indicated. Extended response questions give the student almost unlimited freedom to determine the form and scope of their response, though practical limits may be imposed, such as time or page limits and restrictions on the material to be included.
These questions were my favorite to write because of the many creative ways you can apply them. Essay questions are great because they show exactly what the student does and doesn't know and whether they can apply that knowledge, but they also allow students to draw upon creativity in their answers. Adding one or two supply-type items to a test can really demonstrate whether a student truly comprehends the information and is a good indicator of student learning. When used correctly, these questions can reach all three levels of student learning. These questions are also a great way to promote reading and writing skills in all content areas.
True-False Test
True-false questions are typically used to measure the ability to identify whether statements of fact are correct. The questions are usually declarative statements that the student must judge as true or false.
Uses of True-False Items: measuring the ability to identify the correctness of the following:
1. Statements of fact
2. Definitions of terms
3. Statements of principles
Example:
Directions: Read each of the following statements. If the statement is true, underline the word True. If the statement is false, underline the word False.
True False 1. The green coloring material of the plant leaf is called chlorophyll.
True False 2. Petal and sepal are two synonymous words.
True False 3. Photosynthesis is the process by which leaves make a plant's food.
True-False items are also used to measure the ability to distinguish fact from opinion.
Example:
Directions: Read the following statements. If the statement is true, underline the word True. If the statement is false, underline the word False. If it is opinion, then underline the word Opinion.
True False Opinion 1. Mars is a planet.
True False Opinion 2. Mars has its own sun.
True False Opinion 3. There are intelligent life forms on planets orbiting some distant stars.
True-False items can also measure understanding when the opinion statements attributed to an individual or group are new to the students; this involves interpreting and weighing new knowledge about that individual or group and applying it in new situations. They can likewise measure the student's ability to recognize cause-and-effect relationships.
Example:
Directions: In each of the following statements, both parts of the statement are true. You are to decide whether the second part explains why the first part is true. If it does, circle Y. If it does not, circle N.
Y N 1. Leaves are essential because they shade the tree trunk.
Y N 2. Whales are mammals because they are large.
Y N 3. Some plants do not need sunlight because they get their food from other plants.
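As a small illustration of why such selection-type items can be scored objectively, the sketch below scores the factual true-false items above against an answer key; the one-point-per-item scheme is an assumption:

```python
# Answer key for the three factual true-false items above; the
# one-point-per-item scoring scheme is an assumption.
ANSWER_KEY = {
    1: "True",    # chlorophyll colors the leaf
    2: "False",   # petal and sepal are not synonyms
    3: "True",    # photosynthesis makes the plant's food
}

def score_true_false(responses: dict) -> int:
    """Count responses matching the key; no partial credit."""
    return sum(1 for item, key in ANSWER_KEY.items()
               if responses.get(item) == key)

print(score_true_false({1: "True", 2: "True", 3: "True"}))   # -> 2
```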
What makes for a good test? Good tests have certain qualities that make them good. All good tests are
purposeful, valid, reliable, objective, comprehensive, differentiating, expected, instructive and useful. You
will want to keep these qualities in mind any time you create or revise a test. Qualities of a good test are
distinct from desirable qualities for individual test items, which will be addressed later.
A good test serves a purpose, either formative, to help learners improve their performance or knowledge, or
summative, to certify that learners have indeed learned what they should have learned. Formative evaluation
usually comes during the course of instruction, such as quizzes taken at the end of a rotation that are
intended to show learners where they need to improve. Summative evaluation usually comes at the end of
the course of instruction. Board certification – the knowledge and performance tests doctors take for
certification – is a form of summative evaluation. So any performance evaluation is intended to certify
competency in a set of skills such as EKG reading or CPR.
A good test is valid because the test conditions, behavior and standards are the same as in the objective.
Example: If the objective of the course is for your residents to perform a flexible sigmoid copy with little
or no discomfort to the patient, given a normal adult presenting for the exam, which is the valid test? a. You
give a multiple choice test of knowledge of the flex-sig procedure. b. You give the resident a simulated
patient, the flex-sig apparatus, and test instructions to perform the flex-sig procedure with little or no
discomfort to the patient. The answer is "b." The test conditions, behavior and standards more closely match
those of the objective.
A good test is objective, meaning that two trained evaluators would obtain the same result with the test.
Example: You and a colleague are given the task to conduct a performance evaluation of interviewing skills
at the end of the first year of medical school. After the evaluation you discover that all your evaluation
results are significantly lower than those of your colleague. Assuming that the students you evaluated were
pretty much the same as your colleague's, you probably have to assume that either you were too harsh an
observer or your colleague was too lenient an observer. The solution would be for you and your colleague to
standardize your observations by discussing criteria for acceptable performance, observing a limited set of
student performances (these could be taped or live), recording and discussing your observations.
The sampling of an objective examination is more representative, and so measurement is more extensive. Scoring is not subjective because the responses are single words, phrases, numbers, letters and other symbols with definite value points; hence, the personal element of the scorer is removed. The multiple-choice test is used to measure knowledge outcomes and other types of learning outcomes such as comprehension and application, and it is the most commonly used format for measuring student achievement at different levels of learning. Performance assessment, by contrast, is needed when skills are not adequately assessed by paper-and-pencil tests alone; it directly assesses students' skills through actual performance. A multiple-choice item consists of a stem, which presents a problem situation, and several alternatives
(options or choices), which provide possible solutions to the problem. The stem may be a question or an
incomplete statement, and the alternatives should include several plausible wrong answers called distracters.
True/False items are typically used to measure the ability to identify whether statements of fact are correct. The format is typically a declarative statement, but it can include variations in which the student must respond with "yes" or "no", "agree" or "disagree", or "fact" or "opinion". Matching items are a variation of the multiple-choice form that eliminates the repetition of alternative answers and presents the items in a more compact form. They present a series of stems, called premises, and a series of alternative answers, called responses, typically arranged in columns with directions and a rule set for matching.
True-False and matching items are by far the easiest to construct. They serve as a great way to quickly assess level 1 knowledge of student learning. Creating upper-level questions with these types of assessments is much harder, although not entirely impossible.

(Part B)

Develop an achievement test by including selection type test item and supply type test item, or use one
of them. Then highlight why you include both type of test item or why you include only one type of
test item.

Test scores must be trustworthy if they are to be used for scientific purposes. To a psychologist this means
that they must be both reliable and valid. Test scores are reliable when they are dependable, reproducible
and consistent. Confusing or tricky


Confusing or tricky tests may mean different things to the person being tested at different times. Tests may
be too short to be reliable, or scoring may be too subjective. If a test is inconsistent in its results when
measurements are repeated or when it is scored by two people, it is unreliable. A simple analogy is a rubber
yardstick: if we did not know how much it stretched each time we took a measurement, the results would be
unreliable, no matter how carefully we had marked the measurement. We need reliable tests if we are to use
the results with confidence.
In order to evaluate reliability, we must secure two independent scores for the same individual on the same
test: by treating halves of the test separately, by repeating the test, or by giving it in two different but
equivalent forms. If we have such a set of paired scores from a group of individuals, we can determine the
test's reliability. If the same relative score levels are preserved on the two measurements, the test is
reliable. Some difference is to be expected, owing to errors of measurement, so an index of the degree of
relationship between the two sets of scores is needed. This relationship is provided by the coefficient of
correlation, already familiar to us as a measure of the degree of correspondence between two sets of scores;
computed between two sets of scores on the same test, it is called a reliability coefficient. Well-
constructed psychological tests of ability usually have reliability coefficients of r = 0.90 or above.
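As a rough illustration of the first of these methods, the Python sketch below correlates scores on the odd-numbered items with scores on the even-numbered items and then steps the result up to full test length with the Spearman-Brown formula. The item data are simulated, not drawn from any real test:

import numpy as np

# Split-half reliability on simulated data: rows are students, columns
# are ten dichotomously scored items driven by a common ability.
rng = np.random.default_rng(0)
ability = rng.normal(size=50)
items = (ability[:, None] + rng.normal(size=(50, 10)) > 0).astype(float)

# Correlate scores on the odd items with scores on the even items...
odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]

# ...then step the correlation up to full test length with the
# Spearman-Brown formula, since each half is only half as long.
reliability = 2 * r_half / (1 + r_half)
print(f"half-test r = {r_half:.2f}, estimated reliability = {reliability:.2f}")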
Tests are valid when they measure what they are intended to measure. A college examination in economics
full of trick questions might be a test of student intelligence rather than of the economics that was to have
been learned in the course. Such an examination might be reliable, but it would not be a valid test of
achievement for the course.
A test of sense of humor, for example, might be made up of jokes that were hard to catch unless one was both
very bright and very well read. Hence it might turn out to be a reliable test of something (intelligence?
educational achievement?) but still not be valid as a test of sense of humor. To measure validity, we must
also have two scores for each person: the test score and some measure of what the test is supposed to be
measuring. This measure is called a criterion. Suppose that a test is designed to predict success in learning to
receive telegraphic code. To determine whether the test is valid, it is given to a group of individuals before
they start their study of telegraphy.
After they have been trained to receive coded messages, the students are tested on the number of words per
minute they can receive. This later measure furnishes an additional set of scores, which serves as the
criterion. Now we can obtain a coefficient of correlation between the early test scores and the scores on the
criterion. This correlation coefficient is known as a validity coefficient, and it tells something about how
valuable a given test is for a given purpose. The higher the validity coefficient, the better the prediction
that can be made from an aptitude test. A high validity coefficient is desirable if test scores are to be
used to help an individual with an important decision such as vocational choice. But even a relatively low
validity coefficient may prove useful when large numbers of people are tested.
For example, a battery of tests used for the selection of air-crew specialists in the Second World War proved
effective in predicting job success, even though some of the validity coefficients for the single tests were
of very moderate size. Although no single test showed validity above 0.49, the "composite" score derived from
the battery of tests correlated 0.64 with the criterion.
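A minimal Python sketch of how such validity coefficients are computed, with simulated scores standing in for the aptitude tests and the later criterion (the 0.49 and 0.64 figures above come from the wartime study, not from this toy data):

import numpy as np

# Simulated battery: three aptitude tests taken before training, plus a
# later criterion (e.g., words per minute of code received after training).
rng = np.random.default_rng(1)
n = 200
skill = rng.normal(size=n)                        # unobserved true aptitude
tests = skill[:, None] + rng.normal(scale=1.5, size=(n, 3))
criterion = skill + rng.normal(scale=1.0, size=n)

# Validity coefficient of each single test: its correlation with the criterion.
for i in range(tests.shape[1]):
    r = np.corrcoef(tests[:, i], criterion)[0, 1]
    print(f"test {i + 1}: validity r = {r:.2f}")

# A composite (here, the mean of standardized test scores) typically
# predicts better than any single test, as with the air-crew battery.
z = (tests - tests.mean(axis=0)) / tests.std(axis=0)
composite = z.mean(axis=1)
print(f"composite: validity r = {np.corrcoef(composite, criterion)[0, 1]:.2f}")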

Test Scores as a Basis for Prediction


With high reliability and validity coefficients we know the test is satisfactory, but the problem of using
the test in prediction still remains. The method of prediction most easily understood is the one based on
critical scores. By this method, a critical point on the scale of scores is selected, and only those
candidates with scores above the critical point are accepted, whether for pilot training, for admission to
medical school, or for whatever purpose the testing may serve.
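In code, the critical-score method is no more than a cutoff. The names, scores, and cutoff in this Python sketch are hypothetical:

# Critical-score selection: only candidates at or above the cutoff are
# accepted. All values are hypothetical.
CRITICAL_SCORE = 70

candidates = {"A": 82, "B": 64, "C": 75, "D": 70, "E": 58}
accepted = [name for name, score in candidates.items() if score >= CRITICAL_SCORE]
print(f"Accepted: {accepted}")  # Accepted: ['A', 'C', 'D']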

Questionnaire Conceptualization
After developing a thorough understanding of the research, the next step is to generate
statements/questions for the questionnaire. In this step, content (from the literature/theoretical framework)
is transformed into statements/questions. In addition, a link is established between the objectives of the
study and their translation into content.

For example, the researcher must indicate what the questionnaire is
measuring: knowledge, attitudes, perceptions, opinions, recalling of facts, behavior change, and so on. Major
variables (independent, dependent, and moderator variables) are identified and defined in this step.

Format and Data Analysis


In Step 3, the focus is on writing statements/questions, selecting appropriate scales of measurement, and
deciding on questionnaire layout, format, question ordering, font size, front and back cover, and the
proposed data analysis. Scales are devices used to quantify a subject's response on a particular variable.
Understanding the relationship between the level of measurement and the appropriateness of data analysis is
important. For example, if ANOVA (analysis of variance) is one mode of data analysis, the independent
variable must be measured on a nominal scale with two or more levels (e.g., yes, no, not sure), and the
dependent variable must be measured on an interval/ratio scale (e.g., strongly agree to strongly disagree).
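For instance, assuming hypothetical Likert-type ratings (1-5) grouped by a three-level nominal variable, a one-way ANOVA can be run with SciPy:

from scipy import stats

# One-way ANOVA: a nominal independent variable with three levels and an
# interval-scaled dependent variable. The ratings below are hypothetical.
yes_group = [5, 4, 5, 4, 5, 3]
no_group = [2, 3, 2, 1, 3, 2]
not_sure_group = [3, 4, 3, 3, 2, 4]

f_stat, p_value = stats.f_oneway(yes_group, no_group, not_sure_group)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

A small p-value indicates that mean ratings differ across the levels of the nominal variable.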

Establishing Validity
As a result of Steps 1-3, a draft questionnaire is ready for establishing validity. Validity is the amount of
systematic or built-in error in measurement (Norland, 1990). Validity is established using a panel of experts
and a field test. Which type of validity to use (content, construct, criterion, or face) depends on the
objectives of the study. The following questions are addressed in Step 4: Is the questionnaire valid? In
other words, is the questionnaire measuring what it is intended to measure?
Does it represent the content?
Is it appropriate for the sample/population?
Is the questionnaire comprehensive enough to collect all the information needed to address the purpose and
goals of the study?
Does the instrument look like a questionnaire?
Addressing these questions, coupled with carrying out a readability test, enhances questionnaire validity.
The Gunning Fog Index, Flesch Reading Ease, and Flesch-Kincaid formulas are commonly used to determine
readability. Approval from the Institutional Review Board (IRB) must also be obtained.
Following IRB approval, the next step is to conduct a field test using subjects not included in the sample.
Make changes, as appropriate, based on both the field test and expert opinion. The questionnaire is then
ready for pilot testing.
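To make the readability check concrete, here is a minimal Python sketch of the Flesch Reading Ease formula. The syllable counter is a deliberately crude vowel-group heuristic, an assumption for illustration rather than a production-grade counter:

import re

def count_syllables(word):
    # Crude heuristic: count runs of vowels as syllables, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # Flesch Reading Ease: higher scores mean easier text (roughly 60-70
    # is plain English; below 30 is very difficult).
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

item = "The questionnaire should be easy for every respondent to read."
print(f"Reading ease: {flesch_reading_ease(item):.1f}")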

Establishing Reliability
In this final step, the reliability of the questionnaire is assessed using a pilot test. Reliability refers
to random error in measurement; it indicates the accuracy or precision of the measuring instrument. The pilot
test seeks to answer the question: does the questionnaire consistently measure whatever it measures?
Which type of reliability to use (test-retest, split-half, alternate form, internal consistency) depends on
the nature of the data (nominal, ordinal, interval/ratio). For example, to assess the reliability of
questions measured on an interval/ratio scale, internal consistency is appropriate; to assess the reliability
of knowledge questions, test-retest or split-half is appropriate.
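Internal consistency is commonly summarized with Cronbach's alpha. The Python sketch below implements the standard formula; the pilot-test responses are hypothetical:

import numpy as np

def cronbach_alpha(item_scores):
    """Internal-consistency reliability; rows are respondents, columns items."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical pilot-test data: five respondents answering four
# Likert-type items that target the same underlying attitude.
pilot = [[4, 5, 4, 4],
         [2, 2, 3, 2],
         [5, 4, 5, 5],
         [3, 3, 2, 3],
         [4, 4, 4, 5]]
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")

An alpha of about 0.70 or higher is conventionally taken to indicate acceptable internal consistency.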
Systematic development of the questionnaire is important for reducing the measurement errors that arise from
questionnaire content, questionnaire design and format, and the respondent. Careful conceptualization of the
content and its transformation into questions (Step 2) is essential to minimizing measurement error. Careful
attention to detail and an understanding of the process involved in developing a questionnaire are of immense
value to Extension educators, graduate students, and faculty alike. Not following appropriate and systematic
procedures in questionnaire development, testing, and evaluation may undermine the quality and utilization of
data (Esposito, 2002). Anyone involved in educational and evaluation research must, at a minimum, follow
these five steps to develop a valid and reliable questionnaire and so enhance the quality of research.

