

CHAPTER I

INTRODUCTION

A. Background

The assumption is made in this chapter that the objective of teaching spoken language is the development of the ability to interact successfully in that language, and that this involves comprehension as well as production. It is also assumed that at the earliest stages of learning, formal testing of this ability will not be called for, informal observation providing any diagnostic information that is needed.

The basic problem in testing oral ability is essentially the same as for testing writing. We want to set tasks that form a representative sample of the population of oral tasks that we expect candidates to be able to perform. The tasks should elicit behaviour which truly represents the candidates' ability and which can be scored validly and reliably.1

A person's language skills can be tested with so-called language tests, also referred to as language testing or language assessment. The questions to be tested can be written by teachers themselves or by others. Not all teachers, however, understand what makes a good, well-constructed test item, and many people do not know how to judge the quality of the questions that are, or will be, tested. This often stems from unfamiliarity with what language testing, or a language test, actually is. Language testing therefore needs to be understood, because it provides the basis for assessing language. Language testing is the practice and study of evaluating how effectively an individual can use a particular language. This evaluation measures whether students can use the language they have learned fluently in speaking, listening, writing, and reading. It is also used as a gauge of whether students have absorbed the lessons or material presented by their teachers. In this paper, the author will discuss how to assess speaking.

1
Arthur Hughes, Testing for Language Teachers (New York: Cambridge University Press), 1989, p. 101.

B. Problem Formulation
1. What is assessing speaking?
2. What are the types of assessment?
3. What are the major problems in measuring speaking ability?
4. What are the types of oral production tests?
5. What are the criterial levels of performance?
6. What are the principles of teaching speaking?
7. What are the approaches to assessing speaking?
8. How should assessment instruments be selected or designed?

C. The Purposes of the Paper

1. To know about assessing speaking.
2. To know about the types of assessment.
3. To know about the major problems in measuring speaking ability.
4. To know about the types of oral production tests.
5. To know about the criterial levels of performance.
6. To know about the principles of teaching speaking.
7. To know about the approaches to assessing speaking.
8. To know how assessment instruments should be selected or designed.

CHAPTER II

DISCUSSION

A. The Definition of Assessing Speaking

Assessment is an ongoing pedagogical process that includes a number of evaluative acts on the part of the teacher.2 Assessment of achievement concerns what a student has learned in relation to a particular course content or course objective. Formative assessment is carried out by teachers during the learning process with the aim of using the results to improve instruction. Summative assessment is done at the end of a course to provide information on the program to educational authorities.

Assessment is the process of gathering information to monitor progress and make educational decisions if necessary. As noted in my definition of test, an assessment may include a test, but it also includes methods such as observations, interviews, behaviour monitoring, etc.

Assessment has a different meaning from evaluation. The Task Group on Assessment and Testing (TGAT) described assessment as all the methods used to assess the performance of an individual or group (Griffin and Nix, 1991: 3). Popham (1995: 3) defines assessment in the context of education as a formal attempt to determine the status of the student with regard to the interests of education. Boyer and Ewell define assessment as a process that provides information about individual students, about a curriculum or program, about an institution, or about everything related to the institutional system: "processes that provide information about individual students, about curricula or programs, about institutions, or about entire systems of institutions" (Stark & Thomas, 1994: 46). Based on the various descriptions above, it can be concluded that

2
Douglas Brown, Teaching by Principles, 3rd ed. (San Francisco State University), 2007, p. 317.

assessment can be defined as the activity of interpreting the data presented.3

According to Harmer (2007: 284), speaking is the ability to speak fluently, and it presupposes not only knowledge of language features but also the ability to process information and language on the spot. Nunan (in Kayi, 2006: 1) defines speaking as the use of language quickly and confidently with few unnatural pauses, which is called fluency. Speaking is the process of building and sharing meaning through the use of verbal and nonverbal symbols, in a variety of contexts (Chaney, 1998: 13). Therefore, the author concludes that speaking is the ability to produce the language and share ideas.4

B. Types of Assessment

The term assessment is generally used to refer to all activities teachers use to help
students learn and to gauge student progress. Though notion of the assessment is generally
more complicated than the following categories suggest, assessment is often divided for
the sake of convenience using the following distinctions:

1. Formative and Summative

Assessment is often divided into formative and summative categories for the purpose of considering different objectives for assessment practice.
a. Summative assessment is generally carried out at the end of a course or project. In an educational setting, summative assessments are typically used to assign students a course grade. Summative assessments are evaluative.
b. Formative assessment is generally carried out throughout a course or project. Formative assessment, also referred to as "educative

3
Ulfah Rustan, Language Testing, https://www.academia.edu/29176164/MAKALAH_LANGUAGE_TESTING
4
Ani Dwi Wahyuni, Theoretical Review: Speaking, 2016, p. 6.

assessment," is used to aid learning. In an educational setting, the provider of formative assessment might be a teacher, a peer, or the learner, giving feedback on a student's work, and it would not necessarily be used for grading purposes. Formative assessment can take the form of a diagnostic or standardized test.
2. Objective and Subjective
a. Objective assessment is a form of questioning which has a single correct answer. Subjective assessment is a form of questioning which may have more than one correct answer (or more than one way of expressing the correct answer). There are various types of objective and subjective questions. Objective question types include true/false, multiple-choice, and multiple-response questions; subjective question types include extended-response questions and essays.
b. Objective assessment is well suited to the increasingly popular computerized or online assessment format. Some have argued that the distinction between objective and subjective assessment is neither useful nor accurate because, in reality, there is no such thing as "objective" assessment. In fact, all assessments are created with inherent biases built into decisions about relevant subject matter and content, as well as cultural (class, ethnic, and gender) biases.
3. Informal and Formal
Assessment can be either formal or informal. Formal assessment usually implies a written document, such as a test, quiz, or paper.
a. A formal assessment is given a numerical score or grade based on the student's performance, whereas an informal assessment does not contribute to a student's final grade.
b. An informal assessment usually occurs in a more casual manner and may include observation, inventories, checklists, rating scales, rubrics, performance and portfolio assessments, participation, peer and self-evaluation, and discussion.5
C. The Major Problem in Measuring Speaking Ability

In earlier chapters we observed how three of the speech components (grammatical structure, vocabulary, and auditory comprehension) are now being tested by reliable and relatively simple objective techniques. It is highly probable that performance on these tests is positively related to general ability to converse in a foreign language, although, as will be explained directly, we still lack very reliable criteria for testing out this assumption.6 General fluency, too, is fairly easy to assess, at least in gross terms: it usually takes only a few minutes of listening to determine whether a foreign speaker is able to approximate the speed and ease with which native speakers of the language typically produce their utterances. It is only when we come to the crucial matter of pronunciation that we are confronted with a really serious problem of evaluation. The central reason is the lack of general agreement on what "good" pronunciation of a second language really means: is comprehensibility to be the sole basis of judgement, and do all listeners find a foreign speaker equally comprehensible, or do some listeners decode a foreign accent with greater facility than others? Until we can agree on precisely how speech is to be judged and have determined that the judgments will have stability, we cannot put much confidence in oral ratings.

All that we can offer in this chapter, then, is a brief summary of the present state of a very imperfect art. Let us hope that future research may yet transform it into a reasonably exact science.

D. Types of Oral Production Tests


Most tests of oral production fall into one of the following categories:
a. Relatively unstructured interviews, rated on a carefully constructed scale.

5
Ulfah Rustan, Language Testing, https://www.academia.edu/29176164/MAKALAH_LANGUAGE_TESTING
6
David P. Harris, Testing English as a Second Language (United States of America: McGraw-Hill Book Company), 1969, p. 82.

b. Highly structured speech samples (generally recorded), rated according to very specific criteria.
c. Paper-and-pencil objective tests of pronunciation, presumably providing indirect evidence of ability.
Of the three, the rated interview is undoubtedly the most commonly used technique, and the one with the longest history. Paper-and-pencil tests of pronunciation have been used off and on for some years, generally in combination with other types of assessment. Highly structured speech samples, as the term will be used here, appear to be relatively recent and have not as yet won much acceptance in American testing of English as a second language.
Scored Interview
The simplest and most frequently employed method of measuring oral proficiency is to have one or more trained raters interview each candidate separately and record their evaluations of his competence in the spoken language. Figure 2 illustrates a typical scale used with the interviews. It will be seen to consist of (1) a set of qualities to be rated and (2) a series of possible ratings. The ratings have numerical values (in this case, a range of 1 to 5 points), each followed by a short behavioral statement. Sometimes, as a further refinement, the rated qualities are weighted to reflect the relative importance which the test maker attaches to each of the several speech components. Thus, for example, grammar might be given two or three times the weight given to fluency.
As with other types of highly subjective measures, the great weakness of oral ratings is their tendency to have rather low reliability. No two interviews are conducted exactly alike, even by the same interviewer, and even though it may be argued that some variation is desirable, or even essential, it is clear that test reliability will be adversely affected. Similarly, no interviewer can maintain exactly the same scoring standards throughout, and this variation will lower the rater reliability of the measure. And the above differences will be even greater when it is not possible for the same interviewers to rate all the candidates. Nevertheless, positive steps can be taken to achieve a tolerable degree of reliability for the scored interview. Chief among these are (1) providing clear, precise, and mutually exclusive behavioral statements for each scale point, (2) training the raters for their tasks, and (3) pooling the judgments of at least two raters per interview. These and other procedures for improving interview testing will be dealt with in some detail at the end of the chapter.
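As a rough sketch of the weighted-scale arithmetic and rater pooling described above, the example below combines 1-to-5 ratings into a single score. The quality names, weights, and sample ratings are illustrative assumptions, not values taken from the scale in Figure 2.

```python
# Sketch: weighted oral-interview rating with two pooled raters.
# Quality names, weights, and ratings are illustrative assumptions only.

WEIGHTS = {"grammar": 3, "vocabulary": 2, "fluency": 1, "pronunciation": 2}

def weighted_score(ratings):
    """Combine 1-5 ratings per quality into a weighted average."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[q] * r for q, r in ratings.items()) / total_weight

def pooled_score(rater_a, rater_b):
    """Pool the judgments of two raters by averaging their weighted scores."""
    return (weighted_score(rater_a) + weighted_score(rater_b)) / 2

rater_a = {"grammar": 4, "vocabulary": 3, "fluency": 5, "pronunciation": 3}
rater_b = {"grammar": 3, "vocabulary": 4, "fluency": 4, "pronunciation": 3}
print(pooled_score(rater_a, rater_b))  # one candidate's pooled score
```

Here grammar carries three times the weight of fluency, mirroring the kind of weighting the passage mentions.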

Highly Structured Speech Samples

As indicated in our brief discussion of the test interview, interviewers tend, consciously or unconsciously, to set unequal tasks for the candidates and to score the performances on different bases because of the relatively limited nature of the guidelines provided. In an effort to minimize these weaknesses, some tests set highly structured speaking tasks. As a rule these tests are in several parts, each designed to elicit a somewhat different kind of speech sample. The stimuli may be oral (provided by live voice or on tape) or written, or both. The following item types, drawn from foreign language tests for native speakers of English, illustrate techniques which would be equally appropriate in English tests for foreign students.
1. Sentence Repetition.
The examinee hears, and then repeats, a series of short sentences. Scoring procedure: the rater listens to the pronunciation of two specific pronunciation points per sentence, marking whether or not each is pronounced in an acceptable way.

Sentences
1. Jack always likes good food.
2. We'll be gone for six weeks.
3. They've gone farther south.

Points to be rated
1. Vowel contrast in good : food
2. Vowel contrast in six : weeks
3. Voiced-voiceless fricatives in farther : south
2. Reading Passage.
The examinee is given several minutes to read a passage silently, after which he is instructed to read it aloud at normal speed and with appropriate expression. Scoring procedure: the rater marks two or more pronunciation points per sentence and then makes a general evaluation of the fluency of the reading.

Examiner's copy of the test, with the point to be rated for each line:

While Mr. Brown read his newspaper, (primary stress)
his wife finished packing his clothes for the trip. (voiced final consonant(s))
The suitcase was already quite full, and she was (vowel quality)
having a great deal of difficulty finding (primary stress)
room for the shirts, socks, and handkerchiefs. (series intonation)
Turning to her husband, she asked, "Are you sure you (consonant cluster)
really want to go on this trip?" (intonation contour)
"I'm sure," replied Mr. Brown, (intonation contour)
"but how about you?" (stress and pitch)

3. Sentence Conversion.
The examinee is instructed to convert or transform sentences in specific ways (from positive to negative, from statement to question, from present tense to past, etc.). The voice on the tape gives the sentences one at a time, the examinee supplying the conversion in the pause that follows.

Scoring procedure: the rater scores each converted sentence on the basis of whether or not it is grammatically acceptable.

4. Sentence Construction
The voice on the tape asks the examinee to compose sentences appropriate to specific situations.

Scoring procedure: the rater scores each sentence on an acceptable-unacceptable basis.

5. Response to Pictorial Stimuli
The examinee is given time to study each of a series of pictures and then briefly describes what is going on in each scene.

Scoring procedure: for each picture the rater gives a separate rating of the examinee's pronunciation, grammar, vocabulary, and fluency, using a 4- or 5-point scale.

Oral production tests comprising the above, or similar, types of highly structured speech tasks offer considerable promise as replacements for the unstructured interview, for they greatly increase both test and scorer consistency. However, it must not be forgotten that the scoring still requires human judgments, and satisfactory reliability can be achieved only if the raters are carefully selected and put through rigorous training sessions. Moreover, such tests demand a great deal of time and care in preparation and usually require mechanical devices for proper administration. In short: structured speech-sample tests provide no shortcuts to effective oral testing.
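Once the rater's acceptable/unacceptable marks for each pronunciation point have been recorded, the tally itself is mechanical. The sketch below (with invented judgment data) converts per-point marks into a percentage score; it illustrates the bookkeeping only and is not part of the tests described above.

```python
# Sketch: tallying a rater's acceptable/unacceptable marks from a
# structured speech test. The judgment data below are invented.

def point_score(judgments):
    """judgments: list of booleans, True = point pronounced acceptably.
    Returns the percentage of points judged acceptable."""
    if not judgments:
        return 0.0
    return 100.0 * sum(judgments) / len(judgments)

# Two pronunciation points per sentence, as in the repetition task above:
# three sentences give six marks in all.
marks = [True, False, True, True, False, True]
print(point_score(marks))
```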

Paper-and-Pencil Tests of Pronunciation

For some years test writers have experimented with objective paper-and-pencil tests of pronunciation in which the subjects have merely to check responses indicating how they pronounce English vowels and consonants and how they stress words and phrases. Such tests assume (1) that what the foreign learner "hears" himself say silently is, in fact, what he says aloud, and (2) that a sufficiently broad range of pronunciation problems can be tested by this indirect method to allow us to generalize about a subject's overall control of the English sound system. Before discussing these assumptions, let us illustrate characteristic item types appearing in these paper-and-pencil pronunciation tests.
1. Rhyme Words

The examinee is first presented with a test word which he is instructed to read to himself, after which he is to select the one word from among several alternatives which rhymes with the test word.

a. Could rhymes with: blood, food, wood.
b. Plays rhymes with: case, raise, press.
2. Word Stress
The examinee is to decide which syllable in each test word receives the heaviest stress (the three syllables are numbered 1-3 in the test copy):
1. Frequently
2. Introduce
3. Develop

3. Phrase Stress
The examinee is to decide which one of several numbered syllables in each utterance would receive the heaviest stress (the candidate syllables are numbered in the test copy):
1. I know that Henry went to the movie, but where did John go? (syllables numbered 1-5)
2. I'm certain Professor Brown wants to see you, but he's in class just now. (syllables numbered 1-4)

The validity of paper-and-pencil objective techniques remains largely unproven; such techniques should therefore be used with caution, and certainly never as the sole measure of oral proficiency. The techniques of eliciting and rating highly structured speech samples show much promise, but such testing is still in the experimental stage and requires very great test-writing skill and experience. The scored interview, though not so reliable a measure as we would wish for, is still probably the best technique for use in relatively informal, small-scale testing situations, and ways can be shown for substantially improving the effectiveness of this testing device.

E. Criterial Levels of Performance

The fact that particular grammatical structures are not specified as content, and that there is no reference to vocabulary or pronunciation, does not of course mean that there are no requirements with respect to these elements of oral performance. These would be dealt with separately as part of a statement of criterial levels. Thus, for the RSA test at the international level:
1. Accuracy

Pronunciation is still obviously influenced by the L1, though clearly intelligible. Grammatical/lexical accuracy is generally high, though some errors which do not destroy communication are acceptable.

2. Appropriacy

Use of language generally appropriate to function. The overall intention of the


speaker is always clear.

3. Range

A fair range of language is available to the candidate. He is able to express


himself without overtly having to search for words.

4. Flexibility

Is able to take the initiative in a conversation and to adapt to new topics or changes of direction, though neither of these may be consistently manifested.

5. Size
Most contributions may be short, but some evidence of ability to produce
more complex utterances and to develop these into discourse should be
manifested.7

F. The Principles of Teaching Speaking

Several key principles should be applied to teaching a speaking class. The first principle is that, to make sure the teaching takes place in the intended way, it is critical to create a high level of motivation. That is the key consideration in determining the preparedness of learners to communicate. Motivation is the combination of effort plus desire to achieve the goal of learning plus favorable attitudes toward learning the language. So effort alone does not signify motivation; it is the desire and the satisfaction in the activity that count (Nunan, 1999: 233). In order to make students feel satisfied and have the desire to get involved in the lesson, teachers should do the following things.
First, teachers use instinct or experience, depending on the teacher's qualifications, to choose interesting topics in order to draw students' attention and inspire them. Productive skills cannot be developed beyond meaningful contexts. In addition, unreal contexts cannot help students get involved in such real-life activities as job and academic settings (Green, 1995).
Second, teachers can create interest in the topic by talking about it and by communicating enthusiasm. Teachers can ask if anyone knows about the topic and can therefore have students tell the others about it before the activities start. In this way, students have chances to express their ideas meaningfully, and teachers can exploit their previous knowledge to get them into the lesson. Also, teachers can ask students to make guesses about the content and to discuss what happens, which inspires students' curiosity and their desire to find out the truth. So they have a reason to attend to the lesson and to talk for themselves. Additionally, teachers can ask several guiding questions before the activity and provide necessary information, without telling students what they already know, to create stronger motivation (Harmer, 2002: 253).

7
Arthur Hughes, Testing for Language Teachers (New York: Cambridge University Press), p. 102.
Third, motivation is also raised in a lesson by the fact that teachers help to create a relaxed, non-anxious atmosphere, which helps even weak and reluctant students. This can be done through activities such as playing guessing games, rehearsing in small groups before speaking in front of many people, or practicing speaking under the guidance of the teachers through drills, repetition, and mechanical exercises first (Harmer, 1999: 234-235). In case students fear making mistakes, teachers can encourage them to take risks and focus on content rather than form. Fourth, teachers should set an appropriate level of difficulty, neither too difficult nor too easy, or students may feel bored. And finally, teachers had better employ meaningful learning, with meaningful activities relevant to real life, to get students to talk about themselves. The second principle is that, when students are motivated enough to get involved in the lesson, teachers should give them the maximum number of opportunities possible to practice the target language in meaningful contexts and situations, which helps to facilitate acquisition for all learners better than grammatical explanation or linguistic analysis (Nunan, 1999: 241). This is because learners must learn to develop the ability to use language to get things done in real life, outside the classroom.8

G. Approaches to Assessing Speaking

Two methods are used for assessing speaking skills. In the observational approach, the student's behavior is observed and assessed unobtrusively. In the structured approach, the student is asked to perform one or more specific oral communication tasks; his or her performance on the task is then evaluated. The task can be administered in a one-on-one setting, with the test administrator and one student, or in a group or class setting. In either setting, students should feel that they are communicating meaningful content to a real audience. Tasks should focus on topics that all students can easily talk about, or, if they do not include such a focus, students should be given an opportunity to collect information on the topic.

8
Nguyen Thi Tuyet Anh, The Key Principles for Development of Speaking, IJSELL, Volume 3, Issue 1, January 2015, p. 50.
Both observational and structured approaches use a variety of rating systems. A holistic rating captures a general impression of the student's performance. A primary trait score assesses the student's ability to achieve a specific communication purpose, for example, to persuade the listener to adopt a certain point of view. Analytic scales capture the student's performance on various aspects of communication, such as delivery, organization, content, and language. Rating systems may describe varying degrees of competence along a scale or may indicate the presence or absence of a characteristic.
A major aspect of any rating system is rater objectivity: is the rater applying the scoring criteria accurately and consistently to all students across time? The reliability of raters should be established during their training and checked during administration or scoring of the assessment. If ratings are made on the spot, two raters will be required for some administrations. If ratings are recorded for later scoring, double scoring will often be needed.
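A minimal way to check the rater consistency discussed above is to compare two raters' scores for the same students. The sketch below computes simple percent agreement, both exact and within one scale point; the scores are invented, and a real testing program would use a formal reliability coefficient rather than this rough check.

```python
# Sketch: a crude inter-rater consistency check on invented scores.

def percent_agreement(scores_a, scores_b, tolerance=0):
    """Share of students whose two ratings differ by at most `tolerance`."""
    assert len(scores_a) == len(scores_b)
    agree = sum(1 for a, b in zip(scores_a, scores_b) if abs(a - b) <= tolerance)
    return 100.0 * agree / len(scores_a)

rater_a = [4, 3, 5, 2, 4]
rater_b = [4, 4, 5, 2, 3]
print(percent_agreement(rater_a, rater_b))               # exact agreement
print(percent_agreement(rater_a, rater_b, tolerance=1))  # within one point
```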

H. How Should Assessment Instruments Be Selected or Designed?

Identifying an appropriate instrument depends upon the purpose for assessment and the availability of existing instruments. If the purpose is to assess a specific set of skills, for instance, diagnosing strengths and weaknesses or assessing mastery of an objective, the test should match those skills. If appropriate tests are not available, it makes sense to design an assessment instrument to reflect specific needs. If the purpose is to assess communication broadly, for instance, evaluating a new program or assessing district goals, the test should measure progress over time and, if possible, describe that progress in terms of external norms, such as national or state norms. In this case, it is useful to seek out a pertinent test that has undergone careful development, validation, and norming, even if it does not exactly match the local program.
Several reviews of oral communication tests are available (Rubin and Mead, 1984). The Speech Communication Association has compiled a set of Resources for Assessment in Communication, which includes standards for effective oral communication programs, criteria for evaluating instruments, procedures for assessing speaking and listening, an annotated bibliography, and a list of consultants.9

CHAPTER III
CLOSING
Conclusion

The basic problem in testing oral ability is essentially the same as for testing writing. We want to set tasks that form a representative sample of the population of oral tasks that we expect candidates to be able to perform. The tasks should elicit behaviour which truly represents the candidates' ability and which can be scored validly and reliably.
Many people do not know how to judge the quality of the questions that are, or will be, tested. This often stems from unfamiliarity with what language testing, or a language test, actually is. Language testing therefore needs to be understood, because it provides the basis for assessing language. Language testing is the practice and study of evaluating how effectively an individual can use a particular language.

9
Mead, Nancy A.; Rubin, Donald L. Assessing Listening and Speaking Skills. ERIC Digest (Washington), 1985.

REFERENCES

1. Hughes, Arthur. Testing for Language Teachers (New York: Cambridge University Press), 1989.
2. Brown, Douglas. Teaching by Principles, 3rd ed. (San Francisco State University), 2007.
3. Ulfah Rustan. Language Testing, https://www.academia.edu/29176164/MAKALAH_LANGUAGE_TESTING
4. Ani Dwi Wahyuni. Theoretical Review: Speaking, 2016.
5. Harris, David P. Testing English as a Second Language (United States of America: McGraw-Hill Book Company), 1969.
6. Nguyen Thi Tuyet Anh. The Key Principles for Development of Speaking. IJSELL, Volume 3, Issue 1, January 2015.
7. Mead, Nancy A.; Rubin, Donald L. Assessing Listening and Speaking Skills. ERIC Digest (Washington), 1985.
