
Human Performance

ISSN: 0895-9285 (Print) 1532-7043 (Online) Journal homepage: http://www.tandfonline.com/loi/hhup20

The Relative Resistance of the Situational,


Patterned Behavior, and Conventional Structured
Interviews to Anchoring Effects

Heloneida C. Kataoka, Gary P. Latham & Glen Whyte

To cite this article: Heloneida C. Kataoka, Gary P. Latham & Glen Whyte (1997) The Relative
Resistance of the Situational, Patterned Behavior, and Conventional Structured Interviews to
Anchoring Effects, Human Performance, 10:1, 47-63

To link to this article: http://dx.doi.org/10.1207/s15327043hup1001_3

Published online: 13 Nov 2009.



HUMAN PERFORMANCE, 10(1), 47-63
Copyright © 1997, Lawrence Erlbaum Associates, Inc.

The Relative Resistance of the


Situational, Patterned Behavior, and
Conventional Structured Interviews to
Anchoring Effects

Heloneida C. Kataoka, Gary P. Latham, and Glen Whyte


Faculty of Management
University of Toronto

Selection interviews are decision-making tools used in organizations to make hiring


and promotion decisions. Individuals who conduct such interviews, however, are
susceptible to deviations from rationality that may bias interview ratings. This study
examined the effect of the anchoring-and-adjustment heuristic on the ratings given
to a job candidate by interviewers (n = 190) using 3 different types of interview
techniques: the conventional structured interview, the patterned behavior description
interview, and the situational interview. The ratings of interviewers who were given
a high anchor were significantly higher than the ratings of interviewers who were
given a low anchor across all three interview techniques. The effect of the anchoring
manipulation, however, was significantly less when the situational interview was
used.

Interviews are widely used in organizations to make selection decisions (Eder,


Kacmar, & Ferris, 1989; Harris, 1989). In unstructured interviews, the questions
asked of candidates are not predetermined, and are frequently not even job related.
The interview often takes the form of a free-flowing conversation. As a result, the
unstructured interview usually has both low reliability and validity (Mayfield,
1964; Ulrich & Trumbo, 1965). Cronshaw and Wiesner (1989) concluded that
unstructured interviews are so flawed as assessment techniques that no further
research effort should be expended on them. Instead, research should focus on
different types of structured interviews. Researchers have, in general, followed this

Requests for reprints should be sent to Glen Whyte, University of Toronto, Faculty of Management,
Rotman Centre for Management, 105 St. George Street, Toronto, Canada M5S 3E6.
advice and used various operationalizations of structure (Campion, Pursell, &


Brown, 1988; Huffcutt & Arthur, 1994).
Regardless of the extent or manner in which interviews are structured, inter-
viewers are faced with the task of attempting to evaluate the quality of a candi-
date's responses to interview questions. Considerable research in behavioral
decision theory indicates that people use heuristics, or simplifying rules of thumb,
when making such judgments regarding the likelihood or value of events under
uncertainty (Tversky & Kahneman, 1974). Reliance on judgmental heuristics is a
robust form of behavior in decision making that has been demonstrated in areas
such as auditing (Johnson, Jamal, & Berryman, 1991; Joyce & Biddle, 1981),
negotiations (Bazerman, 1990; Huber & Neale, 1986), utility analysis (Bobko,
Shetzer, & Russell, 1991), gambling (Lichtenstein & Slovic, 1971), sales predic-
tion (Hogarth, 1980), and predictions of spousal consumer preference (Davis,
Hoch, & Ragsdale, 1986). Heuristics simplify complex judgmental tasks, but they
may also introduce bias or systematic error when they are inappropriately used
(Bazerman, 1990).

ANCHORING AND ADJUSTMENT

Anchoring and adjustment refers to the heuristic that people unconsciously rely on
when estimating the value of something when that value is unknown or uncertain
(Tversky & Kahneman, 1974). This is essentially the task that is required to be
performed by an interviewer who is rating the quality of an interviewee's responses.
People typically begin the estimation process by selecting an anchor, from which
they then adjust to arrive at a final estimate. Such adjustment, however, tends to be
insufficient, with the result that final estimates are biased in the direction of the
initial estimate. Most detrimental to the accuracy of final estimates, however, is the
tendency for people to choose anchors because they are handy rather than because
they are relevant (Bazerman, 1990).
Anchoring is a robust phenomenon that has been observed in many domains and
tasks, including assessing probabilities (Edwards, Lindman, & Phillips, 1965;
Lopes, 1985, 1987; Peterson & DuCharme, 1967; Wright & Anderson, 1989),
making predictions based on historical data (Sniezek, 1988), making utility assess-
ments (Johnson & Schkade, 1988; Shanteau & Phelps, 1979), exercising clinical
judgment (Friedlander & Stockman, 1983; Zuckerman, Koestner, Colella, & Alton,
1984), inferring causal attributions (Quattrone, 1982), estimating confidence ranges
(Block & Harper, 1991), making accounting-related judgments (Butler, 1986), goal
setting (Mano, 1990), making motivation-related judgments (Cervone & Peake,
1986; Switzer & Sniezek, 1991), belief updating and change (Einhorn & Hogarth,
1985; Hogarth & Einhorn, 1989), evaluating product bundles (Yadav, 1994), and
determining listing prices for houses (Northcraft & Neale, 1987).

In most anchoring studies, individuals have been asked to provide a numerical


estimate regarding the frequency of a class or the value of an object when that
frequency or value is unknown or uncertain. For example, Tversky and Kahneman
(1974) asked people to estimate the percentage of African countries in the United
Nations. Arbitrary estimates were initially provided to participants, who were then
asked whether the actual percentage was higher or lower than this number. These
initial estimates were determined in the participants' presence by the spin of a
roulette wheel. The median final estimate of participants who received 10% as the
initial estimate was 25%. In contrast, the median final estimate of participants who
received 65% as the initial estimate was 45%. Thus, even though the initial estimates
were clearly irrelevant to the task at hand, people used them as anchors and failed
to adjust them sufficiently when arriving at final estimates. This effect occurs even
when people are provided with monetary incentives to make accurate estimates
(Kahneman, Slovic, & Tversky, 1982).
We suggest that interviewers will be susceptible to anchoring effects in part
because of evidence from the literature on performance appraisal indicating that
evaluations are often biased in the direction of previous evaluations (Smither,
Reilly, & Burden, 1988). Past evaluations serve as an anchor for current evaluations
that are only partially revised in the face of new evidence (Murphy, Balzer,
Lockhart, & Eisenman, 1985).
Anchoring effects are distinct from other sources of bias, such as contrast effects.
Both Wexley, Sanders, and Yukl (1973) and Latham, Wexley, and Pursell (1975)
found that a decision to hire an applicant based on a selection interview is made by
contrasting the person's qualifications with other applicants. Thus a person who
was shown to be a 5 (marginally acceptable) on a 9-point scale when evaluated
against job requirements was judged to be a 3 or lower when preceded by two highly
qualified applicants. Similarly, the same person was assessed as a 7 or higher when
preceded by two unqualified applicants. Anchoring effects, however, may occur
independent of the presence of other applicants.
This study investigated the resistance of the conventional structured interview
(CSI), the patterned behavior description interview (PBDI), and the situational
interview (SI) to bias due to reliance on the anchoring-and-adjustment heuristic.
When rating the quality of an interviewee's answer to an interview question,
interviewers are engaged in the type of task that can be accomplished through
anchoring and adjustment. Different interviewers, however, may be using different
anchors when judging the quality of a candidate's response. For example, inter-
viewers may employ what they consider to be an excellent answer as an anchor,
and then compare the candidate's answer to it. In contrast, interviewers may employ
what they consider to be a poor answer as the anchor. The rating of a candidate's
response will likely manifest the effect of anchoring and adjustment regardless of
the anchor used. The extent to which an interview technique is structured, however,
may minimize anchoring effects.

STRUCTURED INTERVIEWS

The CSI typically consists of a series of job-related questions that are presented to
each candidate (Maurer & Fay, 1988). The questions focus on job responsibilities,
duties, job knowledge, and achievements in previous jobs, but they are not
necessarily based on a formal job analysis. The validity of structured interviews,
however, is increased when they are based on a formal job analysis (Wiesner & Cronshaw,
1988). Two types of structured interview techniques that rely on a job analysis are
the SI (Latham, 1989) and the PBDI (Janz, 1989).
The SI is derived from goal setting theory and is based on the premise that
intentions predict behavior (Locke & Latham, 1990). Interview questions using this
method are determined from the results of a job analysis using the critical incident
technique (Flanagan, 1954). Job candidates are asked what they would do in
response to a series of job-related critical incidents. Each incident contains a
dilemma that is designed to elicit the candidates' intentions. A distinguishing
feature of the SI is that it provides a behavior-based scoring guide for interviewers
to use when evaluating candidates' responses.
The PBDI, in contrast, is based on the premise that the best predictor of future
behavior is past behavior. As with the SI, PBDI questions are based on the results
of a job analysis using the critical incident technique. Candidates are typically
presented with the criterion dimension of interest to the employer, are asked to
recall a relevant past incident, and to describe the actions that they took to deal with
it. A scoring guide is neither practical nor typically used with a PBDI because of
the wide variability in responses that are obtained in descriptions of each candi-
date's personal history, and it has been eschewed by Janz (1989).
In contrast to the CSI, the PBDI and the SI focus explicitly on behavior. For
example, with the PBDI, candidates are asked to describe a specific situation that
occurred in the past, and to describe the actions that they took in response to it (Janz,
1989). With the SI, a specific situation is presented to candidates, who are then
asked to describe what action they would take if faced with that situation.
Research on structured interviews has focused primarily on issues of reliability
and validity (Janz, 1982; Latham & Saari, 1984; Latham, Saari, Pursell, & Campion,
1980; Latham & Skarlicki, 1995; Orpen, 1985; Weekley & Gier, 1987). Relatively
little attention, however, has been paid to the issue of freedom from bias (Harris,
1989). This is surprising, given that bias attenuates both reliability and validity
(Thorndike, 1949).
Four studies have investigated bias in the SI. Maurer and Fay (1988) investigated
the ability of interview structure and interviewer training to reduce the effects of
errors such as halo, contrast, similar-to-me, and first impressions on rating variabil-
ity. Even though no training effect was found, greater agreement was found among
the ratings obtained from the SI than from those obtained with the CSI. These
findings suggest that the SI is more robust to the effects of bias than the CSI.

Lin, Dobbins, and Farh (1992) investigated bias due to similarities between
interviewers and interviewees in terms of race and age. Stronger same-race effects
were found for the CSI than for the SI. Neither the CSI nor the SI were affected by
age similarity. In a study of police officers, Maurer and Lee (1994) found that the
SI minimized contrast effects on the accurate assessment of information provided
by multiple candidates.
Only one study has investigated the resistance of the PBDI to bias. Latham and
Skarlicki (1996) examined the effectiveness of the CSI, the PBDI, and the SI in
minimizing the similar-to-me bias of francophone managers in Quebec. This bias
was not apparent when either the SI or the PBDI were used, but it did occur when
the CSI was used.

INTERVIEW STRUCTURE

Compared to the SI and the PBDI, the CSI lacks structure (e.g., a scoring guide)
and a sole focus on behavior. As a result, the CSI has only a relatively loose
framework that interviewers can rely on to assess the quality of an interviewee's
responses. In contrast, the structure and behavioral focus of the PBDI and SI may
reduce the likelihood that an interviewer will rely on an inappropriate anchor in
assessing the responses of an interviewee. In relation to the PBDI and the SI, the
CSI therefore may be more susceptible to problems such as those caused by
anchoring effects. Inherent in the application of the SI is the use of a scoring guide.
The scoring guide increases the degree of structure in the SI relative to both the
CSI and the PBDI. Therefore the SI may be less susceptible to an anchoring-
induced bias than are the other two techniques. Biases are partly responsible for
disagreements among decisions made by different interviewers (Maurer & Fay,
1988), because bias affects different interviewers differently. The PBDI, which
typically lacks a scoring guide, has modest interrater reliability coefficients in the
range of 0.49 (Janz, 1989), whereas the SI has shown much higher coefficients,
ranging from 0.76 to 0.96 (Latham, 1989). Thus, the hypotheses tested in this study
were as follows:

H1: Interviewer ratings of identical responses to interview questions will be
significantly more favorable when interviewers are provided with a high
rather than a low anchor.
H2: Interview type will moderate the extent to which interviewers will be
affected by anchoring. More specifically, the PBDI will be more resistant
to anchoring effects than the CSI.
H3: The SI will be more resistant to anchoring effects than the CSI.
H4: The SI will be more resistant to anchoring effects than the PBDI.

METHOD

Design and Sample

To determine the influence of anchoring effects on interviewers using the CSI,


PBDI, and SI, a 3 x 3 (Anchor x Interview type) between-subjects factorial design
was used. Each participant was randomly assigned to either a low, high, or control
anchor condition, and either a CSI, PBDI, or SI condition, making both anchor and
interview type 3-level between-subject factors.
A total of 190 participants (94 women and 96 men) were involved in the study.

All participants were graduate students of business administration enrolled in an


MBA program in a large North American university. The participants had an
average of approximately 6 years of full-time work experience. Their average age
was 29 years. MBA students were used as a means of increasing the external validity
of the results (Gordon, Slade, & Schmitt, 1986).

Procedure

Video-taped simulated interviews, one for each interview type, were used to
maintain uniformity in both candidate's answers and behavior during the interview
(Ilgen, 1986). All participants in each interview condition watched the same
videotape showing a candidate being interviewed for a position as a teller in a bank.
This job was chosen because the job requirements are straightforward (e.g.,
prioritize requests, handle complaints). Therefore, prior familiarity of the interview-
ers with the job was not necessary.
To maintain consistency across interview type, the questions used in each
interview format were written to address the same job dimensions. To check for
consistency in this regard across interview type, organizational behavior doctoral
students (n = 5) were given the scripts for each of the three interviews and asked
whether they agreed that the questions tapped the same underlying dimensions.
Answers were given using a 5-point Likert-type scale ranging from 1 (strongly
disagree) to 5 (strongly agree). There was high agreement that the questions
represented similar dimensions across interview type (M = 4.4, SD = 0.5).
Candidate responses to interview questions were also developed. Average
answers, as opposed to either excellent or poor answers, were written for each of
the interview questions. Because the purpose of this research was to investigate the
effects of anchoring on interviewer ratings of candidate responses, this step would
potentially allow both the high and low anchor manipulations to demonstrate an
impact toward both ends of the rating scales.
The procedure followed to develop candidate responses to interviewer questions
was similar to the one used by Maurer and Fay (1988). The answers were first
developed for the SI, and were written to be comparable to what would be described
in the scoring guide as an average response. Care was taken so that the candidate
responses did not correspond verbatim to the behaviors referred to in the guide,
because this would rarely if ever happen in practice. After developing the answers
to the SI, comparable answers were then generated for the PBDI and the CSI.
Examples of comparable SI, PBDI, and CSI questions and their respective
answers are given here:

SI question: A client has complained to your boss about your recent


presentation. The client said that you were not able to answer basic questions.
Your boss calls you to discuss the problem. What would you say?
Interviewer scoring guide: (1) That surprises me. I'm sorry; it won't
happen again. (3) Yes, I know; I didn't feel it went well either. What do you
suggest I do to improve? (5) Yes, I realized it didn't go well. I'd like to call
the client and follow up. But first, I'd like any suggestions you may have.
Answer: I would admit that the presentation went poorly, and I would try
to talk to others to get some advice that would help me to make a better
presentation the next time.
PBDI question: Tell me about a time when a client complained to your
boss that you were not prepared for a presentation that you made to the client.
What were the circumstances? What did you do? What was the outcome?
Who can I call to verify this information?
Answer: My boss asked me to make a presentation to a client about a
subject that I was not very familiar with. Unfortunately, during the presenta-
tion the client asked me some questions that I was unable to properly answer.
As a result, after the meeting the client complained to my boss about my
performance. My boss then called me into his office and told me about the
client's complaints. I admitted that the presentation went poorly, and I said I
was going to try to talk to others to get some advice that would help me to
make a better presentation the next time.
CSI question: How do you respond when your boss tells you that a client
has complained that you were unprepared for a presentation that you made
to the client?
Answer: I admit that the presentation went poorly, and say that I will try
to talk to others to get some advice that can help me to make better
presentations.

One video tape for each interview type was recorded using the same setting and
the same actor in the role of the job candidate to hold differences across conditions
constant. Only the candidate could be seen on tape. The interviewer could be heard
but not seen. The participants were asked to observe the videotape and evaluate the
candidate's answers. This procedure was modeled after one that is used by the
president of a bank to give final approval to the selection of tellers.

To the greatest extent possible, uniformity in the answers was maintained across
each of the interview formats. Uniformity of answers across interview conditions
was confirmed by a one-way analysis of variance (ANOVA) on the mean ratings
of interviewee responses in the control conditions of each interview technique. No
statistically significant difference was obtained in interview ratings regardless of
the interview technique used, F(2, 61) = 0.48, p < .62.
Each participant received a booklet of experimental materials. The term booklet
denotes each of the nine unique sets of experimental materials used in this study.
Each booklet contained a set of instructions; a questionnaire tailored to either the
CSI, PBDI, or SI formats; rating scales for each interview question ranging from 1
(poor) to 5 (good); the anchor manipulation (low, control, and high); and manipu-
lation check questions.
Anchor was manipulated in the following way. A variation of this technique has,
in a different context, successfully induced anchoring effects (e.g., Joyce & Biddle,
1981).

High-anchor condition. Prior to rating each answer of the candidate to the


interview questions, the participants were asked:
Does the applicant's answer rate a score of 5? (5 = good)
(a) Yes, the applicant's answer rates a score of 5. (b) No, the applicant's answer
rates a score of less than 5.
If you chose (b), please rate the applicant's answer on a scale ranging from 1
(poor) to 5 (good).

Low-anchor condition. Prior to rating each of the candidate's answers to the


interview questions, participants in this condition were asked:
Does the applicant's answer rate a score of 1? (1 = poor)
(a) Yes, the applicant's answer rates a score of 1. (b) No, the applicant's answer
rates a score of greater than 1.
If you chose (b), please rate the applicant's answer on a scale ranging from 1
(poor) to 5 (good).

Control condition. Participants were simply asked to rate the candidate's


answers to each of the interview questions according to a rating scale ranging from
1 (poor) to 5 (good).

RESULTS

Dependent Variable

The overall rating received by the candidate was calculated as the sum of the scores
assigned to each of the 10 interview questions. Thus, the candidate's total score
could range from 10 to 50. Mean scores for each interview type and anchor
condition are shown in Table 1.

Manipulation Checks

After watching the interview and rating the candidate's answers, participants
completed single-item scales designed to investigate their perceptions of the
candidate's behavior during the interview. One-way ANOVAs on responses to the
scales revealed that the candidate's behavior was perceived consistently across
conditions. That is, there was no significant difference in how participants across
interview type viewed the candidate in terms of enthusiasm, F(2, 187) = 0.70, p <
.50; friendliness, F(2, 187) = 2.06, p < .13; confidence, F(2, 187) = 0.07, p < .93;
concern, F(2, 187) = 2.33, p < .11; attention, F(2, 187) = 1.58, p < .21; and
sincerity, F(2, 187) = 1.43, p < .24.

Statistical Analyses

Planned comparisons were used to test the four hypotheses of this study. This
method focuses on smaller designs of interest extracted from the original factorial
design (Keppel & Zedeck, 1989). Rather than focusing on an overall omnibus F
test, this method was chosen because it allows the researcher to focus on meaningful
components of the design and directly test the specific hypotheses of the study. It
also reduces the possibility of committing a Type I error (Keppel, 1991).
To test the first hypothesis, analyses of simple effects of the anchoring manipu-
lation on each interview type were conducted. One-way ANOVAs revealed that

TABLE 1
Means and Standard Deviations for Overall Ratings
of Candidate Responses According to Interview Type

                                    Anchor
                     Low              Control             High
Interview Type   M      SD    n    M      SD    n    M      SD    n
CSI              29.21  5.30  24   32.95  6.95  19   38.10  5.69  21
PBDI             31.62  7.42  21   33.52  6.22  21   39.10  7.06  20
SI               30.62  3.37  21   31.82  4.16  22   33.75  2.70  21

Note. CSI = conventional structured interview; PBDI = patterned behavior description in-
terview; SI = situational interview.
for all three interview types, anchor had a significant effect on interviewer ratings:
CSI, F(2, 61) = 12.50, p < .01; PBDI, F(2, 59) = 6.44, p < .01; SI, F(2, 61) = 4.38,
p < .02. Therefore, the first hypothesis was supported. The effect size for the SI,
however, is medium (ω² ≥ 0.06; Cohen, 1977), whereas the effect sizes for the PBDI
and CSI are large (ω² ≥ 0.15). The ω² values for the CSI, PBDI, and SI were 0.27, 0.15,
and 0.10, respectively. Figure 1 illustrates anchoring effects on interview ratings for the
three interview types investigated.
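These simple-effects F statistics and ω² values can be reconstructed from the cell means, standard deviations, and sample sizes in Table 1 alone. The sketch below (plain Python, written for this article rather than drawn from the original analysis) recomputes them; small discrepancies from the published values reflect rounding of the reported summary statistics.

```python
# Reconstruct one-way ANOVA F and omega^2 for each interview type
# from the cell means, SDs, and ns reported in Table 1.

cells = {  # interview type -> [(mean, sd, n) for low, control, high anchors]
    "CSI":  [(29.21, 5.30, 24), (32.95, 6.95, 19), (38.10, 5.69, 21)],
    "PBDI": [(31.62, 7.42, 21), (33.52, 6.22, 21), (39.10, 7.06, 20)],
    "SI":   [(30.62, 3.37, 21), (31.82, 4.16, 22), (33.75, 2.70, 21)],
}

def one_way_from_summary(groups):
    """F and omega^2 for a one-way design given (mean, sd, n) per group."""
    N = sum(n for _, _, n in groups)
    grand = sum(m * n for m, _, n in groups) / N
    ss_between = sum(n * (m - grand) ** 2 for m, _, n in groups)
    ss_within = sum((n - 1) * sd ** 2 for _, sd, n in groups)
    df_b, df_w = len(groups) - 1, N - len(groups)
    F = (ss_between / df_b) / (ss_within / df_w)
    ms_w = ss_within / df_w
    # Estimated omega^2 = (SS_between - df_b * MS_within) / (SS_total + MS_within)
    omega2 = (ss_between - df_b * ms_w) / (ss_between + ss_within + ms_w)
    return F, omega2

reported = {"CSI": (12.50, 0.27), "PBDI": (6.44, 0.15), "SI": (4.38, 0.10)}
for name, groups in cells.items():
    F, w2 = one_way_from_summary(groups)
    print(f"{name}: F = {F:.2f}, omega^2 = {w2:.3f} (reported: {reported[name]})")
```

Running this reproduces the reported statistics to within rounding error for all three interview types.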
To further investigate the relative susceptibility of each interview type to
anchoring, F tests were conducted on the differences between the variance of ratings
pooled according to interview type. The variance of interviewer ratings was
significantly less for the SI than for either the CSI, F(63, 63) = 3.62, p < .01, or the
PBDI, F(63, 61) = 4.22, p < .01. The difference in the variance of interviewer ratings
between the PBDI and the CSI, however, was not significant, F(61, 63) = 1.16, p
< .10. These results suggest that ratings in the SI condition were less affected by
anchoring than ratings in the other interview conditions. The higher degree of
agreement among raters in the SI condition than in the CSI condition also replicates
the results obtained by Maurer and Fay (1988).
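These variance-ratio F tests can likewise be approximated from Table 1, under the assumption (ours, not stated explicitly in the article) that "pooled according to interview type" means the total variance of ratings across the three anchor conditions, i.e., between-condition plus within-condition spread. Under that assumption, the following sketch reproduces the reported ratios almost exactly.

```python
# Reconstruct the variance-ratio F tests from Table 1: for each interview
# type, compute the total variance of ratings pooled across the three
# anchor conditions, then take the ratio of variances between types.

table1 = {
    "CSI":  [(29.21, 5.30, 24), (32.95, 6.95, 19), (38.10, 5.69, 21)],
    "PBDI": [(31.62, 7.42, 21), (33.52, 6.22, 21), (39.10, 7.06, 20)],
    "SI":   [(30.62, 3.37, 21), (31.82, 4.16, 22), (33.75, 2.70, 21)],
}

def pooled_variance(groups):
    """Variance of all ratings pooled over the anchor conditions."""
    N = sum(n for _, _, n in groups)
    grand = sum(m * n for m, _, n in groups) / N
    ss_total = sum((n - 1) * sd ** 2 + n * (m - grand) ** 2
                   for m, sd, n in groups)
    return ss_total / (N - 1)

var = {k: pooled_variance(v) for k, v in table1.items()}
print(f"F (CSI vs SI)   = {var['CSI'] / var['SI']:.2f}")    # reported: 3.62
print(f"F (PBDI vs SI)  = {var['PBDI'] / var['SI']:.2f}")   # reported: 4.22
print(f"F (PBDI vs CSI) = {var['PBDI'] / var['CSI']:.2f}")  # reported: 1.16
```

The SI variance is markedly smaller because both its within-cell spread and the spread among its anchor-condition means are smaller than those of the CSI and PBDI.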
The second, third, and fourth hypotheses of this study suggested that an increase
in interview structure would decrease the effect of anchoring. Three 2 x 2 (Interview
Type x Anchor) ANOVAs were conducted. The first tested whether anchoring

FIGURE 1 Mean overall ratings by interviewers in low, control, and high anchor conditions.

effects were more pronounced in the ratings of interviewers using the CSI compared
with those using the PBDI for both high and low anchor conditions. An answer to
this question can be determined with regard to the existence of a significant
Interview Type x Anchor interaction effect (Keppel, 1991). The interaction test
shows whether the simple effects of anchoring may be considered the same (no
interaction) or different (interaction). The results revealed that the interaction term
was not significant, thus indicating no significant difference between the PBDI and
the CSI in terms of resistance to the effects of anchoring, F(1, 82) = 0.26, p < .61.
Hypothesis 2 was therefore rejected.
The second 2 x 2 (Interview Type x Anchor) ANOVA compared the ratings
obtained in the SI condition with those obtained in the CSI condition for both high
and low anchor conditions. The results in this case revealed a significant interaction
effect, F(1, 83) = 8.91, p < .01. These results indicate that although the SI is still
susceptible to anchoring effects, it was more resistant than was the CSI. Hypothesis
3 was thus supported.
The third 2 x 2 (Interview Type x Anchor) ANOVA compared the ratings
obtained in the SI condition with those of the PBDI. The results revealed a
marginally significant interaction, F(1, 79) = 3.19, p < .07, suggesting that the SI
is more resistant to anchoring effects than the PBDI (Hypothesis 4).
The results of the analyses of interactions between the different types of
interviews are shown in Figure 2.
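Because each Interview Type x Anchor interaction has a single degree of freedom, the three 2 x 2 interaction F values can be approximated from the Table 1 cell statistics as the squared interaction contrast (the difference between the two anchoring effects) divided by its variance under the pooled within-cell error term. This reconstruction (ours, not the article's own computation) reproduces the published values to within rounding.

```python
# Approximate each 2 x 2 (Interview Type x Anchor) interaction F from
# Table 1: the interaction contrast is the difference between the two
# anchoring effects (high minus low), tested against the pooled
# within-cell variance of the four cells involved.

table1 = {  # type -> {"low": (mean, sd, n), "high": (mean, sd, n)}
    "CSI":  {"low": (29.21, 5.30, 24), "high": (38.10, 5.69, 21)},
    "PBDI": {"low": (31.62, 7.42, 21), "high": (39.10, 7.06, 20)},
    "SI":   {"low": (30.62, 3.37, 21), "high": (33.75, 2.70, 21)},
}

def interaction_F(type_a, type_b):
    cells = [table1[type_a]["low"], table1[type_a]["high"],
             table1[type_b]["low"], table1[type_b]["high"]]
    # Pooled within-cell (error) variance across the four cells.
    ss_w = sum((n - 1) * sd ** 2 for _, sd, n in cells)
    df_w = sum(n for _, _, n in cells) - 4
    ms_w = ss_w / df_w
    # Anchoring effect (high minus low) within each interview type.
    effect_a = table1[type_a]["high"][0] - table1[type_a]["low"][0]
    effect_b = table1[type_b]["high"][0] - table1[type_b]["low"][0]
    # Variance of the 1-df interaction contrast; F = t^2.
    se2 = ms_w * sum(1 / n for _, _, n in cells)
    F = (effect_a - effect_b) ** 2 / se2
    return F, df_w

for a, b, reported in [("CSI", "PBDI", 0.26), ("CSI", "SI", 8.91),
                       ("PBDI", "SI", 3.19)]:
    F, df_w = interaction_F(a, b)
    print(f"{a} vs {b}: F(1, {df_w}) = {F:.2f} (reported: {reported})")
```

The error degrees of freedom (82, 83, and 79) also match those reported for the three interaction tests.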

DISCUSSION

This study showed that interviewers, when rating the responses of job candidates
to interview questions, are susceptible to anchoring effects when employing struc-
tured interview techniques. Candidate ratings were biased in the direction of the
anchor provided to the interviewer, regardless of the interview technique that was
used. This bias, however, was significantly less when the SI was used as compared
to the CSI and the PBDI.
The relative resistance of the SI to anchoring effects as compared with the PBDI
is likely attributable to the use of a scoring guide. Scoring guides may tend to reduce
anchoring effects because the behaviors appearing on the guides are themselves
"anchors" designed to assist raters in making judgments. These referents may serve
to decrease the likelihood of inappropriate anchoring. That anchoring effects from
irrelevant sources were not entirely eliminated by the use of a scoring guide,
however, is testimony to the strength of what is clearly a robust phenomenon
(Bazerman, 1990).
This study contributes to knowledge in three ways. First, it links the selection
interview literature with the literature on individual decision making in a way that
increases our understanding of the selection interview, while adding to our under-
standing of the boundary conditions of the process of anchoring and adjustment.


Although some work has been done linking the individual decision-making litera-
ture with human resource management in general (e.g., Northcraft, Neale, & Huber,
1989), and with selection decisions specifically (e.g., Huber, Northcraft, & Neale,
1990), the effects of cognitive biases on different types of structured selection
interviews have not been previously investigated.
Second, the results of this study suggest a way to debias human judgment. There
has been considerable research demonstrating the existence of bias in decision
making, but relatively less effort devoted to examining how to reduce it. Also, the
results of research on debiasing judgment have not been overly encouraging. This
study, however, represents some good news in the sense that it indicates how one
source of bias, the anchoring heuristic, can be reduced by the use of a scoring
guide.
Third, the study has practical implications for managerial behavior. It demon-
strates that a job candidate may be judged more favorably than justified because
the interviewer is using a high anchor when rating candidate responses to interview
questions. The result in such a case could be an inappropriate hiring decision.
Similarly, a decision not to hire a suitable candidate could result if the interviewer
uses a low anchor to assess answers to questions asked during the interview process.
In either case, the result is negative for the organization.
Although one could question the extent to which high motivation to make
accurate decisions affects the anchoring process, evidence suggests that the intro-
duction of substantial incentives to make accurate choices does not eliminate
systematic errors of judgment (e.g., Grether & Plott, 1979; Slovic & Lichtenstein,
1983). Incentives narrow attention and increase deliberation (Tversky & Kahne-
man, 1986). Because people rely unconsciously on anchors to make estimates under
uncertainty (Tversky & Kahneman, 1974), it is not at all clear how incentives would
reduce anchoring effects. The external validity of the present findings is therefore
an issue for further research.
Another potential limitation of this study is the fact that each participant saw
only one videotape of a single candidate. Although the participants had considerable
years of work experience, a stronger test of the hypotheses would involve a field
study that included different candidates for different jobs.
Future research should examine the extent to which anchoring effects influence
the outcome of different structured interview techniques when interviews are
conducted and decisions are made by a panel. Examination of the extent to which
group decision making may amplify or reduce the effect of heuristics on interviewer
ratings should prove to be interesting and worthwhile (e.g., Argote, Seabright, &
Dyer, 1986; Whyte, 1993).
The extent to which structured selection interviews are free from bias has
received little research attention. To further explore this question, this study
proposed that even structured selection interviews can be understood as fertile
ground for the occurrence of the cognitive biases that characterize individual
decision making. These biases potentially reduce the reliability and validity of the
selection interview. Through an understanding of the causes and consequences of
these biases, the employment interview as a selection device can be improved.

ACKNOWLEDGMENT

This research was supported by a SSHRC grant.



REFERENCES

Argote, L., Seabright, M. A., & Dyer, L. (1986). Individual versus group use of base-rate and
individuating information. Organizational Behavior and Human Decision Processes, 38, 65-75.
Bazerman, M. H. (1990). Judgment in managerial decision making. New York: Wiley.
Block, R. A., & Harper, D. R. (1991). Overconfidence in estimation: Testing the anchoring and
adjustment hypothesis. Organizational Behavior and Human Decision Processes, 49, 188-207.
Bobko, P., Shetzer, L., & Russell, C. (1991). Estimating the standard deviation of professors' worth:
The effect of frame and presentation in utility analysis. Journal of Occupational Psychology, 64,
179-188.
Butler, S. A. (1986). Anchoring in the judgmental evaluation of audit samples. Accounting Review, 61,
101-111.
Campion, M. A., Pursell, E. D., & Brown, B. K. (1988). Structured interviewing: Raising the psychometric
properties of the employment interview. Personnel Psychology, 41, 25-42.
Cervone, D., & Peake, P. K. (1986). Anchoring, efficacy, and action: The influence of judgmental
heuristics on self-efficacy judgments and behavior. Journal of Personality and Social Psychology,
50, 492-501.
Cohen, J. (1977). Statistical power analysis for the behavioral sciences. New York: Academic.
Cronshaw, S. F., & Wiesner, W. H. (1989). The validity of the employment interview: Models for
research and practice. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory,
research, and practice (pp. 269-281). Beverly Hills: Sage.
Davis, H. L., Hoch, S. J., & Ragsdale, E. K. (1986). An anchoring and adjustment model of spousal
prediction. Journal of Consumer Research, 13, 25-37.
Eder, R. W., Kacmar, K. M., & Ferris, G. R. (1989). Employment interview research: History and
synthesis. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and
practice (pp. 17-31). Beverly Hills: Sage.
Edwards, W., Lindman, H., & Phillips, L. D. (1965). Emerging technologies for making decisions. In
T. M. Newcomb (Ed.), New directions in psychology II (pp. 261-325). New York: Holt, Rinehart
& Winston.
Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference.
Psychological Review, 92, 433-461.
Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.
Friedlander, M. L., & Stockman, S. J. (1983). Anchoring and publicity effects in clinical judgment.
Journal of Clinical Psychology, 39, 637-643.
Gordon, M. E., Slade, L. A., & Schmitt, N. (1986). The science of the sophomore revisited: From
conjecture to empiricism. Academy of Management Review, 11, 191-207.

Grether, D. M., & Plott, C. R. (1979). Economic theory of choice and the preference reversal
phenomenon. American Economic Review, 69, 623-638.
Harris, M. M. (1989). Reconsidering the employment interview: A review of recent literature and
suggestions for future research. Personnel Psychology, 42, 691-726.
Hogarth, R. M. (1980). Judgment and choice. New York: Wiley.
Hogarth, R. M., & Einhorn, H. J. (1989). Order effects in belief updating: The belief adjustment model.
Working paper, Center for Decision Research, University of Chicago.
Huber, V. L., & Neale, M. A. (1986). Effects of cognitive heuristics and goals on negotiator performance
and subsequent goal setting. Organizational Behavior and Human Decision Processes, 38, 342-365.
Huber, V. L., Northcraft, G. B., & Neale, M. A. (1990). Effects of design strategy and number of
openings on employment selection decisions. Organizational Behavior and Human Decision
Processes, 45, 276-284.
Huffcutt, A. I., & Arthur, W. (1994). Hunter and Hunter (1984) revisited: Interview validity for
entry-level jobs. Journal of Applied Psychology, 79, 184-190.
Ilgen, D. R. (1986). Laboratory research: A question of when, not if. In E. A. Locke (Ed.), Generalizing
from laboratory to field settings (pp. 257-267). Lexington, MA: Lexington.
Janz, T. (1982). Initial comparisons of patterned behavior description interviews versus unstructured
interviews. Journal of Applied Psychology, 67, 577-580.
Janz, T. (1989). The patterned behavior description interview: The best prophet of the future is the past.
In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice.
Beverly Hills: Sage.
Johnson, P. E., Jamal, K., & Berryman, M. G. (1991). Effects of framing on auditor decisions.
Organizational Behavior and Human Decision Processes, 50, 75-105.
Johnson, E. J., & Schkade, D. A. (1988). Bias in utility assessments: Further evidence and explanations.
Management Science, 35, 406-424.
Joyce, E. J., & Biddle, G. C. (1981). Anchoring and adjustment in probabilistic inferences in auditing.
Journal of Accounting Research, 19, 120-145.
Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases.
New York: Cambridge University Press.
Keppel, G. (1991). Design and analysis: A researcher's handbook. Englewood Cliffs, NJ: Prentice-Hall.
Keppel, G., & Zedeck, S. (1989). Data analysis for research designs. New York: Freeman.
Latham, G. P. (1989). The reliability, validity, and practicality of the situational interview. In R. W.
Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and practice (pp. 169-182).
Beverly Hills: Sage.
Latham, G. P., & Saari, L. M. (1984). Do people do what they say? Further studies on the situational
interview. Journal of Applied Psychology, 69, 569-573.
Latham, G. P., Saari, L. M., Pursell, E. D., & Campion, M. A. (1980). The situational interview. Journal
of Applied Psychology, 65, 422-427.
Latham, G. P., & Skarlicki, D. (1995). Criterion-related validity of the situational and patterned behavior
description interviews with organizational citizenship behavior. Human Performance, 8, 67-80.
Latham, G. P., & Skarlicki, D. (1996). The effectiveness of the situational, patterned behavior, and
conventional structured interviews in minimizing in-group favouritism of Canadian francophone
managers. Applied Psychology: An International Review, 45, 177-184.
Latham, G. P., Wexley, K. N., & Pursell, E. D. (1975). Training managers to minimize rating errors in
the observation of behavior. Journal of Applied Psychology, 60, 550-555.
Lichtenstein, S., & Slovic, P. (1971). Reversal of preference between bids and choices in gambling
decisions. Journal of Experimental Psychology, 89, 46-55.
Lin, T. R., Dobbins, G. H., & Farh, J. L. (1992). A field study of race and age similarity on interview
ratings in conventional and situational interviews. Journal of Applied Psychology, 77, 363-371.

Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. Englewood Cliffs,
NJ: Prentice-Hall.
Lopes, L. L. (1985). Averaging rules and adjustment processes in Bayesian inference. Bulletin of the
Psychonomic Society, 23, 509-512.
Lopes, L. L. (1987). Procedural debiasing. Acta Psychologica, 64, 176-185.
Mano, H. (1990). Anticipated deadline penalties: Effects on goal levels and task performance. In R. M.
Hogarth (Ed.), Insights in decision making (pp. 175-176). Chicago: University of Chicago Press.
Maurer, S. D., & Fay, C. (1988). Effects of situational interviews, conventional structured interviews,
and training on interview rating agreement: An experimental analysis. Personnel Psychology, 41,
329-344.
Maurer, S. D., & Lee, T. W. (1994). Toward a resolution of contrast error in the employment interview:
A test of the situational interview. In D. P. Moore (Ed.), Academy of Management Best Papers
Proceedings 1994 (pp. 132-136). Madison, WI: Omnipress.


Mayfield, E. C. (1964). The selection interview: A reevaluation of published research. Personnel
Psychology, 17, 239-260.
Murphy, K. R., Balzer, W. K., Lockhart, M. C., & Eisenman, E. J. (1985). Effects of previous
performance on evaluations of present performance. Journal of Applied Psychology, 70, 72-84.
Northcraft, G. B., & Neale, M. A. (1987). Amateurs, experts, and real estate: An anchoring-and-adjust-
ment perspective on property pricing decisions. Organizational Behavior and Human Decision
Processes, 39, 84-97.
Northcraft, G. B., Neale, M. A., & Huber, V. L. (1989). The effects of cognitive biases and social
influence on human resource management decisions. In G. Ferris & K. Rowland (Eds.), Research
in personnel and human resource management. Greenwich, CT: JAI.
Orpen, C. (1985). Patterned behavior description interviews versus unstructured interviews: A com-
parative validity study. Journal of Applied Psychology, 70, 774-776.
Peterson, C. R., & DuCharme, W. M. (1967). A primacy effect in subjective probability revision. Journal
of Experimental Psychology, 73, 61-65.
Quattrone, G. A. (1982). Overattribution and unit formation: When behavior engulfs the person. Journal
of Personality and Social Psychology, 42, 593-607.
Shanteau, J., & Phelps, R. H. (1979). Things just don't add up: The case for subjective additivity of
utility (Psychology Rep. No. 79-8, Applied Psychology Series). Manhattan, KS: Kansas State
University.
Slovic, P., & Lichtenstein, S. (1983). Preference reversals: A broader perspective. American Economic
Review, 73, 596-605.
Smither, J. W., Reilly, R. R., & Buda, R. (1988). Effect of previous performance information on ratings
of present performance: Contrast versus assimilation revisited. Journal of Applied Psychology, 73,
487-496.
Sniezek, J. A. (1988). Prediction with single event versus aggregate data. Organizational Behavior and
Human Decision Processes, 41, 196-210.
Switzer, F. S., & Sniezek, J. A. (1991). Judgment processes in motivation: Anchoring and adjustment
effects on judgment and behavior. Organizational Behavior and Human Decision Processes, 49,
208-229.
Thorndike, R. L. (1949). Personnel selection. New York: Wiley.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185,
1124-1131.
Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of
Business, 59, S251-S278.
Ulrich, L., & Trumbo, D. (1965). The selection interview since 1949. Psychological Bulletin, 63,
100-116.

Weekley, J. A., & Gier, J. A. (1987). Reliability and validity of the situational interview for a sales
position. Journal of Applied Psychology, 72, 484-487.
Wexley, K. N., Sanders, R. E., & Yukl, G. A. (1973). Training interviewers to eliminate contrast effects
in employment interviews. Journal of Applied Psychology, 57, 233-236.
Whyte, G. (1993). Escalating commitment in individual and group decision making: A prospect theory
approach. Organizational Behavior and Human Decision Processes, 54, 430-455.
Wiesner, W. H., & Cronshaw, S. F. (1988). A meta-analytic investigation of the impact of interview
format and degree of structure on the validity of the employment interview. Journal of Occupational
Psychology, 61, 275-290.
Wright, W. F., & Anderson, U. (1989). Effects of situation familiarity and financial incentives on the
use of the anchoring and adjustment heuristic for probability assessment. Organizational Behavior
and Human Decision Processes, 44, 68-82.
Yadav, M. S. (1994). How buyers evaluate product bundles: A model of anchoring and adjustment.
Journal of Consumer Research, 21, 342-353.
Zuckerman, M., Koestner, R., Colella, M. J., & Alton, A. O. (1984). Anchoring in the detection of
deception and leakage. Journal of Personality and Social Psychology, 47, 301-311.
