To cite this article: Heloneida C. Kataoka, Gary P. Latham & Glen Whyte (1997) The Relative
Resistance of the Situational, Patterned Behavior, and Conventional Structured Interviews to
Anchoring Effects, Human Performance, 10:1, 47-63
Download by: [Mr John Track] Date: 07 August 2016, At: 05:34
HUMAN PERFORMANCE, 10(1), 47-63
Copyright © 1997, Lawrence Erlbaum Associates, Inc.
Requests for reprints should be sent to Glen Whyte, University of Toronto, Faculty of Management,
Rotman Centre for Management, 105 St. George Street, Toronto, Canada M5S 3E6.
48 KATAOKA, LATHAM, WHYTE
negotiations (Bazerman, 1990; Huber & Neale, 1986), utility analysis (Bobko,
Shetzer, & Russell, 1991), gambling (Lichtenstein & Slovic, 1971), sales prediction
(Hogarth, 1980), and predictions of spousal consumer preference (Davis,
Hoch, & Ragsdale, 1986). Heuristics simplify complex judgmental tasks, but they
may also introduce bias or systematic error when they are inappropriately used
(Bazerman, 1990).
Anchoring and adjustment refers to the heuristic that people unconsciously rely on
when estimating the value of something when that value is unknown or uncertain
(Tversky & Kahneman, 1974). This is essentially the task required of an interviewer
who is rating the quality of an interviewee's responses. People typically begin the
estimation process by selecting an anchor, from which they then adjust to arrive at
a final estimate. Such adjustment, however, tends to be insufficient, with the result
that final estimates are biased in the direction of the initial estimate. Most
detrimental to the accuracy of final estimates, however, is the tendency for people
to choose anchors because they are handy rather than because they are relevant
(Bazerman, 1990).
Anchoring is a robust phenomenon that has been observed in many domains and
tasks, including assessing probabilities (Edwards, Lindman, & Phillips, 1965;
Lopes, 1985, 1987; Peterson & DuCharme, 1967; Wright & Anderson, 1989),
making predictions based on historical data (Sniezek, 1988), making utility assessments
(Johnson & Schkade, 1988; Shanteau & Phelps, 1979), exercising clinical
judgment (Friedlander & Stockman, 1983; Zuckerman, Koestner, Colella, & Alton,
1984), inferring causal attributions (Quattrone, 1982), estimating confidence ranges
(Block & Harper, 1991), making accounting-related judgments (Butler, 1986), goal
setting (Mano, 1990), making motivation-related judgments (Cervone & Peake,
1986; Switzer & Sniezek, 1991), belief updating and change (Einhorn & Hogarth,
1985; Hogarth & Einhorn, 1989), evaluating product bundles (Yadav, 1994), and
determining listing prices for houses (Northcraft & Neale, 1987).
ANCHORING EFFECTS 49
to adjust them sufficiently when arriving at final estimates. This effect occurs even
when people are provided with monetary incentives to make accurate estimates
(Kahneman, Slovic, & Tversky, 1982).
We suggest that interviewers will be susceptible to anchoring effects in part
because of evidence from the literature on performance appraisal indicating that
evaluations are often biased in the direction of previous evaluations (Smither,
Reilly, & Burden, 1988). Past evaluations serve as an anchor for current evaluations
that are only partially revised in the face of new evidence (Murphy, Balzer,
Lockhart, & Eisenman, 1985).
Anchoring effects are distinct from other sources of bias, such as contrast effects.
Both Wexley, Sanders, and Yukl (1973) and Latham, Wexley, and Pursell (1975)
found that a decision to hire an applicant based on a selection interview is made by
contrasting the person's qualifications with those of other applicants. Thus a person
who was shown to be a 5 (marginally acceptable) on a 9-point scale when evaluated
against job requirements was judged to be a 3 or lower when preceded by two highly
qualified applicants. Similarly, the same person was assessed as a 7 or higher when
preceded by two unqualified applicants. Anchoring effects, however, may occur
independent of the presence of other applicants.
This study investigated the resistance of the conventional structured interview
(CSI), the patterned behavior description interview (PBDI), and the situational
interview (SI) to bias due to reliance on the anchoring-and-adjustment heuristic.
When rating the quality of an interviewee's answer to an interview question,
interviewers are engaged in the type of task that can be accomplished through
anchoring and adjustment. Different interviewers, however, may be using different
anchors when judging the quality of a candidate's response. For example, inter-
viewers may employ what they consider to be an excellent answer as an anchor,
and then compare the candidate's answer to it. In contrast, interviewers may employ
what they consider to be a poor answer as the anchor. The rating of a candidate's
response will likely manifest the effect of anchoring and adjustment regardless of
the anchor used. The extent to which an interview technique is structured, however,
may minimize anchoring effects.
STRUCTURED INTERVIEWS
The CSI typically consists of a series of job-related questions that are presented to
each candidate (Maurer & Fay, 1988). The questions focus on job responsibilities,
duties, job knowledge, and achievements in previous jobs, but they are not
necessarily based on a formal job analysis. The validity of structured interviews,
however, is increased when they are based on a formal job analysis (Wiesner &
Cronshaw, 1988). Two types of structured interview techniques that rely on a job
analysis are the SI (Latham, 1989) and the PBDI (Janz, 1989).
The SI is derived from goal setting theory and is based on the premise that
intentions predict behavior (Locke & Latham, 1990). Interview questions using this
method are determined from the results of a job analysis using the critical incident
technique (Flanagan, 1954). Job candidates are asked what they would do in
response to a series of job-related critical incidents. Each incident contains a
dilemma that is designed to elicit the candidates' intentions. A distinguishing
feature of the SI is that it provides a behavior-based scoring guide for interviewers
to use when evaluating candidates' responses.
The PBDI, in contrast, is based on the premise that the best predictor of future
behavior is past behavior. As with the SI, PBDI questions are based on the results
of a job analysis using the critical incident technique. Candidates are typically
presented with the criterion dimension of interest to the employer, asked to recall
a relevant past incident, and asked to describe the actions that they took to deal with
it. A scoring guide is neither practical nor typically used with a PBDI because of
the wide variability in the responses obtained in descriptions of each candidate's
personal history, and it has been eschewed by Janz (1989).
In contrast to the CSI, the PBDI and the SI focus explicitly on behavior. For
example, with the PBDI, candidates are asked to describe a specific situation that
occurred in the past, and to describe the actions that they took in response to it (Janz,
1989). With the SI, a specific situation is presented to candidates, who are then
asked to describe what action they would take if faced with that situation.
Research on structured interviews has focused primarily on issues of reliability
and validity (Janz, 1982; Latham & Saari, 1984; Latham, Saari, Pursell, & Campion,
1980; Latham & Skarlicki, 1995; Orpen, 1985; Weekley & Gier, 1987). Relatively
little attention, however, has been paid to the issue of freedom from bias (Harris,
1989). This is surprising, given that bias attenuates both reliability and validity
(Thorndike, 1949).
Four studies have investigated bias in the SI. Maurer and Fay (1988) investigated
the ability of interview structure and interviewer training to reduce the effects of
errors such as halo, contrast, similar-to-me, and first impressions on rating
variability. Even though no training effect was found, greater agreement was found
among the ratings obtained from the SI than among those obtained with the CSI.
These findings suggest that the SI is more robust to the effects of bias than the CSI.
Lin, Dobbins, and Farh (1992) investigated bias due to similarities between
interviewers and interviewees in terms of race and age. Stronger same-race effects
were found for the CSI than for the SI. Neither the CSI nor the SI was affected by
age similarity. In a study of police officers, Maurer and Lee (1994) found that the
SI minimized contrast effects on the accurate assessment of information provided
by multiple candidates.
Only one study has investigated the resistance of the PBDI to bias. Latham and
Skarlicki (1996) examined the effectiveness of the CSI, the PBDI, and the SI in
minimizing the similar-to-me bias of francophone managers in Quebec. This bias
was not apparent when either the SI or the PBDI was used, but it did occur when
the CSI was used.
INTERVIEW STRUCTURE
Compared to the SI and the PBDI, the CSI lacks structure (e.g., a scoring guide)
and a sole focus on behavior. As a result, the CSI provides only a relatively loose
framework that interviewers can rely on to assess the quality of an interviewee's
responses. In contrast, the structure and behavioral focus of the PBDI and SI may
reduce the likelihood that an interviewer will rely on an inappropriate anchor in
assessing the responses of an interviewee. In relation to the PBDI and the SI, the
CSI therefore may be more susceptible to problems such as those caused by
anchoring effects. Inherent in the application of the SI is the use of a scoring guide.
The scoring guide increases the degree of structure in the SI relative to both the
CSI and the PBDI. Therefore the SI may be less susceptible to an anchoring-induced
bias than are the other two techniques. Biases are partly responsible for
disagreements among decisions made by different interviewers (Maurer & Fay,
1988), because bias affects different interviewers differently. The PBDI, which
typically lacks a scoring guide, has modest interrater reliability coefficients in the
range of 0.49 (Janz, 1989), whereas the SI has shown much higher coefficients,
ranging from 0.76 to 0.96 (Latham, 1989). Thus, the hypotheses tested in this study
were as follows:
METHOD
Procedure
Video-taped simulated interviews, one for each interview type, were used to
maintain uniformity in both the candidate's answers and behavior during the
interview (Ilgen, 1986). All participants in each interview condition watched the
same videotape showing a candidate being interviewed for a position as a teller in
a bank. This job was chosen because the job requirements are straightforward (e.g.,
prioritize requests, handle complaints). Therefore, prior familiarity of the
interviewers with the job was not necessary.
To maintain consistency across interview type, the questions used in each
interview format were written to address the same job dimensions. To check for
consistency in this regard across interview type, organizational behavior doctoral
students (n = 5) were given the scripts for each of the three interviews and asked
whether they agreed that the questions tapped the same underlying dimensions.
Answers were given using a 5-point Likert-type scale ranging from 1 (strongly
disagree) to 5 (strongly agree). There was high agreement that the questions
represented similar dimensions across interview type (M = 4.4, SD = 0.5).
Candidate responses to interview questions were also developed. Average
answers, as opposed to either excellent or poor answers, were written for each of
the interview questions. Because the purpose of this research was to investigate the
effects of anchoring on interviewer ratings of candidate responses, this step would
potentially allow both the high and low anchor manipulations to demonstrate an
impact toward both ends of the rating scales.
The procedure followed to develop candidate responses to interviewer questions
was similar to the one used by Maurer and Fay (1988). The answers were first
developed for the SI, and were written to be comparable to what would be described
in the scoring guide as an average response. Care was taken so that the candidate
responses did not correspond verbatim to the behaviors referred to in the guide,
because this would rarely if ever happen in practice. After developing the answers
to the SI, comparable answers were then generated for the PBDI and the CSI.
Examples of comparable SI, PBDI, and CSI questions and their respective
answers are given here:
happen again. (3) Yes, I know-I didn't feel it went well either. What do you
suggest I do to improve? (5) Yes, I realized it didn't go well. I'd like to call
the client and follow up. But first, I'd like any suggestions you may have.
Answer: I would admit that the presentation went poorly, and I would try
to talk to others to get some advice that would help me to make a better
presentation the next time.
PBDI question: Tell me about a time when a client complained to your
boss that you were not prepared for a presentation that you made to the client?
What were the circumstances? What did you do? What was the outcome?
Who can I call to verify this information?
Answer: My boss asked me to make a presentation to a client about a
subject that I was not very familiar with. Unfortunately, during the presenta-
tion the client asked me some questions that I was unable to properly answer.
As a result, after the meeting the client complained to my boss about my
performance. My boss then called me into his office and told me about the
client's complaints. I admitted that the presentation went poorly, and I said I
was going to try to talk to others to get some advice that would help me to
make a better presentation the next time.
CSI question: How do you respond when your boss tells you that a client
has complained that you were unprepared for a presentation that you made
to the client?
Answer: I admit that the presentation went poorly, and say that I will try
to talk to others to get some advice that can help me to make better
presentations.
One video tape for each interview type was recorded using the same setting and
the same actor in the role of the job candidate to hold differences across conditions
constant. Only the candidate could be seen on tape. The interviewer could be heard
but not seen. The participants were asked to observe the videotape and evaluate the
candidate's answers. This procedure was modeled after one that is used by the
president of a bank to give final approval to the selection of tellers.
To the greatest extent possible, uniformity in the answers was maintained across
each of the interview formats. Uniformity of answers across interview conditions
was confirmed by a one-way analysis of variance (ANOVA) on the mean ratings
of interviewee responses in the control conditions of each interview technique. No
statistically significant difference was obtained in interview ratings regardless of
the interview technique used, F(2, 61) = 0.48, p < .62.
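The uniformity check above is an ordinary one-way ANOVA: the F statistic is the ratio of between-group to within-group mean squares. A minimal sketch of that computation follows; the rating vectors are hypothetical, since the paper does not report its raw data.

```python
def one_way_anova(groups):
    """One-way ANOVA: returns (F, df_between, df_within)."""
    k = len(groups)                              # number of conditions
    n = sum(len(g) for g in groups)              # total observations
    means = [sum(g) / len(g) for g in groups]    # condition means
    grand = sum(sum(g) for g in groups) / n      # grand mean
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Hypothetical control-condition total ratings for the three interview formats
csi = [29, 31, 30, 28, 32, 30]
pbdi = [30, 29, 31, 30, 28, 31]
si = [31, 30, 29, 32, 30, 29]
f_stat, df_b, df_w = one_way_anova([csi, pbdi, si])
```

A small F, as in the reported F(2, 61) = 0.48, indicates that mean ratings did not differ reliably across interview formats.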
Each participant received a booklet of experimental materials. The term booklet
denotes each of the nine unique sets of experimental materials used in this study.
Each booklet contained a set of instructions; a questionnaire tailored to either the
CSI, PBDI, or SI format; rating scales for each interview question ranging from 1
(poor) to 5 (good); the anchor manipulation (low, control, and high); and
manipulation check questions.
Anchor was manipulated in the following way. A variation of this technique has,
in a different context, successfully induced anchoring effects (e.g., Joyce & Biddle,
1981).
RESULTS
Dependent Variable
The overall rating received by the candidate was calculated as the sum of the scores
assigned to each of the 10 interview questions. Thus, the candidate's total score
could range from 10 to 50. Mean scores for each interview type and anchor
condition are shown in Table 1.
Manipulation Checks
After watching the interview and rating the candidate's answers, participants
completed single-item scales designed to investigate their perceptions of the
candidate's behavior during the interview. One-way ANOVAs on responses to the
scales revealed that the candidate's behavior was perceived consistently across
conditions. That is, there was no significant difference in how participants across
interview type viewed the candidate in terms of enthusiasm, F(2, 187) = 0.70, p <
.50; friendliness, F(2, 187) = 2.06, p < .13; confidence, F(2, 187) = 0.07, p < .93;
concern, F(2, 187) = 2.33, p < .11; attention, F(2, 187) = 1.58, p < .21; and
sincerity, F(2, 187) = 1.43, p < .24.
Statistical Analyses
Planned comparisons were used to test the four hypotheses of this study. This
method focuses on smaller designs of interest extracted from the original factorial
design (Keppel & Zedeck, 1989). Rather than focusing on an overall omnibus F
test, this method was chosen because it allows the researcher to focus on meaningful
components of the design and directly test the specific hypotheses of the study. It
also reduces the possibility of committing a Type I error (Keppel, 1991).
To test the first hypothesis, analyses of simple effects of the anchoring manipu-
lation on each interview type were conducted. One-way ANOVAs revealed that
TABLE 1
Means and Standard Deviations for Overall Ratings
of Candidate Responses According to Interview Type

                               Anchor
                    Low            Control          High
Interview Type    M   SD   n    M    SD   n     M    SD   n

Note. CSI = conventional structured interview; PBDI = patterned behavior description interview;
SI = situational interview.
for all three interview types, anchor had a significant effect on interviewer ratings:
CSI, F(2, 61) = 12.50, p < .01; PBDI, F(2, 59) = 6.44, p < .01; SI, F(2, 61) = 4.38,
p < .02. Therefore, the first hypothesis was supported. The effect size for the SI,
however, is medium (ω² ≥ 0.06; Cohen, 1977), whereas the effect sizes for the PBDI
and CSI are large (ω² ≥ 0.15). The ω² values for the CSI, PBDI, and SI were 0.27, 0.15,
and 0.10, respectively. Figure 1 illustrates anchoring effects on interview ratings for the
three interview types investigated.
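The reported ω² values can be recovered, to rounding, from the F statistics and degrees of freedom alone, using the standard estimator ω² = df_b(F - 1) / (df_b(F - 1) + N), where N = df_b + df_w + 1 is the total sample size. A sketch:

```python
def omega_squared(f, df_between, df_within):
    """Estimate omega-squared from a reported one-way ANOVA F statistic."""
    n_total = df_between + df_within + 1   # total sample size
    num = df_between * (f - 1.0)
    return num / (num + n_total)

# F values reported in the text for the anchoring simple effects
w2_csi = omega_squared(12.50, 2, 61)   # ≈ 0.26 (reported as 0.27)
w2_pbdi = omega_squared(6.44, 2, 59)   # ≈ 0.15
w2_si = omega_squared(4.38, 2, 61)     # ≈ 0.096, i.e., 0.10 at two decimals
```

The small discrepancy for the CSI (0.26 vs. 0.27) is consistent with the reported F having itself been rounded.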
To further investigate the relative susceptibility of each interview type to
anchoring, F tests were conducted on the differences between the variances of
ratings pooled according to interview type. The variance of interviewer ratings was
significantly less for the SI than for either the CSI, F(63, 63) = 3.62, p < .01, or the
PBDI, F(63, 61) = 4.22, p < .01. The difference in the variance of interviewer
ratings between the PBDI and the CSI, however, was not significant, F(61, 63) =
1.16, p < .10. These results suggest that ratings in the SI condition were less affected
by anchoring than ratings in the other interview conditions. The higher degree of
agreement among raters in the SI condition than in the CSI condition also replicates
the results obtained by Maurer and Fay (1988).
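The variance comparisons above are two-sample variance-ratio F tests: the larger sample variance over the smaller, with (n1 - 1, n2 - 1) degrees of freedom. A minimal sketch follows; the rating vectors are hypothetical, since the paper's raw data are not reported.

```python
def sample_variance(xs):
    """Unbiased sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def variance_ratio_f(a, b):
    """Variance-ratio F test with the larger variance in the numerator.
    Returns (F, df_numerator, df_denominator)."""
    va, vb = sample_variance(a), sample_variance(b)
    if va >= vb:
        return va / vb, len(a) - 1, len(b) - 1
    return vb / va, len(b) - 1, len(a) - 1

# Hypothetical ratings: widely spread (CSI-like) vs. tightly clustered (SI-like)
csi_like = [22, 35, 28, 41, 30, 25, 38, 33]
si_like = [29, 31, 30, 32, 30, 29, 31, 30]
f_stat, df1, df2 = variance_ratio_f(csi_like, si_like)
```

A large ratio, as in the reported F(63, 63) = 3.62, indicates that ratings under one technique varied substantially more across raters than under the other.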
FIGURE 1 Mean overall ratings by interviewers in low, control, and high anchor conditions.

The second, third, and fourth hypotheses of this study suggested that an increase
in interview structure would decrease the effect of anchoring. Three 2 x 2 (Interview
Type x Anchor) ANOVAs were conducted. The first tested whether anchoring
effects were more pronounced in the ratings of interviewers using the CSI compared
with those using the PBDI for both high and low anchor conditions. An answer to
this question can be determined with regard to the existence of a significant
Interview Type x Anchor interaction effect (Keppel, 1991). The interaction test
shows whether the simple effects of anchoring may be considered the same (no
interaction) or different (interaction). The results revealed that the interaction term
was not significant, thus indicating no significant difference between the PBDI and
the CSI in terms of resistance to the effects of anchoring, F(1, 82) = 0.26, p < .61.
Hypothesis 2 was therefore rejected.
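Each of these hypothesis tests reduces to the interaction term of a balanced 2 x 2 ANOVA: the interaction sum of squares is what remains of the between-cells variation after the two main effects are removed. A minimal sketch under the assumption of equal cell sizes (the cell data below are hypothetical; the paper's raw ratings are not reported):

```python
def interaction_f(cells):
    """Interaction F for a balanced 2 x 2 design.
    cells maps (factor_a, factor_b) -> list of ratings, each factor coded 0/1."""
    n = len(next(iter(cells.values())))               # per-cell sample size
    grand = sum(sum(v) for v in cells.values()) / (4 * n)
    cell_m = {k: sum(v) / n for k, v in cells.items()}
    a_m = [(cell_m[(a, 0)] + cell_m[(a, 1)]) / 2 for a in (0, 1)]  # row means
    b_m = [(cell_m[(0, b)] + cell_m[(1, b)]) / 2 for b in (0, 1)]  # column means
    ss_a = 2 * n * sum((m - grand) ** 2 for m in a_m)
    ss_b = 2 * n * sum((m - grand) ** 2 for m in b_m)
    ss_cells = n * sum((m - grand) ** 2 for m in cell_m.values())
    ss_within = sum((x - cell_m[k]) ** 2 for k, v in cells.items() for x in v)
    ss_ab = ss_cells - ss_a - ss_b                    # interaction sum of squares
    df_w = 4 * (n - 1)
    return (ss_ab / 1) / (ss_within / df_w), 1, df_w

# Hypothetical cells: the anchor moves type-0 ratings far more than type-1 ratings
cells = {(0, 0): [24, 26, 25], (0, 1): [36, 34, 35],   # type 0: low vs. high anchor
         (1, 0): [29, 30, 31], (1, 1): [31, 32, 30]}   # type 1: low vs. high anchor
f_stat, df1, df2 = interaction_f(cells)
```

A significant interaction F means the anchor's simple effect differs across the two interview types, which is exactly the comparison the three ANOVAs above perform.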
The second 2 x 2 (Interview Type x Anchor) ANOVA compared the ratings
obtained in the SI condition with those obtained in the CSI condition for both high
and low anchor conditions. The results in this case revealed a significant interaction
effect, F(1, 83) = 8.91, p < .01. These results indicate that although the SI is still
susceptible to anchoring effects, it was more resistant than was the CSI. Hypothesis
3 was thus supported.
The third 2 x 2 (Interview Type x Anchor) ANOVA compared the ratings
obtained in the SI condition with those of the PBDI. The results revealed a
marginally significant interaction, F(1, 79) = 3.19, p < .07, suggesting that the SI
is more resistant to anchoring effects than the PBDI (Hypothesis 4).
The results of the analyses of interactions between the different types of
interviews are shown in Figure 2.
DISCUSSION
This study showed that interviewers, when rating the responses of job candidates
to interview questions, are susceptible to anchoring effects when employing
structured interview techniques. Candidate ratings were biased in the direction of the
anchor provided to the interviewer, regardless of the interview technique that was
used. This bias, however, was significantly less when the SI was used as compared
to the CSI and the PBDI.
The relative resistance of the SI to anchoring effects as compared with the PBDI
is likely attributable to the use of a scoring guide. Scoring guides may tend to reduce
anchoring effects because the behaviors appearing on the guides are themselves
"anchors" designed to assist raters in making judgments. These referents may serve
to decrease the likelihood of inappropriate anchoring. That anchoring effects from
irrelevant sources were not entirely eliminated by the use of a scoring guide,
however, is testimony to the strength of what is clearly a robust phenomenon
(Bazerman, 1990).
This study contributes to knowledge in three ways. First, it links the selection
interview literature with the literature on individual decision making in a way that
increases our understanding of the selection interview, while adding to our
understanding of individual decision making. Second, it shows that a source of bias,
the anchoring heuristic, can be reduced by the use of a scoring guide.
Third, the study has practical implications for managerial behavior. It
demonstrates that a job candidate may be judged more favorably than is justified
because the interviewer is using a high anchor when rating candidate responses to
interview questions. The result in such a case could be an inappropriate hiring
decision. Similarly, a decision not to hire a suitable candidate could result if the
interviewer uses a low anchor to assess answers to questions asked during the
interview process. In either case, the result is negative for the organization.
Although one could question the extent to which high motivation to make
accurate decisions affects the anchoring process, evidence suggests that the
introduction of substantial incentives to make accurate choices does not eliminate
systematic errors of judgment (e.g., Grether & Plott, 1979; Slovic & Lichtenstein,
1983). Incentives narrow attention and increase deliberation (Tversky & Kahneman,
1986). Because people rely unconsciously on anchors to make estimates under
uncertainty (Tversky & Kahneman, 1974), it is not at all clear how incentives would
reduce anchoring effects. The external validity of the present findings is therefore
an issue for further research.
Another potential limitation of this study is the fact that each participant saw
only one videotape of a single candidate. Although the participants had considerable
work experience, a stronger test of the hypotheses would involve a field study that
included different candidates for different jobs.
Future research should examine the extent to which anchoring effects influence
the outcome of different structured interview techniques when interviews are
conducted and decisions are made by a panel. Examination of the extent to which
group decision making may amplify or reduce the effect of heuristics on interviewer
ratings should prove to be interesting and worthwhile (e.g., Argote, Seabright, &
Dyer, 1986; Whyte, 1993).
The extent to which structured selection interviews are free from bias has
received little research attention. To further explore this question, this study
proposed that even structured selection interviews can be understood as fertile
ground for the occurrence of the cognitive biases that characterize individual
decision making. These biases potentially reduce the reliability and validity of the
selection interview. Through an understanding of the causes and consequences of
these biases, the employment interview as a selection device can be improved.
REFERENCES
Argote, L., Seabright, M. A., & Dyer, L. (1986). Individual versus group use of base-rate and
individuating information. Organizational Behavior and Human Decision Processes, 38, 65-75.
Bazerman, M. H. (1990). Judgment in managerial decision making. New York: Wiley.
Block, R. A., & Harper, D. R. (1991). Overconfidence in estimation: Testing the anchoring and
adjustment hypothesis. Organizational Behavior and Human Decision Processes, 49, 188-207.
Bobko, P., Shetzer, L., & Russell, C. (1991). Estimating the standard deviation of professors' worth:
The effect of frame and presentation in utility analysis. Journal of Occupational Psychology, 64,
179-188.
Butler, S. A. (1986). Anchoring in the judgmental evaluation of audit samples. Accounting Review, 61,
101-111.
Campion, M. A., Pursell, E. D., & Brown, B. K. (1988). Structured interviewing: Raising the
psychometric properties of the employment interview. Personnel Psychology, 41, 25-42.
Cervone, D., & Peake, P. K. (1986). Anchoring, efficacy, and action: The influence of judgmental
heuristics on self-efficacy judgments and behavior. Journal of Personality and Social Psychology,
50, 492-501.
Cohen, J. (1977). Statistical power analysis for the behavioral sciences. New York: Academic.
Cronshaw, S. F., & Wiesner, W. H. (1989). The validity of the employment interview: Models for
research and practice. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory,
research, and practice (pp. 269-281). Beverly Hills, CA: Sage.
Davis, H. L., Hoch, S. J., & Ragsdale, E. K. (1986). An anchoring and adjustment model of spousal
prediction. Journal of Consumer Research, 13, 25-37.
Eder, R. W., Kacmar, K. M., & Ferris, G. R. (1989). Employment interview research: History and
synthesis. In R. W. Eder & G. R. Ferris (Eds.), The employment interview: Theory, research, and
practice (pp. 17-31). Beverly Hills, CA: Sage.
Edwards, W., Lindman, H., & Phillips, L. D. (1965). Emerging technologies for making decisions. In
T. M. Newcomb (Ed.), New directions in psychology II (pp. 261-325). New York: Holt, Rinehart
& Winston.
Einhorn, H. J., & Hogarth, R. M. (1985). Ambiguity and uncertainty in probabilistic inference.
Psychological Review, 92, 433-461.
Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51, 327-358.
Friedlander, M. L., & Stockman, S. J. (1983). Anchoring and publicity effects in clinical judgment.
Journal of Clinical Psychology, 39, 637-643.
Gordon, M. E., Slade, L. A., & Schmitt, N. (1986). The science of the sophomore revisited: From
conjecture to empiricism. Academy of Management Review, 11, 191-207.
Grether, D. M., & Plott, C. R. (1979). Economic theory of choice and the preference reversal
phenomenon. American Economic Review, 69, 623-638.
Harris, M. M. (1989). Reconsidering the employment interview: A review of recent literature and
suggestions for future research. Personnel Psychology, 42, 691-726.
Hogarth, R. M. (1980). Judgment and choice. New York: Wiley.
Hogarth, R. M., & Einhorn, H. J. (1989). Order effects in belief updating: The belief adjustment model.
Working paper, Center for Decision Research, University of Chicago.
Huber, V. L., & Neale, M. A. (1986). Effects of cognitive heuristics and goals on negotiator performance
and subsequent goal setting. Organizational Behavior and Human Decision Processes, 38, 342-365.
Huber, V. L., Northcraft, G. B., & Neale, M. A. (1990). Effects of design strategy and number of
openings on employment selection decisions. Organizational Behavior and Human Decision
Processes, 45, 276-284.
Huffcutt, A. I., & Arthur, W. (1994). Hunter and Hunter (1984) revisited: Interview validity for
entry-level jobs. Journal of Applied Psychology, 79, 184-190.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. Englewood Cliffs,
NJ: Prentice-Hall.
Lopes, L. L. (1985). Averaging rules and adjustment processes in Bayesian inference. Bulletin of the
Psychonomic Society, 23, 509-512.
Lopes, L. L. (1987). Procedural debiasing. Acta Psychologica, 64, 167-185.
Mano, H. (1990). Anticipated deadline penalties: Effects on goal levels and task performance. In R. M.
Hogarth (Ed.), Insights in decision making (pp. 175-176). Chicago: University of Chicago Press.
Maurer, S. D., & Fay, C. (1988). Effects of situational interviews, conventional structured interviews,
and training on interview rating agreement: An experimental analysis. Personnel Psychology, 41,
329-344.
Maurer, S. D., & Lee, T. W. (1994). Toward a resolution of contrast error in the employment interview:
A test of the situational interview. In D. P. Moore (Ed.), Academy of Management Best Papers
Proceedings.
Weekley, J. A., & Gier, J. A. (1987). Reliability and validity of the situational interview for a sales
position. Journal of Applied Psychology, 72, 484-487.
Wexley, K. N., Sanders, R. E., & Yukl, G. A. (1973). Training interviewers to eliminate contrast effects
in employment interviews. Journal of Applied Psychology, 57, 233-236.
Whyte, G. (1993). Escalating commitment in individual and group decision making: A prospect theory
approach. Organizational Behavior and Human Decision Processes, 54, 430-455.
Wiesner, W. H., & Cronshaw, S. F. (1988). A meta-analytic investigation of the impact of interview
format and degree of structure on the validity of the employment interview. Journal of Occupational
Psychology, 61, 275-290.
Wright, W. F., & Anderson, U. (1989). Effects of situation familiarity and financial incentives on the
use of the anchoring and adjustment heuristic for probability assessment. Organizational Behavior
and Human Decision Processes, 44, 68-82.
Yadav, M. S. (1994). How buyers evaluate product bundles: A model of anchoring and adjustment.
Journal of Consumer Research, 21, 342-353.