

DO SMARTER TEAMS DO BETTER? A Meta-Analysis of Cognitive Ability and Team Performance


DENNIS J. DEVINE
Indiana University-Purdue University, Indianapolis

JENNIFER L. PHILIPS
University of Akron

This study reports the results of several meta-analyses examining the relationship between four operational definitions of cognitive ability within teams (highest member score, lowest member score, mean score, standard deviation of scores) and team performance. The three indices associated with level yielded moderate and positive sample-weighted estimates of the population relationship (.21 to .29), but sampling error failed to account for enough variation to rule out moderator variables. In contrast, the index associated with dispersion (i.e., standard deviation of member scores) was essentially unrelated to team performance (-.03), and sampling error provided a plausible explanation for the observed variation across studies. A subgroup analysis revealed that mean cognitive ability was a much better predictor of team performance in laboratory settings (.37) than in field settings (.14). Study limitations, practical implications, and future research directions are discussed.

Cognitive ability is the capacity to understand complex ideas, learn from experience, reason, problem solve, and adapt (Neisser et al., 1996; Sternberg, 1997). After hundreds of empirical studies, it is now clear that cognitive ability is one of the best predictors of individual job performance (Hunter & Hunter, 1984; Schmidt & Hunter, 1998; Wagner, 1997). Given that many tasks performed by small groups or teams involve learning, reasoning, and problem solving, it seems likely that the cognitive ability of team members is related to team performance. However, the issue is not as straightforward as intuition suggests. It has been known for some time that relationships at one level of analysis do not necessarily hold at another (Kozlowski & Klein, 2000; Robinson, 1950; Thorndike, 1939). Klein, Dansereau, and Hall (1994) offered a classic example of this phenomenon in discussing the relationship between popular votes and outcomes in U.S. presidential elections. At the state level, the candidate receiving the most popular votes wins all the electoral votes (i.e., winner takes all); however, as highlighted by the 2000 U.S. presidential election, the winner at the national level does not necessarily receive the most popular votes in the country as a whole. Essentially, the relationship between popular votes and election outcome differs across levels. Assuming that the state-level relationship holds at the national level would be akin to making a cross-level fallacy (Rousseau, 1985). In a similar fashion, as far as work groups are concerned, it is not appropriate to assume a relationship between cognitive ability and performance at the team level based on studies conducted at the individual level. In other words, the strong positive relationship between cognitive ability and performance at the individual level does not cause (or imply) a relationship at the team level. Consider, for example, a decision-making team composed of members from several functional areas of an organization. Individually, the members may be intelligent and perform their specific roles well. However, this does not ensure that the team as a whole will do well. For instance, differences in perspective associated with the various functional areas may prevent the effective integration of relevant information (Hinsz, Tindale, & Vollrath, 1997; Larson & Christensen, 1993). In essence, the existence of a team-level relationship between cognitive ability and performance is an empirical issue: It could be positive, negative, nonexistent, or variable from situation to situation.

AUTHORS' NOTE: We thank Bill Rogers, Eric Sundstrom, and Jane Williams for their helpful comments on earlier versions of this article and Marc Fogel for his assistance in collecting and coding data.

SMALL GROUP RESEARCH, Vol. 32 No. 5, October 2001 507-532. © 2001 Sage Publications.
With the growing use of work groups and teams in organizations, it would be beneficial for practitioners to identify valid, low-cost, practical predictors of team performance. Cognitive ability tests may prove useful in this regard, but their value cannot be assumed.

Devine, Philips / COGNITIVE ABILITY IN TEAMS


Complicating assessment of the relationship between cognitive ability and performance in work groups is the variety of ways in which cognitive ability can be operationally defined at the team level. In the empirical literature on small groups, three indices associated with the functional amount (i.e., level) of cognitive resources available to the team have appeared most often: (a) the value of the team's highest scoring individual, (b) the value of the team's lowest scoring individual, and (c) the mean of team member scores. These three operational definitions correspond to three task types identified by Steiner (1972), involving different functional relationships between individual and group performance: (a) tasks where the best individual performance determines the group's performance, (b) tasks where the worst individual performance determines the group's performance, and (c) tasks where all individual performances contribute to group performance in a summative fashion. Steiner labeled the first type of task disjunctive, the second type conjunctive, and the third type additive.

In addition to these three operational definitions associated with the functional level of cognitive ability present in a work group, the rise of diversity issues in the workplace has called attention to the potential influence of team-level variation on team dynamics and effectiveness (Jackson, 1996). This line of work suggests a fourth way of treating cognitive ability at the team level: in terms of the dispersion of member scores. Underlying much of the work on diversity are the notions that team member diversity is a good thing and that heterogeneous groups should outperform homogeneous groups (especially on tasks requiring creativity or innovation) because they possess a larger pool of task-related resources (K. Y. Williams & O'Reilly, 1998).
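The four team-level operational definitions can be illustrated with a short sketch. The scores below are hypothetical, and the choice of the population (rather than sample) standard deviation is an assumption for illustration; the studies reviewed do not specify which formula was used:

```python
from statistics import mean, pstdev

def team_ability_indices(member_scores):
    """Compute the four team-level operational definitions of
    cognitive ability from one team's individual test scores."""
    return {
        "HIGH": max(member_scores),   # disjunctive tasks (Steiner, 1972)
        "LOW": min(member_scores),    # conjunctive tasks
        "M": mean(member_scores),     # additive tasks
        "SD": pstdev(member_scores),  # dispersion (diversity) index
    }

# A hypothetical four-person team's Wonderlic-style scores:
print(team_ability_indices([28, 21, 24, 19]))
```

Note that the three level indices (HIGH, LOW, M) can move together across teams, whereas the dispersion index (SD) is free to vary independently of them, which is why the meta-analysis treats it separately.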
Although there has been little systematic discussion concerning the impact of team member diversity with regard to cognitive ability, a number of recent studies have nonetheless examined the association between the standard deviation of member cognitive ability scores and team performance. From a practical perspective, it would be advantageous to learn if the four operational definitions vary with regard to their criterion-related validity. To the extent they do, this would suggest strategies for composing work groups. For example, if the best predictor of team performance across a variety of tasks turned out to be the score of the most intelligent member, this would suggest ensuring that all work groups have at least one very intelligent team member. Conversely, if mean cognitive ability score was found to be the best predictor of team performance, efforts could be made to select as many intelligent members as possible without excessive concern over finding a single genius.

Fortunately, although research on cognitive ability in work groups is a relatively recent phenomenon, there is now enough data to make a preliminary assessment of the team-level relationship using meta-analytic methods. In the next section, we report the results of several meta-analyses concerning the relationship between team-level cognitive ability and team performance using four operational definitions of team-level cognitive ability (i.e., mean member score, highest member score, lowest member score, and standard deviation of scores). Given the strength and robustness of the individual-level relationship, our primary research hypothesis was as follows: Team-level indices reflecting the highest, lowest, and mean cognitive ability scores within work groups will each be positively related to team performance. Furthermore, given the critical role of member interdependence in most work groups, it seems likely that the performance of few (if any) work groups would be determined solely by the actions of a single member as implied by Steiner's (1972) disjunctive and conjunctive task types. Even in work groups where one member is clearly most important, it is difficult to imagine that performance is not affected to some degree by other members.
Therefore, we expected that the observed relationship between team-level cognitive ability and performance would be stronger when indexed by the mean member score as opposed to the highest or lowest score on the team. Given the lack of theoretical development in the literature, we did not have any formal expectations with regard to the relationship between the dispersion of cognitive ability scores and team performance.


METHOD
LITERATURE SEARCH

Before any meta-analysis can be conducted on antecedents of team performance, it is necessary to address the issue of whether work groups and teams will be treated as the same thing. Although persuasive arguments have been made for treating them separately, the distinction is relatively recent, and the two terms have been used synonymously throughout much of the literature on small groups (Guzzo, 1996; Ilgen, 1999). Furthermore, in any event, preliminary inspection revealed that few empirical reports described the study context well enough to make fine distinctions. As a result, we use the terms group, work group, and team interchangeably in reference to small collectives of individuals that interact for the purpose of accomplishing one or more shared goals while operating with some degree of interdependence.

Several convergent methods were used to search for studies with usable data. First, we conducted a computerized search of the PsycINFO database for the years 1967 to 1999. The following terms (and relevant combinations) were used in the computerized database search: team, group, workgroup, cognitive ability, mental ability, general ability, ability, intelligence, performance, effectiveness, efficiency, productivity, and outcome. The authors also manually scanned the titles in each issue of the following journals for the past 10 years: Journal of Applied Psychology, Personnel Psychology, Academy of Management Journal, Journal of Management, Journal of Organizational Behavior, Organizational Behavior and Human Decision Processes, Small Group Research, Journal of Personality and Social Psychology, and Intelligence. Abstracts and method sections were consulted for any article possessing a title that suggested team-level research. Reference lists for recent theoretical and empirical papers and books on work group or team composition were also examined (e.g., Barrick, Stewart, Neubert, & Mount, 1998; Cohen & Bailey, 1997; Guzzo & Dickson, 1996; Ilgen, 1999; Milliken & Martins, 1996). Given the small number of published studies on this topic, special emphasis was placed on acquiring unpublished data. In particular, after reviewing abstracts, we obtained several unpublished theses and dissertations through interlibrary loan; two of these eventually proved useful (Blades, 1976; O'Connell, 1994). We also generated a list of researchers known to conduct research on groups and teams and contacted these individuals via e-mail. Data from four studies (i.e., Gully, 1997; Hollenbeck et al., 1999; Sundstrom & Futrell, 1999; Zukin, 1999) were obtained in this fashion.
INCLUSION CRITERIA

A study was included in the meta-analysis if it did each of the following: (a) measured individual-level cognitive ability and formed one or more team-level indices using some explicit aggregation process, (b) measured team performance, and (c) empirically assessed the degree of association between team-level cognitive ability and team performance using a Pearson's r correlation. The following were considered acceptable measures of cognitive ability: (a) scores from an established measure of cognitive ability (e.g., Wonderlic Personnel Test, 1992) or (b) general aptitude (e.g., American College Test Verbal, Scholastic Assessment Test Quantitative) or multiaptitude test scores (e.g., composite scores on the General Aptitude Test Battery or Differential Aptitude Test). Although scores from an established measure of cognitive ability are obviously preferable, general or composite aptitude scores tend to be highly correlated with measures of cognitive ability because they measure one or more primary components (Wagner, 1997).

Team performance was defined as the degree to which the team accomplished its goal or mission. We initially hoped to conduct separate analyses for accuracy and quality criteria and speed and quantity criteria, but only three studies were found that reported associations between measures of team-level cognitive ability and productivity (i.e., Neuman & Wright, 1999; O'Connell, 1994; Sundstrom & Futrell, 1999), so this was not possible. When multiple criteria were reported (e.g., production counts and subjective ratings), we chose the one that appeared to best capture overall accomplishment of the team's mission. When subjective performance measures were available for both supervisors and team members, we used supervisor ratings because of (a) their common use in administrative decision making in organizations, (b) their availability in each study, and (c) their tendency to exhibit more variability than aggregated team member self-ratings.

All studies appearing to be relevant were read in their entirety by one of the authors, and a final judgment was then made as to whether the study provided usable data. The majority of empirical studies identified in the computerized search were discarded because they did not analyze (or, in a few cases, report) the relationship between cognitive ability and performance at the team level. Because of our focus on general cognitive ability, we did not include studies solely involving team-level indices of task-specific ability. These studies used individual performance scores on some task as the basis for constructing a group-level index of task-specific ability, often as a benchmark for determining process gain or loss associated with group performance on the same task. We also did not include studies that used a domain-specific aptitude test score instead of a measure of general cognitive ability (e.g., Gurnee, 1937; Kabanoff & O'Brien, 1979).
Finally, we wish to note that we were unable to obtain usable data from three well-known studies of group composition (i.e., Spector & Suttell, 1957; Terborg, Castore, & DeNinno, 1976; Tziner & Eden, 1985) because cognitive ability scores were dichotomized to allow formation of teams with varying degrees of heterogeneity, but no quantitative values at the team level were calculated and/or reported. In total, our review of the literature yielded 19 studies that met all three inclusion criteria; Table 1 provides descriptive summaries of each. Several studies reported data from multiple related experiments or settings (i.e., Blades, 1976; O'Brien & Owens, 1969; Sundstrom & Futrell, 1999), resulting in a total of 25 independent samples. The majority of studies occurred in a controlled setting (11 of 19) within the past 10 years (14 of 19).

TABLE 1: Descriptive Information for Studies Included in One or More Meta-Analyses

Study | Task | Cognitive Ability Measure | Performance Measure
Barrick, Stewart, Neubert, and Mount (1998) | Small appliance and electronic assembly; fabrication and maintenance | Wonderlic Personnel Test | Supervisor rating of overall effectiveness (sum of 8 dimensions)
Blades (1976) (1 and 2) | Operating an army mess hall | Henmon-Nelson Mental Ability Test | Average of supervisor ratings (two raters)
Bottoms (1998) | SouthEast Airlines top management simulation | Wonderlic Personnel Test | Profit (from algorithm)
Brandt (1998) | Energy International top management simulation | Wonderlic Personnel Test | Time taken to identify correct candidate for position
Clayton (1998) | SouthEast Airlines top management simulation | Wonderlic Personnel Test | Profit (from algorithm)
Colarelli and Boos (1992) | Evaluation of a personnel program | Composite of global ACT and GPA | Average score on course project (two raters)
Devine (1999) | SouthEast Airlines top management simulation | Wonderlic Personnel Test | Profit (from algorithm)
Fiedler and Meuwese (1963) | Operating antiaircraft artillery guns | AGCT scores | Commander rankings
Gully (1997) | TEAM TANDEM military command and control simulation | Wonderlic Personnel Test | Quality and quantity of target aircraft classification
Hollenbeck et al. (1999) | Dynamic distributed decision-making military command and control simulation | Wonderlic Personnel Test | Computer-generated performance score
LePine, Hollenbeck, Ilgen, and Hedlund (1997) | TIDE2 military command and control simulation | ACT/SAT | Accuracy of target aircraft classification
Neuman and Wright (1999) | Human resource problem solving | Thurstone Test of Mental Alertness | Supervisor ratings (from three raters)
O'Brien and Owens (1969) (1) | Collective letter writing for recruitment | AGCT | Composite rating of quality (based on 5 dimensions)
O'Brien and Owens (1969) (2) | Constructing a chart | AGCT | Number of errors made
O'Brien and Owens (1969) (3) | Generating a short fictional story | ACT-English | Composite ratings of quality (based on 5 dimensions)
O'Brien and Owens (1969) (4) | Generating a short fictional story | ACT-English | Composite ratings of quality (based on 5 dimensions)
O'Brien and Owens (1969) (5) | Generating a short fictional story | ACT-English | Composite ratings of quality (based on 5 dimensions)
O'Connell (1994) | Automotive construction, paint and assembly; maintenance | GATB composite | Supervisor rating of overall performance (based on 6 dimensions)
Stevens, Jones, Fischer, and Kane (1999) | Manual/technical jobs | SRA verbal subtest | Supervisor ratings of effectiveness
Sundstrom and Futrell (1999) | Automotive headlamp assembly | IRT/DAT composite | Number of units assembled correctly
Tziner and Vardi (1983) | Operating military tanks | IDF composite | Commander ranking of effectiveness
W. M. Williams and Sternberg (1988) | Brainstorming | Henmon-Nelson Test of Mental Ability | Overall rating of solution quality
Zukin (1999) | Solving murder mystery | Wonderlic Personnel Test | Correct answer (Yes or No)

NOTE: ACT = American College Test; GPA = grade point average; AGCT = Army General Classification Test; SAT = Scholastic Assessment Test; GATB = General Aptitude Test Battery; SRA = Science Research Associates Survey of Basic Skills; IRT = Industrial Reading Test; DAT = Differential Aptitude Test; IDF = Israeli Defense Force battery based on Otis-Lennon, command of Hebrew, and interview.
CODING

Five quantitative variables were coded from usable studies: (a) number of teams involved, (b) correlation between the mean cognitive ability score within the team and team performance, (c) correlation between the highest cognitive ability score within the team and team performance, (d) correlation between the lowest cognitive ability score within the team and team performance, and (e) correlation between the standard deviation of member cognitive ability scores and team performance. We did not code reliability estimates for cognitive ability or team performance for several reasons. First, many studies did not report reliability estimates for either individual-level cognitive ability or team performance. Second, cognitive ability measures tend to be very reliable, and many studies used objective performance indices (i.e., counts or standardized algorithms) that can be assumed to have near perfect interrater reliability (Mento, Steel, & Karren, 1987). Third, in many cases, correction for measurement error has a minor impact on parameters estimated via meta-analysis (Koslowsky & Sagie, 1994).

Table 2 contains effect-size information for each independent sample for the four operational definitions of team-level cognitive ability examined in this study. Effect sizes ranged from strong and positive to weak and negative. Sample sizes varied from a low of 16 (O'Brien & Owens, 1969) to a high of 514 (Sundstrom & Futrell, 1999), with the average study involving 71 teams. With regard to operational definitions, 24 samples reported a correlation between the mean cognitive ability score of team members and team performance (i.e., all but Neuman & Wright, 1999), 16 samples provided a correlation between the highest member score and team performance, 17 samples provided a correlation between the lowest member score and team performance, and 9 provided a correlation between the standard deviation of scores and team performance.


TABLE 2: Coding Information for Studies Included in Meta-Analyses

Study | N | M | HIGH | LOW | SD | Setting
Barrick, Stewart, Neubert, and Mount (1998) | 51 | .23 | .03 | .02 | .22 | Field
Blades (1976) (1) | 49 | .02 | . | . | . | Field
Blades (1976) (2) | 51 | .22 | . | . | . | Field
Bottoms (1998) | 50 | .22 | .24 | .08 | . | Lab
Brandt (1998) | 54 (52)a | .32 | .14 | .17 | .05 | Lab
Clayton (1998) | 55 | .21 | .07 | .21 | .15 | Lab
Colarelli and Boos (1992) | 86 | .24 | . | . | . | Field
Devine (1999) | 52 | .40 | .19 | .38 | .09 | Lab
Fiedler and Meuwese (1963) | 24 | .19 | . | . | . | Field
Gully (1997) | 81 | .38 | . | . | . | Lab
Hollenbeck et al. (1999) | 76 | .26 | . | . | . | Lab
LePine, Hollenbeck, Ilgen, and Hedlund (1997) | 26 | .34 | .37 | .30 | . | Lab
Neuman & Wright (1999) | 79 | . | . | .33 | . | Field
O'Brien & Owens (1969) (1) | 20 | .58 | .48 | .56 | . | Lab
O'Brien & Owens (1969) (2) | 20 | .13 | .12 | .12 | . | Lab
O'Brien & Owens (1969) (3) | 16 | .52 | .32 | .49 | . | Lab
O'Brien & Owens (1969) (4) | 16 | .03 | .15 | .04 | . | Lab
O'Brien & Owens (1969) (5) | 16 | .52 | .19 | .56 | . | Lab
O'Connell (1994) | 118 | .13 | .07 | .16 | .16 | Field
Stevens (1999) | 56 | .29 | . | . | .20 | Field
Sundstrom and Futrell (1999) (1) | 117 | .43 | .30 | .28 | .10 | Lab
Sundstrom and Futrell (1999) (2) | 514 | .40 | .26 | .34 | .05 | Lab
Tziner and Vardi (1983) | 115 | .30 | . | . | . | Field
Williams and Sternberg (1988) | 24 | .65 | .65 | .43 | . | Lab
Zukin (1999) | 62 | .26 | .26 | .15 | .09 | Lab

NOTE: N = number of teams included in the study. M = arithmetic mean of member scores; HIGH = score of the highest scoring member; LOW = score of the lowest scoring member; SD = standard deviation of member scores. A period (.) indicates that the correlation was not reported for that sample.
a. N = 54 for M and SD, and N = 52 for HIGH and LOW.

PROCEDURE

We conducted four main analyses corresponding to the following operational definitions of team-level cognitive ability: (a) arithmetic mean of member scores (M), (b) score of the lowest scoring member (LOW), (c) score of the highest scoring member (HIGH), and (d) standard deviation of member scores (SD). For each main analysis, a sample-weighted mean correlation was calculated using the corresponding distribution of effects along with the following statistics: (a) chi-square test for the homogeneity of observed effect sizes (Rosenthal, 1991), (b) percentage of observed variance accounted for by sampling error (Hunter & Schmidt, 1990), (c) 95% credibility interval formed around the estimated population parameter using the corrected standard deviation (Hunter, Schmidt, & Jackson, 1982), and (d) 95% confidence interval formed around the estimated population parameter based on the standard deviation of the sampling distribution (Whitener, 1990).

The first three statistics were used to evaluate the likelihood that the cognitive ability-performance relationship is moderated at the team level. The existence of one or more moderators is suggested when (a) the chi-square test for coefficient homogeneity is statistically significant, (b) the percentage of observed variance accounted for by sampling error is less than 75%, and (c) the credibility interval is large and/or includes zero. Conversely, the 95% confidence interval assesses the amount of sampling error influencing the sample-weighted mean correlation. When all of the observed variation in a distribution of coefficients is accounted for by sampling error, the credibility interval will be zero, and the confidence interval will reflect the sampling error affecting the estimate of the one true population relationship. When sampling error accounts for a relatively small portion of the observed variation in a set of coefficients, the credibility interval will be nonzero, and the confidence interval will provide a measure of the sampling error inherent in the estimate of the mean relationship across multiple subgroups.
Hunter and Schmidt (1990) noted that support for a particular moderator exists when there is (a) a meaningful difference in the estimated effect size across moderator subgroups and (b) reduced within-subgroup variation relative to the overall distribution (i.e., a high percentage of variation accounted for by sampling error, a nonsignificant chi-square, and a small credibility interval that does not include zero).
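As a rough illustration of these computations, the sketch below implements bare-bones versions of the statistics described above, with no artifact corrections (matching the authors' decision not to correct for unreliability). The input correlations are hypothetical, the function and variable names are ours, and the specific formulas (e.g., dividing by the average sample size minus one in the sampling-error variance, and a Fisher-z homogeneity test in the spirit of Rosenthal, 1991) follow common textbook presentations rather than the article's exact computations:

```python
import math

def meta_summary(rs, ns):
    """Sample-weighted meta-analytic summary of correlations rs
    observed in samples of sizes ns (bare-bones Hunter-Schmidt style)."""
    k = len(rs)
    n_tot = sum(ns)
    # Sample-weighted mean correlation
    r_bar = sum(n * r for r, n in zip(rs, ns)) / n_tot
    # Observed (sample-weighted) variance of the correlations
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in zip(rs, ns)) / n_tot
    # Variance expected from sampling error alone (average-N form)
    var_err = (1 - r_bar ** 2) ** 2 / (n_tot / k - 1)
    pct_sampling_error = 100 * var_err / var_obs
    # Residual ("corrected") SD and 95% credibility interval
    sd_rho = math.sqrt(max(0.0, var_obs - var_err))
    credibility = (r_bar - 1.96 * sd_rho, r_bar + 1.96 * sd_rho)
    # 95% confidence interval for the mean correlation itself
    se_mean = math.sqrt(var_obs / k)
    confidence = (r_bar - 1.96 * se_mean, r_bar + 1.96 * se_mean)
    # Chi-square homogeneity test on Fisher-z transformed rs (df = k - 1)
    zs = [math.atanh(r) for r in rs]
    z_bar = sum((n - 3) * z for z, n in zip(zs, ns)) / sum(n - 3 for n in ns)
    chi_sq = sum((n - 3) * (z - z_bar) ** 2 for z, n in zip(zs, ns))
    return {"r_bar": r_bar, "pct_sampling_error": pct_sampling_error,
            "credibility": credibility, "confidence": confidence,
            "chi_sq": chi_sq, "df": k - 1}

# A hypothetical distribution of team-level correlations:
print(meta_summary(rs=[.10, .25, .40], ns=[50, 100, 150]))
```

Under the decision rules above, a nonsignificant chi-square, a sampling-error percentage near or above 75%, and a tight credibility interval excluding zero would together point to a single population correlation rather than a moderated relationship.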


RESULTS

Results of the four main analyses are presented in Table 3.

TABLE 3: Summaries for Main and Moderator Meta-Analyses

Analysis | K | N | Mean r | Chi-square | % Var. (SE) | 95% Credibility Interval | 95% Confidence Interval
Operational definition
HIGH | 16 | 1,209 | .208 | 27.35* | 59.16 | .027, .389 | .138, .279
M | 24 | 1,749 | .294* | 55.87*** | 41.36 | .042, .545 | .227, .361
SD | 9 | 1,079 | -.026 | 11.45 | 77.33 | -.124, .071 | -.086, .033
LOW | 17 | 1,288 | .246 | 39.54*** | 40.73 | -.010, .505 | .166, .328
Study-setting moderator (M index)
Lab | 16 | 1,199 | .368* | 15.65 | 103.36 | . | .317, .415
Field | 8 | 550 | .137 | 17.63* | 44.31 | -.123, .339 | .013, .261

NOTE: K = number of samples in analysis; N = number of teams in analysis; Mean r = mean sample-weighted correlation; Chi-square = chi-square value for homogeneity of coefficients; % Var. (SE) = percentage of observed variance accounted for by sampling error; HIGH = score of the highest scoring member; M = arithmetic mean of member scores; SD = standard deviation of member scores; LOW = score of the lowest scoring member. A period (.) indicates a credibility interval of zero width because sampling error accounted for all observed variance.
*p < .05. **p < .01. ***p < .001.

As is evident from inspection of the table, the M operational definition yielded the strongest estimated relationship with team performance (r = .294), followed by the LOW (r = .246) and HIGH (r = .208) indices, whereas the SD index produced a very weak and negative estimate (r = -.026). Furthermore, 95% confidence intervals created around the sample-weighted means for all three operational definitions involving level did not include zero, suggesting that the observed criterion-related validity associated with these indices will generally be positive. However, the confidence intervals for all three operational definitions did overlap, so our expectation that the mean score would produce a stronger observed relationship received only limited support (i.e., from the point estimates). Overall, these data are consistent with the notion that the three operational definitions involving level have positive relationships with team performance, but none is clearly superior. In contrast, the 95% confidence interval for the SD index included zero, consistent with the notion that there may well be no relationship between the dispersion of member cognitive ability scores and team effectiveness.

Table 3 also contains evidence that other variables moderate the strength of these team-level relationships. Specifically, the chi-square tests for homogeneity in the observed coefficients associated with the M, HIGH, and LOW analyses were statistically significant, indicating that variability in the distribution of coefficients was significantly greater than what would be expected by chance. In addition, sampling error accounted for less than 59% of the observed variance in sample-weighted effect sizes for these three analyses, in all cases below the 75% value suggested by Hunter et al. (1982) as sufficient to conclude that one true effect characterizes the relationship in the population as a whole. Finally, 95% credibility intervals created around each sample-weighted mean correlation were relatively large for all three estimates and included zero for the LOW index. In light of this consistency, there is good reason to suspect that the magnitude of the team-level relationship between cognitive ability and performance varies across situations for all three operational definitions involving level. Considering the weak estimated relationship, the narrow confidence interval that includes zero, and the relatively large amount of observed variance accounted for by sampling error, it appears from these data that work group diversity in terms of cognitive ability is essentially unrelated to team performance in most situations.

Given these data, we proceeded to look for potential moderators of the team-level relationship. We initially hoped to code the studies in our database for several task characteristics, but this proved impossible because (a) most studies provided only a cursory description of the task and (b) study characteristics that could be reliably coded from available information tended to be highly correlated, making it impossible to isolate their effects. In particular, studies conducted in field settings tended to involve standing teams engaged in familiar behavioral tasks with long time frames; conversely, lab studies tended to use novel intellectual tasks, short time frames, and ad hoc groups of students who had little familiarity with one another. As a result, we opted to code only one surface characteristic: study setting. Lab studies were defined as those that took place under controlled conditions wherein the focal task was created solely for the purpose of scientific investigation and had no intrinsic importance or meaningfulness. Field studies were defined as any study that did not meet the definition of a lab study. Furthermore, given the extremely small number of field studies that reported correlations for the HIGH and LOW indices, we examined the study-setting moderator using only the distribution of coefficients associated with the M index.
As seen in Table 3, the study-setting moderator provides a useful starting point for understanding the nature of the team-level relationship between cognitive ability and performance. With regard to the magnitude of the team-level relationship, the two subgroups exhibited a large difference: The estimated relationship was considerably stronger in lab studies (r = .37) than in actual organizational settings (r = .14). Furthermore, all the observed variation in study coefficients associated with lab studies was explained by sampling error, and the chi-square test for heterogeneity was nonsignificant (a strong test according to Hunter & Schmidt, 1990). The study-setting moderator did not fully account for within-category variation among the field studies, however. The observed variance explained by sampling error was only slightly greater for field studies (44%) than for all studies combined (42%), and the chi-square test for heterogeneity was statistically significant as well. Thus, the study-setting variable did yield a sizeable difference in the magnitude of the sample-weighted estimates of the relationship but resulted in one homogeneous subgroup and one heterogeneous subgroup with evidence of further moderation. Overall, there appears to be one population value characterizing the team-level relationship for mean scores in laboratory studies but one or more moderators influencing the relationship in field settings.

DISCUSSION
SUMMARY

The main and moderator analyses reported above are consistent with four general conclusions: (a) the M, HIGH, and LOW operational definitions of team-level cognitive ability are positively correlated with measures of team performance for most if not all tasks in the population; (b) for all three of these indices, the magnitude of the positive relationship is moderated by other variables; (c) mean cognitive ability is a much weaker predictor of team performance in organizations as opposed to the lab; and (d) the standard deviation of member cognitive ability scores generally appears to be unrelated to team performance. Before examining these conclusions in more detail, it is important to note that the number of coefficients (i.e., samples) is smaller than in the typical individual-level meta-analysis. Although meta-analytic techniques can correct for the random effects of sampling error, they cannot address systematic bias resulting from unrepresentative sampling of the domain of tasks, methodologies, and contexts. Given the lack of a comprehensive framework identifying the effects of task characteristics, it is impossible to gauge the representativeness of the studies included in these meta-analyses. Thus, these results should be viewed with caution, and parameter estimates should be treated as preliminary.
IMPLICATIONS

The positive relationship observed for all three operational definitions associated with cognitive ability level within a team suggests that the functional amount of cognitive ability in teams does indeed predict team performance across a broad variety of team contexts. All other things being equal, the cognitive ability of the most intelligent member accounts for roughly 4.3% of the variance in team performance, the cognitive ability of the least intelligent member explains about 6.1%, and the mean cognitive ability of team members captures approximately 8.6% of the variance in team performance (i.e., twice as much as the cognitive ability of the most intelligent member). Furthermore, in situations where it is desirable to predict how well a team will perform, it appears more valuable to know the mean level of cognitive ability of members than the score of the highest or lowest scoring individual.

Perhaps the most important finding of this study, however, was the strong evidence of moderation affecting the team-level relationship for all three operational definitions associated with level. With regard to composing work groups, the main analyses indicate the relationship between mean cognitive ability and team performance varies across situations. Furthermore, the moderator analysis findings suggest that in organizational settings, the predictive efficacy of team-level cognitive ability will be weak in some team contexts and nonexistent in others. Overall, there is little evidence here to suggest that cognitive ability tests represent a panacea for practitioners charged with assembling effective work groups. This, of course, raises the question of which characteristics are responsible for the variation in the team-level relationships. Few empirical studies in the literature have examined potential moderators, but theory and empirical research suggest a potential role for the following: (a) task complexity, (b) degree of physical activity, and (c) task familiarity.
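The variance percentages above are simply squared correlations (coefficients of determination). A quick sketch using the endpoints of the .21 to .29 range reported for the level indices; small discrepancies from the article's figures presumably reflect rounding of the underlying correlations.

```python
def variance_explained(r: float) -> float:
    """Share of criterion variance accounted for by a predictor
    correlating r with the criterion (i.e., r squared)."""
    return r ** 2

# Endpoints of the reported .21-.29 range for the level indices:
print(round(100 * variance_explained(0.21), 1))  # ~4.4% of performance variance
print(round(100 * variance_explained(0.29), 1))  # ~8.4%
```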
Task complexity is a function of the number of information cues and behavioral acts required of a task, as well as the coordination and adaptation required of members (Wood, 1986). Given that task complexity moderates the relationship between cognitive ability and performance for individuals (Hunter & Hunter, 1984), team-level indices of cognitive ability may be more strongly related to team performance on complex tasks than simple tasks. With regard to physical activity, intellectual tasks (e.g., planning, decision making, problem solving) differ from behavioral tasks (e.g., production, assembly, maintenance) in that the latter involve substantial movement and coordination of team members as well as their tools or equipment (McGrath, 1984). Although information-processing demands are certainly present in both types of tasks, team-level cognitive ability should be more strongly related to team performance in intellectual tasks than behavioral tasks due to the strong influence of psychomotor and physical abilities in the latter. Finally, regarding task familiarity, research by Kanfer and Ackerman (1989) suggests the relationship between cognitive ability and task performance decreases over time for individuals as they acquire more experience with a task. Essentially, they argued that cognitive ability is important in the early stages of learning a new task but becomes progressively less important as knowledge is acquired and skills become proceduralized. Extending this argument to the team level, the correlation between team-level cognitive ability and team performance should be highest for novel tasks and should decrease over time (i.e., repetitions or cycles) as team members become more familiar with their roles and the roles of other members. Overall, to the extent several task moderators are operating, there will likely be situations where team-level cognitive ability indices are strongly correlated with team performance and other situations where the two are unrelated or perhaps even negatively related.
Combining the empirical evidence of moderation and the theoretical discussion above, team-level cognitive ability might be expected to yield its lowest predictive validity for performance when team tasks are simple, familiar, and behavioral. Applying this logic to organizational settings, team-level cognitive ability may be only marginally useful for predicting the effectiveness of standing work teams engaged in repetitive, standardized activities (e.g., production or assembly teams or maintenance crews). On the other hand, given the strong relationship found with intellectual tasks in the lab, team-level cognitive ability may be a better predictor of performance for ad hoc teams facing a relatively complex task with a finite life span (e.g., selection committees, research and development teams, or quality circles). Furthermore, there may be higher order interactions among the task characteristics such that the effect of one task characteristic depends on the level of another. For instance, the effect of task familiarity could be more pronounced in complex tasks as opposed to simple tasks.

Finally, the results of the main analysis for the standard deviation index suggest that the dispersion of cognitive ability in teams is not an important concern for researchers or practitioners. It should be noted that the sample size for the SD analysis was the smallest of the four main analyses, but these studies did show reasonable heterogeneity with regard to study setting, and there was relatively little variation in the observed coefficients across studies. Combined with the lack of a compelling theoretical rationale for why the dispersion of cognitive ability should be important, it is unlikely that a substantial relationship exists between cognitive ability dispersion in teams and team performance in general. At the same time, depending on the nature of the task, it is conceivable that variation in member cognitive ability (or lack thereof) might be related to team performance in some situations.
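For concreteness, the four operational definitions examined in these meta-analyses can be computed from a list of member test scores roughly as follows. This is a sketch; whether a given primary study used the population or the sample standard deviation is not specified here.

```python
import statistics

def team_ability_indices(scores):
    """Four team-level operational definitions of member cognitive
    ability: highest score, lowest score, mean, and dispersion (SD)."""
    return {
        "HIGH": max(scores),
        "LOW": min(scores),
        "M": statistics.mean(scores),
        # Population SD shown here; some primary studies may use the sample SD.
        "SD": statistics.pstdev(scores),
    }
```

Note that the three level indices (HIGH, LOW, M) all rise when every member's score rises, whereas the SD index is the only one that isolates diversity in ability.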
FUTURE RESEARCH DIRECTIONS

Given the previous discussion, a good first step would be to identify the team contexts where the cognitive ability of team members is most strongly related to team performance. The hypotheses implied above could be tested in controlled settings by having teams engage in tasks that vary along key task dimensions, allowing examination of potential interactions among task characteristics and changes over time. Alternatively, given the complex contexts and multiple task responsibilities associated with real work groups, it might be possible to agree on a taxonomy of work group types found in organizational settings (e.g., Sundstrom, 1999) and then examine the different types in field settings. This would sacrifice precision related to identifying operative characteristics in favor of greater realism and enhanced generalizability. Taken together, the two strategies could further our understanding of the micro task characteristics that affect the team-level relationship as well as the macro team contexts in which team-level cognitive ability is most important.

In addition to identifying the conditions when it is most and least important to maximize the cognitive ability of work groups, it would also be helpful to identify particular roles where cognitive ability is most important once major team types have been distinguished. Borrowing Steiner's (1972) emphasis on the criterion, it might be useful to use cognitive ability measures to predict individual role performances and then focus on building compilation models relating individual role performances to team performance in specific contexts (Kozlowski & Klein, 2000). In particular, member role performances could be represented by main effects, and dyadic and triadic performance-related exchanges by higher order interaction terms (i.e., A × B, A × B × C). Given the strong link between individual-level cognitive ability and job (i.e., role) performance, specific roles or role exchanges could be identified where cognitive ability is most important in particular team types with fairly standard roles (e.g., sports teams or top management groups). For instance, if a strong effect were obtained for the A × B interaction term, this would suggest that the quality of the task-related exchange between Members A and B was strongly related to overall team performance. Given that cognitive ability is a good predictor of individual performance, a manager or executive could then try to ensure that two of the more intelligent members occupied these roles on the team.
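The compilation idea above can be made concrete: build a design matrix whose columns are member role performances (main effects) plus their pairwise and higher order products (exchange terms), and regress team performance on those columns. A minimal sketch of the feature construction; member labels and scores are hypothetical, and the regression step itself is omitted.

```python
from itertools import combinations

def compilation_features(role_scores):
    """Main-effect and interaction terms built from individual role
    performance scores: each k-way product stands in for a
    performance-related exchange among those k members."""
    names = sorted(role_scores)
    feats = {}
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            val = 1.0
            for member in combo:
                val *= role_scores[member]
            feats["x".join(combo)] = val
    return feats
```

For a three-member team this yields seven predictors (A, B, C, AxB, AxC, BxC, AxBxC); a strong coefficient on the AxB column would flag the exchange between Members A and B as critical to team performance.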
In addition to this focus on settings and specific member roles, practitioners could also benefit from more research on how team-level cognitive ability relates to team dynamics. In particular, it would be useful to have a better understanding of how the cognitive ability of members affects team outcomes via process variables such as task-related communication, behavioral coordination, interpersonal conflict, encouragement, and performance monitoring. To some extent, it may be possible to train or modify procedures in teams with low cognitive ability (or work groups with deficits in key positions) to provide some of the advantages of the more able teams. Future research could address this question by gathering team-level data on conflict and other process variables using behavioral checklists, expert ratings, or aggregated self-reports from team members themselves (Weingart, 1997).

A final question that arises from this study is the impact of the criterion. Because of the focus in the existing literature, we were limited to examining the relationship between team-level cognitive ability and task-related performance. It would be interesting to learn if team-level cognitive ability indices are predictive of other outcomes such as collective efficacy, cohesion, and viability. To date, we know of no study that has looked at the team-level relationship between cognitive ability and any outcome other than task-related performance.

Summary. Figure 1 presents a visual summary of the issues examined and raised in this study. The figure can be divided into two horizontal levels, the lower dealing with individuals and the upper dealing with teams as a whole. Starting at the lower left, cognitive ability is an individual difference characteristic that can vary among team members; team-level indices of the cognitive ability of members can be formed in various ways through an aggregation process and have the potential to be related to team-level processes, which in turn determine outcomes such as team effectiveness and viability. Task characteristics such as familiarity, complexity, and physical activity may influence the nature of the relationship between team-level cognitive ability and team performance in two ways: (a) by affecting the extent to which the cognitive ability of members affects team process and (b) by affecting the extent to which team-level process variables affect team outcomes.
Alternatively, individual cognitive ability could be used to predict the role performances of members, and then the role performances of members (and their interactions) could be weighted (i.e., aggregated) in different ways to predict team outcomes.


Figure 1: A Conceptual Model of the Cognitive Ability-Team Performance Relationship

[Figure: Individual Cognitive Ability feeds, via aggregation, Team-Level Cognitive Ability (LOW, HIGH, MEAN, SD), which relates to Team Process (Information Sharing, Information Integration, Conflict, Coordination) and, in turn, Team Outcomes (Effectiveness, Viability); Task Characteristics (Complexity, Physical Activity, Familiarity) moderate these links. A parallel lower path runs from Individual Cognitive Ability to Individual Role Performance.]

CONCLUSION

Team-level cognitive ability appears to be positively related to team performance for three operational definitions that focus on amount, but the magnitude of all three relationships appears to vary as a function of one or more contextual variables. Although we have highlighted several moderator candidates pertaining to the task, there is currently not enough data available to identify the factors responsible for the observed variation in these relationships. Most important, the findings from this study suggest that team-level cognitive ability indices may not be good predictors of team performance in some organizational settings. More research is needed to identify the team contexts where cognitive ability is most and least useful as a predictor of team performance.

REFERENCES
Barrick, M. R., Stewart, G. L., Neubert, M. J., & Mount, M. K. (1998). Relating member ability and personality to work-team processes and team effectiveness. Journal of Applied Psychology, 83(3), 377-391.


Blades, J. W. (1976). The influence of intelligence, task ability and motivation on group performance (Unpublished doctoral dissertation, University of Washington). Dissertation Abstracts International, 37, 1463.
Bottoms, N. (1998). Group decision-making in a simulated business environment. Unpublished bachelor's honors thesis, Indiana University-Purdue University, Indianapolis.
Brandt, K. M. (1998). The effects of personality compatibility on team processes and team performance. Unpublished master's thesis, Indiana University-Purdue University, Indianapolis.
Clayton, L. D. (1998). The effects of gender composition and task complexity on group processes and performance. Unpublished master's thesis, Indiana University-Purdue University, Indianapolis.
Cohen, S. G., & Bailey, D. E. (1997). What makes teams work: Group effectiveness research from the shop floor to the executive suite. Journal of Management, 23, 239-290.
Colarelli, S. M., & Boos, A. L. (1992). Sociometric and ability-based assignment to work groups: Some implications for personnel selection. Journal of Organizational Behavior, 13, 187-196.
Devine, D. J. (1999). Effects of cognitive ability, task knowledge, information sharing, and conflict on group decision-making effectiveness. Small Group Research, 30, 608-634.
Fiedler, F. E., & Meuwese, W.A.T. (1963). Leader's contribution to task performance in cohesive and uncohesive groups. Journal of Abnormal and Social Psychology, 67, 83-87.
Gully, S. M. (1997). The influences of self-regulatory processes on learning and performance in a team training context. Unpublished doctoral dissertation, Michigan State University, East Lansing.
Gurnee, H. (1937). Maze learning in the collective situation. Journal of Psychology, 3, 437-443.
Guzzo, R. A. (1996). Fundamental considerations about work groups. In M. A. West (Ed.), Handbook of work group psychology (pp. 3-21). San Francisco: Jossey-Bass.
Guzzo, R. A., & Dickson, M. W. (1996). Teams in organizations: Recent research on performance and effectiveness. Annual Review of Psychology, 47, 307-338.
Hinsz, V. B., Tindale, R. S., & Vollrath, D. A. (1997). The emerging conceptualization of groups as information processors. Psychological Bulletin, 121, 43-64.
Hollenbeck, J. R., Ilgen, D. R., Sheppard, L., Ellis, A., Moon, H., & West, B. (1999, May). Person-team fit: A structural approach. Paper presented at the 14th Annual Meeting of the Society for Industrial and Organizational Psychology, Atlanta, GA.
Hunter, J. E., & Hunter, R. F. (1984). Validity and utility of alternative predictors of job performance. Psychological Bulletin, 96, 72-98.
Hunter, J. E., & Schmidt, F. L. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park, CA: Sage.
Hunter, J. E., Schmidt, F. L., & Jackson, G. B. (1982). Meta-analysis: Cumulating research findings across studies. Beverly Hills, CA: Sage.
Ilgen, D. R. (1999). Teams embedded in organizations: Some implications. American Psychologist, 54, 129-139.
Jackson, S. E. (1996). The consequences of diversity in multidisciplinary work teams. In M. A. West (Ed.), Handbook of work group psychology (pp. 53-76). Chichester, UK: Wiley.
Kabanoff, B., & O'Brien, G. E. (1979). Cooperation structure and the relationship of leader and member ability to group performance. Journal of Applied Psychology, 64, 526-532.
Kanfer, R., & Ackerman, P. L. (1989). Motivation and cognitive abilities: An integrative/aptitude-treatment interaction approach to skill acquisition. Journal of Applied Psychology, 74, 657-690.
Klein, K. J., Dansereau, F., & Hall, R. J. (1994). Levels issues in theory development, data collection, and analysis. Academy of Management Review, 19, 195-229.
Koslowsky, M., & Sagie, A. (1994). Components of artifactual variance in meta-analytic research. Personnel Psychology, 47, 561-574.
Kozlowski, S.W.J., & Klein, K. J. (2000). A multilevel approach to theory and research in organizations: Contextual, temporal, and emergent processes. In S.W.J. Kozlowski & K. J. Klein (Eds.), Multilevel theory, research, and methods in organizations: Foundations, extensions, and new directions (pp. 3-90). San Francisco: Jossey-Bass.
Larson, J. R., & Christensen, C. (1993). Groups as problem-solving units: Toward a new meaning of social cognition. British Journal of Social Psychology, 32, 5-30.
LePine, J. A., Hollenbeck, J. R., Ilgen, D. R., & Hedlund, J. (1997). Effects of individual differences on the performance of hierarchical decision-making teams: Much more than g. Journal of Applied Psychology, 82(5), 803-811.
McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice Hall.
Mento, A. J., Steel, R. P., & Karren, R. J. (1987). A meta-analytic study of the effects of goal setting on task performance. Organizational Behavior and Human Decision Processes, 39, 52-83.
Milliken, F. J., & Martins, L. L. (1996). Searching for common threads: Understanding the multiple effects of diversity in organizational groups. Academy of Management Review, 21, 402-433.
Neisser, U., Boodoo, G., Bouchard, T. J., Jr., Boykin, A. W., Brody, N., Ceci, S. J., Halpern, D. F., Loehlin, J. C., Perloff, R., Sternberg, R. J., & Urbina, S. (1996). Intelligence: Knowns and unknowns. American Psychologist, 51, 77-101.
Neuman, G. A., & Wright, J. (1999). Team effectiveness: Beyond skills and cognitive ability. Journal of Applied Psychology, 84, 376-389.
O'Brien, G. E., & Owens, A. G. (1969). Effects of organizational structure on correlations between member abilities and group productivity. Journal of Applied Psychology, 53, 525-530.
O'Connell, M. S. (1994). The impact of team leadership on self-directed work team performance: A field study (Unpublished doctoral dissertation, University of Akron). Dissertation Abstracts International, 55, 1699.
Robinson, W. S. (1950). Ecological correlations and the behavior of individuals. American Sociological Review, 15, 351-357.
Rosenthal, R. (1991). Meta-analytic procedures for social research. Newbury Park, CA: Sage.
Rousseau, D. M. (1985). Issues of level in organizational research. In L. L. Cummings & B. Staw (Eds.), Research in organizational behavior (Vol. 7, pp. 1-37). Greenwich, CT: JAI.
Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262-274.
Spector, P., & Suttell, B. J. (1957). An experimental comparison of the effectiveness of three patterns of leadership behavior (Unpublished technical report, AIR-196-57-FR-164). Washington, DC: American Institute for Research.
Steiner, I. D. (1972). Group process and productivity. New York: Academic Press.
Sternberg, R. J. (1997). The concept of intelligence and its role in lifelong learning and success. American Psychologist, 52, 1030-1037.
Stevens, M. J., Jones, R. G., Fischer, D. L., & Kane, T. D. (1999). Team performance and individual effectiveness: Personality and team context. Paper presented at the 14th Annual Meeting of the Society for Industrial and Organizational Psychology, Atlanta, GA.
Sundstrom, E. (1999). The challenges of supporting work team effectiveness. In E. Sundstrom (Ed.), Supporting work team effectiveness: Best management practices for fostering high performance (pp. 3-23). San Francisco: Jossey-Bass.
Sundstrom, E., & Futrell, D. A. (1999). Work group productivity and general mental ability: Two field studies of group composition. Unpublished manuscript.
Terborg, J. R., Castore, C., & DeNinno, J. A. (1976). A longitudinal field investigation of the impact of group composition on group performance and cohesion. Journal of Personality and Social Psychology, 34(5), 782-790.
Thorndike, E. L. (1939). On the fallacy of imputing the correlations found for groups to the individuals or smaller groups composing them. American Journal of Psychology, 52, 122-124.
Tziner, A., & Eden, D. (1985). Effects of crew composition on crew performance: Does the whole equal the sum of its parts? Journal of Applied Psychology, 70(1), 85-93.
Tziner, A., & Vardi, Y. (1983). Ability as a moderator between cohesiveness and tank crew performance. Journal of Occupational Behavior, 4, 137-143.
Wagner, R. K. (1997). Intelligence, training, and employment. American Psychologist, 52, 1059-1069.
Weingart, L. R. (1997). How did they do that? The ways and means of studying group process. Research in Organizational Behavior, 19, 189-239.
Whitener, E. M. (1990). Confusion of confidence intervals and credibility intervals in meta-analysis. Journal of Applied Psychology, 75, 315-321.
Williams, K. Y., & O'Reilly, C. A., III. (1998). Demography and diversity in organizations: A review of 40 years of research. Research in Organizational Behavior, 20, 77-140.
Williams, W. M., & Sternberg, R. J. (1988). Group intelligence: Why some groups are better than others. Intelligence, 12, 351-377.
Wonderlic Personnel Test. (1992). Wonderlic Personnel Test & Scholastic Level Exam user's manual. Milwaukee, WI: Author.
Wood, R. E. (1986). Task complexity: Definition of the construct. Organizational Behavior and Human Decision Processes, 37, 60-82.
Zukin, L. (1999). Individual differences and individual-in-team performance: An investigation of the impact of personality and ability in a decision making team with distributed information. Unpublished doctoral dissertation, George Mason University, Fairfax, VA.

Dennis J. Devine is an assistant professor of industrial and organizational psychology at Indiana University-Purdue University, Indianapolis. He received his Ph.D. in industrial and organizational psychology from Michigan State University in 1996. His research interests include team dynamics and effectiveness, jury decision making, expert-novice differences in job performance, and organizational recruitment.


Jennifer L. Philips is a 1st-year graduate student in the industrial and organizational psychology Ph.D. program at the University of Akron. Her research interests include the effects of team member composition on team effectiveness, group decision making, legal influences on human resources decision making, organization culture or climate, and organizational citizenship behaviors.
