An Integrative Model of Competency Development, Training Design, Assessment Center, and Multi-Rater Assessment
Hsin-Chih Chen Sharon S. Naquin
The problem and the solution. Although the assessment center has proven effective in predicting performance, the issue of establishing the construct-related validity of assessment center remains unsolved, resulting in an unmet research challenge. Woehr and Arthur asserted that the lack of construct-related validity in the assessment center literature is primarily due to issues of design and development. This article focuses on the design aspect of assessment center to develop an integrative competency-based assessment center model that links competency development, training design, assessment center, and multi-rater assessment. Built around the validity (particularly construct-related) issues of assessment center, the model guides scholarly practitioners on how to design a competency-based assessment center that has the potential to improve construct-related validity and the capability to be built into training design and assessment and other human resource functions. Nine propositions related to validity were developed in accordance with the model to evoke future research. Practical implications are also provided. Keywords: assessment center; competency modeling; performance assessment
A review of related literature indicates that researchers have not reached a clear definition of competency. The term sometimes refers to outputs of competent performers and sometimes refers to underlying characteristics that enable an individual to achieve outstanding performance (Dubois & Rothwell, 2004; Hoffmann, 1999; McLagan, 1997). Most definitions, however, relate to exemplary performers or performance in a specific job or job level (Boyatzis, 1982), whereas a relevant term, core competency, is tied to strategic, future-oriented,
collective functions at the organizational level (Moingeon & Edmondson, 1996; Prahalad & Hamel, 1990). Thus, we have adopted an overarching perspective that combines both the performance and strategic aspects associated with the various definitions found in the literature. We consider competency to refer to the underlying individual work-related characteristics (e.g., skills, knowledge, attitudes, beliefs, motives, and traits) that enable successful job performance, where successful is understood to be in keeping with the organization's strategic functions (e.g., vision, mission, uniqueness, future orientation, success, or survival). A related construct, competency development or competency modeling, refers to the process of identifying a set of competencies representative of job proficiency. With the generic term just defined, competency development can enhance various human resources (HR) and organizational development activities, including personnel selection, job promotion, training and development, training needs analyses, performance appraisal, individual career planning, HR planning, placement, strategic planning, succession planning, compensation, and recruitment (Byham & Moyer, 2004; Howard, 1997; Lucia & Lepsinger, 1999).

Assessment strategies and methodologies are, understandably, closely related to competency, and a common assessment strategy is the use of the assessment center. An assessment center is not a brick-and-mortar research center or building. It is, rather, an abstract concept that exists in practice and refers to standardized procedures used for assessing behavior-based or performance-based dimensions whereby participants are assessed using multiple exercises and/or simulations (Thornton, 1992). Common simulation exercises used in an assessment center setting include oral presentations, leaderless group discussions, role-playing, in-basket exercises, oral fact-finding, business games, and integrated simulations (Thornton & Mueller-Hanson, 2004). Dimensions for assessment (equated to competencies here) are usually identified through job analysis. It should be noted, however, that although the terms job analysis and competency modeling are often used interchangeably, the two differ in terms of assessment of reliability, strategic focus, and expected outcome (Shippmann et al., 2000).
Research Problems
Research on assessment center has evolved over the past few decades as researchers have moved from focusing on an understanding of what an assessment center is and how it works to establishing some criterion-related validity and generalizability (Howard, 1997). However, the issue of establishing construct-related validity of assessment center is still unsolved, resulting in an unmet research challenge (Robie, Osburn, Morris, Etchegaray, & Adams, 2000). Construct-related validity refers to the degree to which a theoretical concept is operationalized and the degree to which the operational inference exhibits
consistency with what a researcher intends to measure. In the current assessment center literature, it fundamentally refers to discrepancies between competencies and the measures that are used to demonstrate those competencies in assessment center activities. Woehr and Arthur (2003) asserted that the lack of construct-related validity in the assessment center literature is primarily due to issues of design and development. This challenge also clearly relates to an ongoing debate as to whether the design of assessment center should be based on dimensions and competencies or on tasks and exercises (Byham, 2004; Howard, 1997; Joyce, Thayer, & Pond, 1994; Lowry, 1995).

On the other hand, as mentioned, although assessment center has received a wide range of applications in HR-related functions, the applications appear to be piecemeal and not systematically connected. More important, its utilization in human resource development (HRD) practice is relatively sparse (Chen, 2006 [this issue]). An obvious and immediately useful application of assessment center to HRD is for assessing the effectiveness of competency-based training. Because HRD is deeply rooted in the design and development of learning activities across various levels, integrating the HRD perspective into assessment center has strong potential to contribute to the assessment center literature by resolving the construct-related validity issues of assessment center. Meanwhile, the application of the assessment center to HRD can also help the HRD field, particularly the design of training assessment, move further away from cognitive or reactive assessments toward behavioral assessment, a more reliable measure.

Another issue in the assessment center literature is that although changes in explicit individual behavior can be readily observed through assessment center activities, the ability to assess implicit behavior (e.g., motivation, emotion, beliefs, values, and visions) through an assessment center is limited. In contrast, multi-rater assessment, such as dual-ratings assessment, can potentially be more effective in assessing implicit behavioral competencies, but these methods are not able to provide the level of information regarding tangible outcomes that assessment center can. This is primarily because assessment center typically involves observation of outcomes or performance behaviors, whereas the multi-rater assessment method relies on perceptions and/or memories of behavior. Accordingly, assessment center and multi-rater assessment seem to complement each other perfectly (Howard, 1997).
Research Purposes
The purpose of this article is to develop a competency-based assessment center design model that integrates competency development, training design, assessment center activities, and multi-rater assessment strategies. Because of this integration, the competency-based assessment center evidently differs from the traditional assessment center in scope. As mentioned, we have adopted an overarching definition of competency that includes the organization's strategic
functions, so the traditional assessment center, which mainly serves selection and promotion purposes, no longer satisfies the extended scope. Indeed, the traditional assessment center is developed through job analysis to identify individual work-related characteristics. Such a mechanism is oriented to the present; it has often overlooked, or has limited ability to respond appropriately to, an organization's strategic, future-oriented functions. By contrast, a competency-based assessment center rooted in competency development and integrated with multi-rater assessment can overcome or compensate for this limitation. The model attempts to serve multiple purposes. First, it introduces a systematic approach to linking competency development, training design, assessment center strategies, and multi-rater assessment. Second, it provides a design process that has the potential to enhance the construct-related validity of an assessment center. Third, the model helps develop a set of propositions for future research.
Conceptual Framework
The model is guided by best practice and research in competency-based development, training design, assessment center, and multi-rater assessment. It is important to note that the following notions are not intended to be comprehensive. Instead, they provide readers with a generic understanding of how the model is framed. Only key concepts that underpin the purpose of this article are included.

Competency-Based Development

Common practice in competency development is to use quantitative and/or best practice approaches to develop a set of competencies characterized by individual skills, knowledge, behaviors, and traits. The quantitative approach identifies exemplary performers in a specific job and the characteristics that underlie their successful performance on the job (e.g., Spencer & Spencer, 1993). The best practice approach adopts an existing competency model (e.g., leadership skills identified by a benchmarked organization or institute) and is often followed by a dynamic customization of the competencies for use in a particular organization (e.g., Naquin & Holton, 2003).

Competency-Based Training Design

A generic difference between traditional and competency-based training designs is that the former is learning focused whereas the latter is performance based. Accordingly, competency-based training must be tied to work-related performance outcomes such as transfer of learning or behavior change. Blank (1982) identified four major characteristics of competency-based
programs that are essential for competency-based training design: outcome driven, trainee centered, task mastering, and a high level of proficiency in a job-related setting.

Assessment Center

As mentioned, the assessment center is a standardized procedure used for assessing behavior-based or performance-based dimensions whereby participants are assessed using multiple exercises and/or simulations. According to Joiner (2000), an assessment center should include 10 key components: (a) job analysis, (b) behavior classification, (c) assessment techniques, (d) multiple assessments, (e) simulations, (f) assessors, (g) assessor training, (h) recording behavior, (i) reports, and (j) data integration. Common errors of assessment centers (Caldwell, Thornton, & Gruys, 2003) were also considered in developing the model. These errors, as described by Caldwell et al. (2003), include (a) poor planning, (b) inadequate job analysis, (c) weakly defined dimensions, (d) poor exercises, (e) lack of pretest evaluation, (f) unqualified assessors, (g) inadequate assessor training, (h) inadequate candidate preparation, (i) sloppy behavior documentation and scoring, and (j) misuse of results.

Multi-Rater Assessment

Multi-rater assessment is also known as 360-degree feedback assessment or multisource assessment. Just as the assessment center has gone beyond its traditional application to selection and promotion, research on 360-degree feedback has also reached beyond its traditional application to management development, extending to other HR functions such as performance appraisal (Toegel & Conger, 2003). Multi-rater assessments collect information from individuals and from their subordinates, peers, supervisors, and customers with regard to their perceptions of the areas of interest, such as performance and developmental feedback. The process involves an individual's self-evaluation against a set of criteria and in comparison to norms derived from the other raters' assessments of the individual. In other words, multisource assessment or feedback operates through a more objective lens and is a dynamic process that provides developmental or evaluative information about one's performance or behavior. Wimer and Nowack (1998) identified 13 common mistakes in using 360-degree feedback: (a) unclear purpose, (b) using it as a substitute for managing a poor performer, (c) lack of pilot testing, (d) no key stakeholder involvement, (e) insufficient communication among people involved in the process, (f) compromising confidentiality, (g) failure to clarify how the feedback will be used, (h) insufficient resources for implementation, (i) lack of clarity about ownership of the data, (j) unfriendly administration and scoring, (k) improperly linking it to existing systems without a pilot, (l) treating it as an end rather than a process, and (m) failure to measure its effectiveness.
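To make the self-versus-others comparison at the heart of multi-rater assessment concrete, the following Python sketch aggregates other-rater scores and contrasts them with self-ratings. It is a minimal illustration only; the rater groups, criteria, and scores are invented assumptions, not data from any study cited here.

```python
# Illustrative sketch of multi-rater (360-degree) aggregation: compare
# an individual's self-rating on each criterion with the mean rating
# from all other sources. All names and scores are invented.
from statistics import mean

feedback = {
    "communicates clearly": {
        "self": [4],
        "peers": [3, 4, 3],
        "subordinates": [3, 3, 2],
        "supervisor": [4],
    },
    "supports team development": {
        "self": [5],
        "peers": [4, 4, 3],
        "subordinates": [4, 5, 4],
        "supervisor": [4],
    },
}

for criterion, sources in feedback.items():
    self_score = mean(sources["self"])
    # pool every rating that did not come from the individual
    others = [r for src, rs in sources.items() if src != "self" for r in rs]
    gap = self_score - mean(others)
    print(f"{criterion}: self={self_score:.1f}, "
          f"others={mean(others):.1f}, gap={gap:+.1f}")
```

A positive gap flags a criterion on which self-perception exceeds how others perceive the behavior, which is exactly the developmental signal multisource feedback is meant to surface.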
FIGURE 1: Practical Track (Step-by-Step Competency-Based Assessment Center Design)

The figure depicts the design steps, building a hierarchical competency system; linking subcompetencies to competency-based training design and the competency-based assessment center; developing the subcompetency-assessment center activity matrix; differentiating implicit and explicit behavior; determining appropriate competency-based assessment center activities; determining performance outcomes for activities; designing competency-based assessment center materials; and selecting and developing assessors, alongside the nine research propositions, each of which is stated in full in the text below.
Building a Hierarchical Competency System

The collected data can be analyzed through factor analysis (e.g., Naquin & Chen, 2006) to examine the relationship between competencies and subcompetencies. Caldwell et al. (2003) pointed out a common error of assessment center practices: weakly defined dimensions or competencies. However,
factor analysis can easily allow the researcher to determine how well the competencies were defined and can serve as a means to help refine the definitions of the competencies. Therefore, we develop the following proposition:
Proposition 1: Using factor analysis in addition to qualitative competency development to examine construct-related validity of competencies will help refine the definitions of the competencies and enhance the validity of competency-based assessment center.
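As a minimal illustration of how such an analysis might be run, the following Python sketch fits an exploratory factor model to hypothetical competency survey ratings. The file name, column layout, and three-factor structure are assumptions for illustration and are not taken from the authors' study.

```python
# A minimal sketch (not the authors' procedure): exploratory factor
# analysis of hypothetical competency-survey ratings. The CSV layout
# and the three-factor structure are illustrative assumptions.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Each row is one respondent; each column is one subcompetency item
# rated on, e.g., a 5-point scale.
ratings = pd.read_csv("competency_survey.csv")  # hypothetical file

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(ratings.values)

# Loadings show how strongly each item maps onto each factor; items
# that load weakly, or on an unexpected factor, flag poorly defined
# competencies whose definitions may need refinement.
loadings = pd.DataFrame(
    fa.components_.T,
    index=ratings.columns,
    columns=[f"factor_{i + 1}" for i in range(3)],
)
print(loadings.round(2))
```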
Linking Subcompetency to Competency-Based Training Design and Competency-Based Assessment Center

In the hierarchical competency system that the model depicts, the subcompetencies serve as key links between competency-based training design and the design of the assessment center. In a competency-based training design, the subcompetencies serve as desired job outcomes, representing what training participants are expected to perform when they return to their respective jobs. In an assessment center design, the subcompetencies serve as performance outcomes that participants are expected to demonstrate in an assessment center. The relationships among the competency model, the training program, and the assessment center are shown in Figure 2. As Figure 2 indicates, the competency-based assessment center is linked to the competency model and the training program through subcompetencies, job outcomes, and performance outcomes. The statements for these three components are identical, whereas their supporting components (e.g., steps or procedures, learning objectives, or performance indicators) may differ in description. Through this design, the intended measures at each stage are strictly connected. Therefore, we develop the following proposition:
Proposition 2: Linking subcompetency to a training design and assessment center will improve the construct-related (competency) validity of competency-based assessment center.
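The following Python sketch makes the linkage concrete with a small, hypothetical data structure for the three-level hierarchy; all competency names, statements, and indicators are invented for illustration. The point it demonstrates is the one just stated: a single subcompetency statement is reused verbatim as the training job outcome and the assessment center performance outcome.

```python
# A hypothetical sketch of the three-level competency hierarchy. The
# same subcompetency statement is reused verbatim as the training job
# outcome and the assessment center performance outcome, so the
# training design and the assessment center measure one construct.
from dataclasses import dataclass, field


@dataclass
class Subcompetency:
    statement: str  # identical across competency model, training, and AC
    steps: list[str] = field(default_factory=list)            # competency model
    learning_objectives: list[str] = field(default_factory=list)  # training
    performance_indicators: list[str] = field(default_factory=list)  # AC


@dataclass
class Competency:
    name: str  # abstract, collective level
    subcompetencies: list[Subcompetency] = field(default_factory=list)


communication = Competency(
    name="Communication",
    subcompetencies=[
        Subcompetency(
            statement="Deliver a clear, audience-appropriate oral briefing",
            steps=["Analyze the audience", "Structure key messages"],
            learning_objectives=["Given a scenario, outline a briefing"],
            performance_indicators=["States the purpose within one minute"],
        )
    ],
)

sub = communication.subcompetencies[0]
job_outcome = sub.statement          # used by the training design
performance_outcome = sub.statement  # used by the assessment center
assert job_outcome == performance_outcome  # the intended measures align
```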
FIGURE 2: Relationship Between Competency Model, Training Program, Assessment Center, and Multi-Rater Feedback Assessment

NOTE: The figure diagrams the links from the competency model (competencies, subcompetencies, and steps or procedures) to the training program (job outcomes and learning objectives) and to the competency-based assessment center, whose performance outcomes and performance indicators are measured by a traditional assessment center and a multi-rater assessment. 1. The competency model triggers the training and competency-based assessment center designs. 2. The competency-based assessment center includes a traditional assessment center and a multi-rater assessment. 3. Competencies are in collective, abstract form. 4. Subcompetencies are more measurable and specific but less collective than competencies. 5. Steps or procedures are observable, specific, and behavior based; they are stated in very specific form in support of subcompetencies, which in turn support competencies. 6. Subcompetencies inform job outcomes in the training program design and performance outcomes in the assessment center and multi-rater assessment designs; statements of subcompetencies, job outcomes, and performance outcomes are identical. 7. Learning objectives support job outcomes in a training design. 8. Job outcomes are work-related outcomes, whereas learning objectives are supported by training materials. 9. Performance outcomes are the general indicators that the assessment center and multi-rater assessments are targeted to measure. 10. Performance indicators are specific indicators in support of performance outcomes.

Developing the Subcompetency-Assessment Center Activity Matrix

Developing a competency exercise matrix is a basic requirement for assessment center development (Joiner, 2000). Current practices develop such a matrix at the competency level, which is an abstract level (e.g., Halman & Fletcher, 2000). However, using an abstract competency to develop assessment center exercises can potentially jeopardize the validity of the selected assessment center activities because such a matrix cannot identify the most appropriate activities for assessing the competencies. For example, from a generic view, one may select role-play activities to assess an individual's communication competency. However, the communication competency can encompass written and oral skills, so the role-play does not address all necessary communication skills. Therefore, we develop the following proposition:
Proposition 3: Using subcompetencies, which collectively represent competencies in a more observable way, to develop the assessment center activity matrix will enhance the construct-related validity of competency-based assessment center.
Differentiating Implicit and Explicit Behavior

As previously mentioned, an assessment center has limited ability to measure individuals' implicit characteristics. Although one may argue that implicit behavior can be measured by translating it into an explicit format, such behavior evidently cannot be managed effectively in an assessment center. This is because implicit behavior is fairly complex and enduring; if not appropriately rendered, it can easily jeopardize the validity of the assessment center. Consequently, assessing implicit-behavioral competencies in a traditional assessment center could create more problems than it solves. It is very likely that the construct-related validity issue of an assessment center results from a lack of differentiation between explicit-behavioral and implicit-behavioral competencies. A multi-rater assessment appears to be a more effective tool for assessing implicit behavior. Integrating multi-rater assessment into the assessment center design also provides the flexibility to reduce complexity and avoid common errors. (See further discussion in a later section.) Therefore, we develop the following proposition:
Proposition 4: Differentiating between explicit-behavioral and implicit-behavioral subcompetencies will improve the construct-related validity of competency-based assessment center, where explicit-behavioral subcompetencies are measured by traditional assessment center mechanisms and implicit-behavioral subcompetencies are assessed by multi-rater assessments.
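As a small, hypothetical sketch of this differentiation step, the routing below sends each subcompetency to the instrument suited to it; the labels and classifications are invented for illustration.

```python
# Hypothetical sketch: route each subcompetency to the appropriate
# instrument, explicit-behavioral ones to assessment center exercises,
# implicit-behavioral ones to multi-rater assessment.
subcompetencies = {
    "Delivers structured oral briefings": "explicit",
    "Builds trust with direct reports": "implicit",
    "Runs effective meetings": "explicit",
    "Commits to the organization's vision": "implicit",
}

assessment_center_items = [
    s for s, kind in subcompetencies.items() if kind == "explicit"
]
multi_rater_items = [
    s for s, kind in subcompetencies.items() if kind == "implicit"
]

print("Assessment center exercises measure:", assessment_center_items)
print("Multi-rater assessment measures:", multi_rater_items)
```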
Determining Appropriate Competency-Based Assessment Center Activities

Current research in developing the competency (or subcompetency) activity matrix uses simple check marks to determine the exercise to be used for a particular competency. However, this approach provides no information on how well the competencies fit the exercises. We suggest using numeric ratings, such as a 5-point Likert-type scale, to determine the appropriateness of assessment center activities to the subcompetencies by treating the matrix as a questionnaire. This approach not only helps reduce subjective decisions but also provides more valid information on the degree to which subcompetencies fit assessment center activities. It requires a group of participants to rate the questionnaire and then aggregate scores across the collected data. Although it may sound impractical to involve a group of individuals in rating the matrix, if this approach can enhance the validity of the competency-based assessment center design, it should be considered. As a matter of fact, as long as each activity in the matrix is clearly defined, any manager or trainer in an organization should be able to serve as a rater for the questionnaire. Moreover, according to the Guidelines and Ethical Considerations for Assessment Center Operations (Joiner, 2000), to increase the chance of
obtaining objective data, each dimension or competency should include more than one assessment exercise. The numeric scale has merit in assisting a competency-based assessment center designer to select the most appropriate activities. The following strategies help determine the most appropriate activities to be used in an assessment center: (a) select the two top-ranked activities for each subcompetency; (b) if more than two activities are tied as top ranked, consider using all of them; and (c) if no rating for a particular subcompetency is greater than 3.0, its applicability to any of the exercises is low, so consider a multi-rater questionnaire as a more appropriate approach for assessing that subcompetency. (A brief sketch illustrating these strategies follows Proposition 5.) Based on the rationale just discussed, the following proposition is developed:
Proposition 5: Using a numeric rating scale (along with appropriate subcompetency selection strategies) rather than a dichotomous scale will lead to an appropriate assessment center activity selection. Therefore, the numeric scale will indirectly influence the construct-related validity of competency-based assessment center.
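The sketch below illustrates the three selection strategies with invented mean ratings for a subcompetency-by-activity matrix; the subcompetency names, activities, and scores are all hypothetical.

```python
# Hypothetical sketch of the selection strategies: given mean 5-point
# appropriateness ratings (already aggregated across raters) for each
# subcompetency x activity pair, (a) keep the two top-ranked
# activities, (b) keep every activity tied at those top ranks, and
# (c) route subcompetencies whose ratings never exceed 3.0 to
# multi-rater assessment instead.
matrix = {
    "Delivers oral briefings": {
        "oral presentation": 4.6, "role-play": 4.1,
        "in-basket": 2.2, "group discussion": 4.1,
    },
    "Commits to organizational vision": {
        "oral presentation": 2.4, "role-play": 2.8,
        "in-basket": 1.9, "group discussion": 2.6,
    },
}

for sub, ratings in matrix.items():
    if max(ratings.values()) <= 3.0:
        print(f"{sub}: low fit everywhere; use multi-rater assessment")
        continue
    # the two highest distinct scores; ties at either rank all survive
    top_two_scores = sorted(set(ratings.values()), reverse=True)[:2]
    selected = [a for a, r in ratings.items() if r in top_two_scores]
    print(f"{sub}: assess via {selected}")
```

Note how the tie rule plays out in the first row: role-play and group discussion share the second rank, so both are retained alongside the oral presentation.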
Determining Performance Outcomes for Activities

This step requires composing, for each activity, a list of the appropriate subcompetencies or performance outcomes (those for which the activity ranked among the top two). These performance indicators will be aligned with the activity design. It is important to note that competency-based assessment center designers should not overrely on the quantitative data presented here to design the assessment center activities, because quantitative data are meaningful only if well interpreted. Activity designers should always review these subcompetencies to examine the appropriateness of fit. Our suggestion is to move less congruent subcompetencies in an activity to multi-rater assessment. In addition, research suggests that one activity should not include too many measures; otherwise, assessors' ratings could be biased by intuition, given the limitations of one's cognitive abilities in differentiating complex, time-limited situations (Lievens & Klimoski, 2001). When the number of subcompetencies increases, a competency-based assessment center activity designer should use judgment to avoid measuring too many subcompetencies in a single exercise or activity. Thornton (1992) suggested that 5 to 10 dimensions (subcompetencies in this context) be assessed in various assessment centers, whereas Thornton and Mueller-Hanson (2004) stated that, in practice, consultants measure only 4 or 5 dimensions in an exercise. Synthesizing the findings and suggestions in this literature, it is reasonable to assert that no more than 10 dimensions are practical for an activity.
On the other hand, from a cost-effectiveness perspective, for an activity with fewer than 5 subcompetencies to be measured, it is also reasonable to eliminate the activity and move its subcompetencies to multi-rater assessment (a sketch illustrating these sizing rules follows Proposition 6). Therefore, we develop the following proposition:
Proposition 6: Measuring no more than 10 subcompetencies in an activity will enable assessors to accurately assess the subcompetencies that are supposed to be measured. Doing this will increase the construct-related validity of the competency-based assessment center measurement.
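The following hypothetical sketch applies the two sizing rules just discussed: cap each activity at 10 subcompetencies and retire activities left with fewer than 5, shifting their subcompetencies to multi-rater assessment. Activity names and counts are invented.

```python
# Hypothetical sketch of the sizing rules: an activity should measure
# at most 10 subcompetencies (assessor cognitive load), and an
# activity with fewer than 5 is eliminated on cost grounds, its
# subcompetencies shifted to multi-rater assessment.
MAX_PER_ACTIVITY = 10
MIN_PER_ACTIVITY = 5

activities = {
    "in-basket": ["prioritizing", "delegating", "written communication"],
    "group discussion": [f"subcompetency_{i}" for i in range(12)],
}

multi_rater_pool: list[str] = []
final_design: dict[str, list[str]] = {}

for activity, subs in activities.items():
    if len(subs) < MIN_PER_ACTIVITY:
        # too few measures to justify the activity's cost
        multi_rater_pool.extend(subs)
    else:
        # keep the 10 best-fitting subcompetencies (here simply the
        # first 10); in practice the designer would rank by the
        # matrix ratings and review the fit qualitatively
        final_design[activity] = subs[:MAX_PER_ACTIVITY]
        multi_rater_pool.extend(subs[MAX_PER_ACTIVITY:])

print(final_design)
print("Assess via multi-rater:", multi_rater_pool)
```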
Designing Competency-Based Assessment Center Materials

There are two methods for developing assessment center materials: using off-the-shelf materials and using customized materials. Thornton (1992) suggested that fidelity in assessment center design helps improve the validity of performance outcomes. The notion of fidelity makes it essential to design activities or cases that closely relate to participants' day-in-the-life work situations. Therefore, the use of a customized model adds more value to this systematic competency-based assessment center design. Based on this rationale, the following proposition is developed:
Proposition 7: Using customized assessment center materials, which are designed to closely relate to participants' work settings, will lead to a stronger criterion-related (predictive) validity of competency-based assessment center.
Thornton and Mueller-Hanson (2004) suggested that several sets of exercise materials must be designed for the various individuals involved in the exercises, including participants, administrators, assessors, resource persons, and role-players. In addition to determining the subcompetency (performance outcome) or supporting performance indicators for exercise development, a competency-based assessment center designer should also consider factors such as setting, technology, and the level of difficulty of the indicators when designing exercise materials. Therefore, the following proposition is developed:
Proposition 8: Developing customized materials for different individuals (e.g., administrators, assessors, resource persons, role-players, etc.) involved in the competency-based assessment center and building extraneous factors (e.g., setting, technology, and level of difficulty of indicators) into design will lead individuals to better understand the process of assessment center and, therefore, can indirectly improve the construct-related validity of competency-based assessment center (rating accuracy).
In developing multi-rater assessments, the supporting performance indicators should be as action-oriented as possible (e.g., starting with an action verb). An appropriate supporting performance indicator may include the performance
to be measured, the condition in which the performance occurs, and the criterion for determining the effectiveness or efficiency of the performance (Mager, 1997).

Selecting and Developing Assessors

Selecting and developing qualified assessors usually go hand in hand. Regarding assessor selection, Spychalski, Quinones, Gaugler, and Pohley (1997) found that best practice incorporates line or staff management as assessors, generally two organizational levels higher than the individuals to be assessed, whereas some assessment center practices have used psychologists as assessors. In addition, research on the effect of assessors' individual backgrounds shows mixed results. For example, Gaugler, Rosenthal, Thornton, and Bentson (1987) found that assessment centers that used psychologists as assessors exhibited higher criterion-related validity than those that used managerial assessors. However, Thomson (1970) found no significant differences between the ratings of psychologist and manager assessors. Although the assessment center guidelines suggest considering professional psychologists as assessors, from a practical standpoint it is plausible to select assessors from the target organization. The more important point is perhaps to keep the selected assessors (e.g., managers) well trained in how to assess assessees' performance before they engage in an assessment center activity. In addition, according to the guidelines, assessor training should clearly state training objectives and performance guidelines. The objective of assessor training is to help assessors reach reliable and accurate judgments. Contents of the assessor training may include (a) knowledge and understanding of the assessment dimensions, (b) definitions of the dimensions, (c) their relationship to job performance, (d) examples of effective and ineffective performance, (e) simulations of the exercises to be assessed, (f) rating issues, (g) data integration, (h) feedback procedures, and so on. Training length should be determined in connection with other considerations, such as the trainer and instructional design, assessor capability, and the assessment program. It is also important to consider establishing a continually improving training system to help assessors maintain their skills, knowledge, and attitudes. More detailed issues related to assessor training can be found in the assessment center guidelines (see Joiner, 2000). Finally, the trainer for assessor training should be familiar with the simulation exercises, have a deep understanding of issues related to assessor training, and communicate continually with the competency-based assessment center designers and the program champion. This is because the competency-based assessment center designers are experts in the functions of an assessment center design, whereas program champions have broader insights into how the program works. Both can contribute to the success of assessor training if the communication system is well established and utilized.
Proposition 9: Well-trained assessors will contribute to the criterion-related validity of competency-based assessment center.
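One hedged way to check whether assessor training is producing the reliable judgments it targets, offered here as an illustration rather than as part of the authors' model, is to monitor inter-rater agreement. The sketch below computes a weighted Cohen's kappa between two assessors' ratings of the same candidates on the same exercise, using invented scores.

```python
# Illustrative sketch (not prescribed by the model): monitor whether
# assessor training yields consistent judgments by computing Cohen's
# kappa between two assessors rating the same candidates on the same
# exercise. The 1-5 ratings below are invented.
from sklearn.metrics import cohen_kappa_score

assessor_a = [4, 3, 5, 2, 4, 3, 3, 5]
assessor_b = [4, 3, 4, 2, 4, 2, 3, 5]

# weights="quadratic" penalizes large disagreements more than small
# ones, which suits ordinal rating scales.
kappa = cohen_kappa_score(assessor_a, assessor_b, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")  # values near 1.0 = strong agreement
```

Persistently low agreement would signal a need for refresher training before the assessors' ratings are used, consistent with the continually improving training system suggested above.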
Providing Comparative Information

This strategy mainly deals with providing information on what other organizations in the same industry have done regarding an assessment center and how well the assessment center has helped those organizations improve performance. The information allows decision makers to judge how an assessment center could change their own organization.

Articulating the Purposes of the Competency-Based Assessment Center and the Purposes for Which the Data Are to Be Used

Because an assessment center can be used for various purposes, it is important that competency-based assessment center designers and implementers fully articulate its purposes and describe how the data collected from the center will be used. When a new tool such as a competency-based assessment center is implemented for evaluation purposes, resistance is inevitable. Anxiety and motivation to change are often related to resistance. Therefore, articulating the purposes could be the key to reducing anxiety and enhancing stakeholders' motivation to adopt the program.

Implementing a Pilot Test

If the competency-based assessment center design process aims to produce a customized assessment center, implementing a pilot test can provide formative feedback for the program design. The pilot test also helps in examining practicality and other issues (e.g., culture fit) that may arise when implementing an assessment center in an organization.

Communicating Responsibilities

The implementation of a competency-based assessment center cannot rely solely on designers or implementers. Organizational stakeholders' involvement and support will be key to its success. Therefore, communicating responsibilities before a center is implemented is as important as the other strategies proposed here.
A statewide leadership and management training program in the state of Louisiana has adopted the concepts and processes proposed in this article to assess learning outcomes (see Melancon & Williams, 2006 [this issue]). The adoption allows researchers to empirically examine all the propositions. On the other hand, these propositions could also be examined individually. For example, one may assess whether a set of subcompetency definitions refined by the results of a factor analysis contributes to the enhancement of a competency-based assessment center. Another example would be to examine whether differentiating explicit-behavioral subcompetencies (as measured by a traditional assessment center) and implicit-behavioral subcompetencies (as measured by multi-rater assessment) leads to an improvement in the construct-related validity of competency-based assessment center.
Conclusions
Traditional assessment centers have been challenged by the lack of strong construct-related validity. This article, through a systemic, integrative perspective, focuses on the design aspects of a competency-based assessment center to address the validity issues of assessment centers. The integrative model not only expands the scope of traditional assessment centers by incorporating multi-rater assessment into the design but also guides HRD practitioners on how to design a competency-based assessment center that has the potential to improve construct-related validity and the capability to be built into training design, assessment, and other HR functions. In addition, the model provides a set of research propositions to be examined.
References
Blank, W. E. (1982). Handbook for developing competency-based training programs. Englewood Cliffs, NJ: Prentice Hall.
Boyatzis, R. E. (1982). The competent manager: A model for effective performance. New York: John Wiley.
Byham, W. C. (2004). Developing dimension-competency-based human resource systems. Retrieved March 25, 2004, from http://www.ddiworld.com/research/publications.asp
Byham, W. C., & Moyer, R. P. (2004). Using competencies to build a successful organization. Retrieved March 25, 2004, from http://www.ddiworld.com/research/publications.asp
Caldwell, C., Thornton, G. C., & Gruys, M. L. (2003). Ten classic assessment center errors: Challenges to selection validity. Public Personnel Management, 32, 73-88.
Chen, H.-C. (2006). Assessment center: A critical mechanism for assessing HRD effectiveness and accountability. Advances in Developing Human Resources, 8(2), 247-264.
Dubois, D. D., & Rothwell, W. J. (2004). Competency-based human resource management. Palo Alto, CA: Davies-Black.
281
Dulewicz, V. (1991). Improving assessment centers. Personnel Management, 23, 50-55.
Gaugler, B. B., Rosenthal, D. B., Thornton, G. C., III, & Bentson, C. (1987). Meta-analysis of assessment center validity. Journal of Applied Psychology, 72, 493-511.
Halman, F., & Fletcher, C. (2000). The impact of development centre participation and the role of individual differences in changing self-assessments. Journal of Occupational and Organizational Psychology, 73, 423-442.
Hoffmann, T. (1999). The meanings of competency. Journal of European Industrial Training, 23, 275-285.
Holton, E. F., III, & Lynham, S. A. (2000). Performance-driven leadership development. Advances in Developing Human Resources, 6, 1-17.
Howard, A. (1997). A reassessment of assessment centers: Challenges for the 21st century. Journal of Social Behavior and Personality, 12, 13-52.
Joiner, D. A. (2000). Guidelines and ethical considerations for assessment center operations: International task force on assessment center guidelines. Public Personnel Management, 29, 315-331.
Joyce, L. W., Thayer, P. W., & Pond, S. B., III. (1994). Managerial functions: An alternative to traditional assessment center dimensions? Personnel Psychology, 47, 109-121.
Lievens, F., & Klimoski, R. J. (2001). Understanding the assessment center process: Where are we now? In C. L. Cooper & I. T. Robertson (Eds.), International review of industrial and organizational psychology (Vol. 16). New York: John Wiley.
Lowry, P. E. (1995). The assessment center process: Assessing leadership in the public sector. Public Personnel Management, 24, 443-450.
Lucia, A. D., & Lepsinger, R. (1999). The art and science of competency models: Pinpointing critical success factors in organizations. San Francisco: Jossey-Bass.
Mager, R. F. (1997). Preparing instructional objectives. Atlanta, GA: CEP Press.
McLagan, P. A. (1997). Competencies: The next generation. Training and Development, 51, 40-47.
Melancon, S. C., & Williams, M. (2006). Competency-based assessment center design: A case study. Advances in Developing Human Resources, 8(2), 283-314.
Moingeon, B., & Edmondson, A. (1996). Organizational learning and competitive advantage. Thousand Oaks, CA: Sage.
Naquin, S., & Chen, H.-C. (2006). Construct validation of the Louisiana Managerial/Supervisory Survey. In F. M. Nafukho & H.-C. Chen (Eds.), Academy of Human Resource Development conference proceedings (pp. 1400-1407). Bowling Green, OH: Academy of Human Resource Development.
Naquin, S. S., & Holton, E. F., III. (2003). Redefining state government leadership and management development: A process for competency-based development. Public Personnel Management, 32, 23-46.
Prahalad, C. K., & Hamel, G. (1990). The core competence of the corporation. Harvard Business Review, 68, 79-91.
Robie, C., Osburn, H. G., Morris, M. A., Etchegaray, J. M., & Adams, K. A. (2000). Effects of the rating process on the construct-related validity of assessment center dimension evaluations. Human Performance, 13, 355-370.
Shippmann, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., et al. (2000). The practice of competency modeling. Personnel Psychology, 53, 703-740.
Spencer, L. M., Jr., & Spencer, S. M. (1993). Competence at work: Models for superior performance. New York: John Wiley.
Spychalski, A. C., Quinones, M., Gaugler, B. B., & Pohley, K. (1997). A survey of assessment centre practices in organizations in the United States. Personnel Psychology, 50, 71-90.
Thomson, H. A. (1970). Comparison of predictor and criterion judgments of managerial performance using the multitrait-multimethod approach. Journal of Applied Psychology, 54, 496-502.
Thornton, G. C., III. (1992). Assessment centers in human resource management. Reading, MA: Addison-Wesley.
Thornton, G. C., III, & Mueller-Hanson, R. A. (2004). Developing organizational simulations. Mahwah, NJ: Lawrence Erlbaum.
Toegel, G., & Conger, J. A. (2003). 360-degree assessment: Time for reinvention. Academy of Management Learning and Education, 2, 297-311.
Wimer, S., & Nowack, K. M. (1998, May). 13 common mistakes using 360-degree feedback. Training and Development, 52(5), 69-70.
Woehr, D. J., & Arthur, W., Jr. (2003). The construct-related validity of assessment center ratings: A review and meta-analysis of the role of methodological factors. Journal of Management, 29, 231-258.

Hsin-Chih Chen, PhD, is a statistician/research analyst at Amedisys, Inc., a leading provider of home health care services, where he conducts data-driven research on quality of services, market analyses, and corporate strategies across all levels. Prior to joining Amedisys, Inc., he served as a postdoctoral researcher at Louisiana State University. He has published a number of research articles in peer-reviewed human resource development journals and currently serves as associate editor for the 2006 International Conference Proceedings of the Academy of Human Resource Development. His recent research interests include competency-based development, assessment center, transfer of learning, and the effectiveness, strategy, and philosophy of human resource development. His doctorate was completed in human resource development at Louisiana State University.

Sharon S. Naquin, PhD, is director of the Louisiana State University (LSU) Division of Workforce Development and an associate professor in the LSU School of Human Resource Education. She has conducted extensive research in managerial and leadership competency development and works with municipal and private agencies on strategic planning and organizational development initiatives.