
OF EMICS AND ETICS: THE DEVELOPMENT OF A CROSS-CULTURAL FACET MEASURE OF JOB SATISFACTION

DAVID LAMOND
Macquarie Graduate School of Management
Macquarie University
NSW 2109, AUSTRALIA

PAUL E. SPECTOR
University of South Florida

GAEL MCDONALD
UNITEC, Institute of Technology

RONGXIAN WU
Suzhou University

BERNADETTE HOSKING
Royal Melbourne Institute of Technology

INTRODUCTION

There is great interest in the concept of job satisfaction, "the extent to which people like or
dislike their jobs" (Spector, 1997:2), and the number of studies on the topic is extraordinary
(Spector, 1997). The facet approach, measuring satisfaction with various aspects of the job as
well as overall satisfaction, allows researchers and organizations to find out not only whether
people are satisfied with their jobs but also, more importantly, which parts of the job are related
to satisfaction or dissatisfaction (see, for example, Hackman & Oldham, 1975; Smith, Kendall &
Hulin, 1969; Spector, 1985).

The globalization of the world economy is challenging organizations to find effective ways to
manage people across cultural and national boundaries, and to distinguish the practices and
theories that work universally from those that may be country- or culture-specific. There is increasing interest
in comparing findings from the U.S. and other English-speaking western nations with a variety of
other nations (e.g., Hui, Yee, & Eastman, 1995; Peterson, et al., 1995). Such cross-national
organizational research is needed because we cannot assume that American concepts and theories
transcend culture and national boundaries (e.g., Boyacigiller & Adler, 1991; Peng, Peterson, &
Shyi, 1991; Trompenaars & Hampden-Turner, 1998). Because of the key role played by job
satisfaction in many organizational theories, cross-national differences are of particular concern.
To date, the number of comparisons of job satisfaction between countries has been very limited
(Ryan, Chan, Ployhart, & Slade, 1999; Spector, 1997). One study using a facet approach, by
Spector & Wimalasiri (1986), found no difference in overall level of satisfaction between groups
of US and Singaporean workers, but significant differences between them in such areas as
supervision and coworkers. One might ask however, whether the differences identified by
Spector & Wimalasiri (1986) reflected real differences between the countries in each of those
facets, or perhaps differences in the relevance of the facets themselves.

Academy of Management Proceedings 2001 HR: B1


The Spector & Wimalasiri (1986) study utilized the Job Satisfaction Survey (JSS), a facet
instrument developed in the United States (Spector, 1985, 1997). More recent studies, by
Lamond (1999) and Lamond & Spector (2000), involving employee samples from Hong Kong,
Australia and the United States, found that the relationships between the JSS facet scores and a
group of individual and organisational variables were different for the three groups. This
suggests that there may be differences in the relevance of the facets themselves. Indeed, we may
well ask whether, for the four different national groups – Australian, American, Hong Kong and
Singaporean – there was conceptual equivalence for the facets considered. Can each of the
constructs be discussed meaningfully in each culture and do they have a similar meaning across
cultures (Brett, Tinsley, Janssens, Barsness, & Lytle, 1997)? This issue has been captured in
discussion about ‘emic’ and ‘etic’ approaches to cross-cultural research and the extent to which a
concept is specific to a particular culture (emic) or has universal applicability (etic).

For example, a recent study of item equivalence in an organizational attitude survey, for groups
of employees from the USA, Australia, Mexico and Spain, has found that the measure in question
was equivalent only across the US and Australian samples (Ryan et al., 1999). This is a
significant finding because, as Ryan et al. (1999) point out, nonequivalence in an employee
attitude measure, like a job satisfaction survey, can lead to drawing incorrect conclusions about
intercountry differences and, as a consequence, to misguided interventions. Although the study by
Ryan et al. (1999) included a question on overall job satisfaction, it did not explore
particular aspects of the job.

Most job satisfaction scales, including the JSS, use a summated rating scale format in which
statements are provided that reflect satisfaction or dissatisfaction. It is quite possible, and even
likely, that individuals in different countries would view some items differently. For example,
the JSS has an item that says the job contains too much “red tape”. This is a term that has a
specific negative connotation in western nations, such as the U.S., UK, Australia and New
Zealand, but it will not necessarily have the same connotation universally, and there might not be
an equivalent phrase in other languages. This is particularly true of countries that tend to be high
in uncertainty avoidance (Hofstede, 1980), where people prefer to have written rules and
procedures at work. This observation, that the use of the typical attitude-type items leaves too
much room for cultural differences in interpretation, reinforces the likelihood that the findings of
Spector & Wimalasiri (1986) reflected differences in the relevance of the facets themselves rather
than real differences between the countries.

The purpose of this study was to test a new facet measure of job satisfaction – the JSS-Global –
constructed specifically to overcome the difficulties identified with the JSS when used with
different nationalities and cultural groups. The study was designed to determine whether the
JSS-Global would be a reliable measure when used in a variety of national and cultural contexts,
specifically Australia, Hong Kong, People's Republic of China, the United States and New
Zealand. The similarity in language and cultural values between Australia, the United States and
New Zealand (at least according to Hofstede, 1980; 1991), suggests it would be reasonable to
expect that the JSS-Global would be equally reliable in these three countries. On the other hand,
there are significant differences in language and cultural values between the United
States/Australia/New Zealand and Hong Kong/PRC (again according to Hofstede, 1980; 1991).

If the JSS-Global has achieved an emic/etic balance, it should be equally reliable as a measure in
these contexts.

METHOD

Instrument

The JSS-Global instrument is a facet-based measure of job satisfaction, with 15 sets of bipolar
adjectives – related to Pay, Benefits, Non-monetary Benefits, Supervision, Recognition, Nature
of Work, Amount of Work, Coworkers, Resources, Training, Development, Promotion, Job
Security, Physical Conditions, Rules and Procedures. Respondents are told the purpose of the
survey is to find out how satisfied they are with various aspects of their job, and asked to circle
the one number for each of the pairs of descriptors that comes closest to reflecting their opinion.
For each scale, respondents are asked how they feel about that particular aspect of their job, and
given a definition to clarify the scale descriptor. For example, in relation to Pay, respondents are
asked “How do you feel about your pay (salary/wages)? Pay includes the salary you receive,
plus any other commissions or bonuses.”

The initial development of the instrument involved a series of ‘virtual focus groups’, with
management/psychology colleagues who were nationals of the Dominican Republic, Israel,
People’s Republic of China, Hong Kong, Australia, Singapore, Canada, and New Zealand. They
reviewed the instrument and provided feedback about which items/words might be
misunderstood by individuals in their native country. The instrument was also translated into
Mandarin and then back-translated to ensure item equivalence. Respondents are asked to
indicate their response on each of four or five bipolar adjective scales (for example, for Pay:
dis/like; in/sufficient; bad/good; dis/satisfied; un/fair). At least half the adjective pairs for each
set of scales (two or three) are presented in reverse order to avoid response bias. Each of the
items produced a score between 1 and 6, corresponding to the number circled by the respondents.
Where the adjective pairs were reversed, items were reverse-scored to ensure all items were
scored in the same direction.
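The reverse-scoring step described above can be sketched as follows. This is a minimal illustration assuming the 1-6 response format described here; the item names and the choice of which pair is reversed are hypothetical, not taken from the actual instrument:

```python
# Reverse-scoring sketch for a 1-6 bipolar adjective scale: a response of 1
# on a reverse-ordered item becomes 6, 2 becomes 5, and so on (7 - raw).

def score_items(responses, reversed_items):
    """Return scores with reverse-ordered items flipped onto the common direction.

    responses      -- dict mapping item name to the raw circled number (1-6)
    reversed_items -- set of item names whose adjective pair was reversed
    """
    return {
        item: (7 - raw) if item in reversed_items else raw
        for item, raw in responses.items()
    }

# Hypothetical Pay items (names for illustration only).
raw = {"pay_like": 5, "pay_sufficient": 2, "pay_good": 6}
scored = score_items(raw, reversed_items={"pay_sufficient"})
# scored["pay_sufficient"] == 5 (i.e. 7 - 2); the other items are unchanged
```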

Sample and Administration

The total sample of 408 respondents comprised groups of employees undertaking part-time
university study in Sydney and Melbourne, Australia (n= 112), Florida, United States (n = 60),
Auckland, New Zealand (n = 83), Hong Kong (n= 93) and Suzhou, People’s Republic of China
(PRC) (n = 60). Respondents either were sent the JSS-Global by mail and asked to bring the
completed survey to their first class, or completed the survey during class. A part of the survey
gathered a series of demographic data on respondents, related to age, sex, and level of
management/non-management occupied, and organizational data related to the size and sector of
the organization (number of employees and whether private or public sector organization).
Examination of the resultant descriptive statistics for the respondents and their organisations
shows that the sample comprised a good mix of employees of different ages, genders, and levels
in their respective organisations, working for organisations of different sizes in different sectors
in the different countries.

Analysis

The internal consistency of the JSS-Global subscales and the overall scale was measured using
Cronbach's alpha (Cronbach, 1951). As this is a new test, being pilot tested for the first time, it
was considered appropriate to use exploratory factor analysis, for several reasons. As Stevens
(1996) points out, confirmatory factor analysis is used when the researcher ‘knows’ how many
factors there are and forces items to load only on a specific factor. This ability to specify the
factors is based on a strong theoretical and/or empirical foundation. While we had proposed a
series of facet measures that we believed would ‘make sense’ in the different cultural contexts,
based on the literature review, we anticipated that there might be differences, and it was unclear
what shape those differences might take. At the same time, the focus of interest was not only
on whether the constructs reflected in the JSS-Global subscales were relevant in contemporary
eastern and western contexts but, if not, what those constructs might be. Confirmatory factor
analysis might well confirm that the factor structures were different, but not what the appropriate
factor structures might be. It was therefore decided to analyze the factor structure of the JSS-Global
using principal component analysis with varimax rotation.
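The internal-consistency statistic named above can be computed directly from the item scores. A minimal sketch of Cronbach's alpha, using made-up data for illustration (not values from the study):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a subscale.

    items -- list of k lists, each holding one item's scores across
             the same respondents (population variance used throughout).
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_var = sum(pvariance(col) for col in items)   # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three perfectly parallel items yield the maximum alpha of 1.0.
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
# cronbach_alpha(items) → 1.0
```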

RESULTS AND DISCUSSION

For each of the country samples individually, the alphas range from 0.70 to 0.96. Only the
Australian sample has one low alpha (0.65, for Coworkers), and even that is close to the
conventional 0.70 threshold. When the samples were combined into two groups of ‘east’ (Hong
Kong and PRC) and ‘west’ (Australia, New Zealand, and U.S.), all the reliability scores are very
satisfactory (0.78-0.97).

The factor analysis of the responses for the total sample (n = 408) extracted 15 factors with
eigenvalues greater than 1, accounting for 75.5% of the total variance. Factor analysis of the
responses for the United States, Australian and New Zealand sample (n = 255) extracted 16
factors with eigenvalues greater than 1, accounting for 78.9% of the total variance. Factor
analysis of the responses for the Hong Kong and PRC sample (n = 153) extracted 15 factors with
eigenvalues greater than 1, accounting for 80.4% of the total variance. The item loadings for
each of the factors, in general, confirm the subscale structures for east and west.
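The eigenvalue-greater-than-1 (Kaiser) extraction rule used above can be illustrated on an item correlation matrix. A sketch with fabricated data (the numbers are for illustration only, not from the study):

```python
import numpy as np

def kaiser_extract(data):
    """Count components with eigenvalues > 1 and their share of variance.

    data -- array of shape (respondents, items).
    """
    corr = np.corrcoef(data, rowvar=False)   # item correlation matrix
    eigs = np.linalg.eigvalsh(corr)[::-1]    # eigenvalues, descending
    retained = eigs[eigs > 1]
    # The eigenvalues of a correlation matrix sum to the number of items,
    # so the retained sum divided by that count is the variance accounted for.
    return len(retained), retained.sum() / len(eigs)

# Fabricated responses on four items forming two highly correlated pairs,
# so the Kaiser rule should retain two components.
data = np.array([
    [1.0, 1.1,  1.0,  0.9],
    [2.0, 2.0, -1.0, -1.1],
    [3.0, 2.9, -1.0, -0.9],
    [4.0, 4.1,  1.0,  1.1],
    [5.0, 5.0,  1.0,  1.0],
    [6.0, 6.1, -1.0, -1.0],
])
n_retained, var_share = kaiser_extract(data)
# n_retained == 2: one component per correlated item pair
```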

Only a few of the scale items did not load as expected. Notably, for the total sample, only one
item did not load as expected: four of the five items related to the Amount of Work facet loaded
on a single factor, but the scale item Right Amount-Too Little loaded with the Nature of Work
scale items. This item was problematic for several reasons and is discussed further below.

The 'west' sample factor analysis produced 16 rather than 15 factors. This result was a function
of multiple items from Nature of Work and Coworkers subscales loading on separate factors.
The Interesting-Boring and Dis/Satisfied items from the Nature of Work subscales loaded on a
factor with the Right Amount-Too Little item (from the Amount of Work scale). The other two
Nature of Work items (Dis/Like and Meaningful/less) loaded on a factor with two of the
Coworkers items (Dis/Like and Do/not get along with). Meanwhile, the other two Coworkers
items (Make Job Easy/Difficult and Dis/Satisfied) loaded on a factor by themselves.

The 'east' sample factor analysis produced 15 factors, but with several interesting variations.
First, all the items for the Supervision and Recognition facets loaded together on a single factor,
suggesting a strong perceived link between "quality of the supervision" and "recognition …
[received] for a job well done". Second, two of the five Non-Monetary Benefit items (Dis/Like
and Good/Bad) loaded on one factor, while the other three (In/Sufficient, Un/Satisfactory and
Un/Fair) loaded on a separate factor. This suggests some kind of cognitive-affective split in the
assessment of non-monetary benefits. Meanwhile, the Amount of Work scale item, Right
Amount-Too Little, loaded with the Promotion scale items.

The scale that appears to be problematic is the Amount of Work scale and, in particular, the items
Right Amount-Too Little and Right Amount-Too Much. It appears that respondents had difficulty
with these items depending on whether they perceived their own amount of work as either too
much or too little. Our difficulty, in turn, was in trying to develop corresponding items that
would allow for both too much and too little work, without having these as the opposite ends of a
bi-polar continuum. Given the results here, these two items will need to be reviewed for the
ongoing research version of the instrument.

Clearly, more research with larger samples in different countries is needed to confirm the results
presented here but, with the structure of the 15 subscales largely confirmed in all samples, the
results suggest the pilot JSS-Global measure forms the basis of a useful final measure.

CONCLUSION

The overriding methodological issue in cross-cultural research is one of variable equivalence
(McDonald, 2000). The use of the typical attitude-type items appears to leave too much room for
cultural differences in interpretation. Given the results presented here for the JSS-Global
measure, it appears that the bi-polar adjective facet subscales work better than other approaches,
and constitute a cross-cultural measure of job satisfaction achieving a balance of ‘emics’ and
‘etics’.

For those who employ job satisfaction measures that use a facet approach, it is necessary to
review the content of their scales to ensure that the facets represented by their subscales remain
relevant in the contemporary work environment. It is also important to increase research efforts
to identify the extent to which facets of job satisfaction may differ from country to country and
culture to culture. It will also be necessary for scale developers to look to producing items and
scales that, as far as possible, have similar meanings in different cultures. Meanwhile, future
researchers need to be cognisant of concept equivalence, and begin by comparing the factor
structures of the different country responses rather than a comparison of scores per se.

REFERENCES

Boyacigiller, N. A. & Adler, N. J. 1991. The parochial dinosaur: Organizational science in a
global context. Academy of Management Review, 16: 262-290.

Cronbach, L. J. 1951. Coefficient alpha and the internal structure of tests. Psychometrika, 16:
297-334.

Hofstede, G. 1980. Culture's consequences. Beverly Hills, CA: Sage.

Hofstede, G. 1991. Cultures and organizations. New York: McGraw-Hill.

Hui, C. H., Yee, C., & Eastman, K. L. 1995. The relationship between individualism-
collectivism and job satisfaction. Applied Psychology: An International Review, 44: 276-
282.

Lamond, D. A. 1999. Things that matter: A comparison of job satisfaction in Hong Kong and
Australia. Proceedings of the 4th Asia Pacific Decision Sciences Institute Conference,
Shanghai, China, 9-12 June, 289-291.

Lamond, D. A. & Spector, P. 2000. Taking stock of the Job Satisfaction Survey: Its validity and
reliability in a different time and place. Proceedings of the Vth IFSAM World Congress,
Montreal, Canada, 8-11 July.

McDonald, G. 2000. Cross-cultural methodological issues in ethical research. Journal of
Business Ethics, 27(1/2): 89-104.

Peng, T. K., Peterson, M. F., & Shyi, Y. P. 1991. Quantitative methods in cross-national
management research: Trends and equivalence issues. Journal of Organizational Behavior,
12: 87-107.

Peterson, M. F., Smith, P. B., Akande, A., Ayestaran, S., Bochner, S., Callan, V., Cho, N. G.,
Jesuino, J. C., D’Amorim, M., Francois, P. H., Hofmann, K., Koopman, P. L., Leung, K.,
Lim, T. K., Mortazavi, S., Munene, J., Radford, M., Ropo, A., Savage, G., Setiadi, B., Sinha,
T. N., Sorenson, R., & Viedge, C. 1995. Role conflict, ambiguity, and overload: A 21-nation
study. Academy of Management Journal, 38: 429-452.

Ryan, A. M., Chan, D., Ployhart, R. E. & Slade, L. A. 1999. Employee attitude surveys in a
multinational organization: Considering language and culture in assessing measurement
equivalence. Personnel Psychology, 52(1): 37-58.

Spector, P. E. 1997. Job satisfaction: Application, assessment, causes, and consequences.
Thousand Oaks, CA: Sage.

Spector, P. E. & Wimalasiri, J. 1986. A cross-cultural comparison of job satisfaction dimensions
in the United States and Singapore. Applied Psychology: An International Review, 35:
147-158.

Stevens, J. 1996. Applied multivariate statistics for the social sciences (3rd ed.). Mahwah,
NJ: Lawrence Erlbaum Associates.

Trompenaars, F. & Hampden-Turner, C. 1998. Riding the waves of culture: Understanding
cultural diversity in global business (2nd ed.). New York: McGraw-Hill.
