Development and Validation of a Perceptual Instrument to Measure E-Commerce Performance
Author(s): Michael R. Wade and Saggi Nevo
Source: International Journal of Electronic Commerce, Vol. 10, No. 2 (Winter, 2005/2006),
pp. 123-146
Published by: Taylor & Francis, Ltd.
Stable URL: http://www.jstor.org/stable/27751186
Accessed: 11-03-2018 01:03 UTC
This content downloaded from 120.188.93.85 on Sun, 11 Mar 2018 01:03:23 UTC
All use subject to http://about.jstor.org/terms
Development and Validation of a Perceptual Instrument to Measure E-Commerce Performance
ABSTRACT: This paper develops and validates an attitudinal scale that can measure the performance of e-commerce operations both in "pure play" Internet firms and in on-line components of multichannel firms. The measurement instrument is grounded in a resource-based view of the firm. The final instrument contains a 14-item perceptual measurement scale. It was tested with data collected from a sample of 595 managers responsible for e-commerce operations. Psychometric testing of the instrument showed adequate construct validity.
124 MICHAEL R. WADE AND SAGGI NEVO
The measurement of on-line variables has made a good deal of progress during the relatively short lifespan of the commercial Internet. This is an area of substantial interest to researchers and practicing managers. Some early researchers developed instruments to measure an assortment of benefits associated with e-commerce, such as browsing satisfaction and Web site effectiveness. These efforts were largely undertaken without the guidance of strong theory, however [61], and the instruments were too often not subjected to a rigorous assessment of psychometric properties. It was difficult, therefore, to determine the accuracy, reliability, and validity of the data that resulted from these measures. As time went on, researchers began to take more care to validate and incorporate theory into their measures. The study of e-commerce measurement developed into a multidisciplinary area, drawing on research from marketing, economics, psychology, strategy, and other fields [22, 32, 36, 56, 59].

Information Systems Research, the Journal of Management Information Systems, and the International Journal of Electronic Commerce have recently published works that explore or develop measures on a variety of Internet-related topics. Table 1 categorizes these papers, along with some other relevant works, into a framework that illustrates their contribution and fit. These papers employ a mix of theories, focal constructs, methods, and levels of analysis. Together, they form a robust set of metrics through which to understand firm performance in the context of electronic commerce. The present paper is also displayed in Table 1 to allow for a concise view of its placement and contribution relative to the literature.

The metrics developed in these papers include analyses of individual variables, such as satisfaction, turnover, and enjoyment; Web site variables, such as effectiveness and usability; and firm-level variables, such as performance, success, and competitive advantage. Among the theories utilized are transaction cost economics, the technology acceptance model, expectation-disconfirmation
[Table 1 (first part). Categorization of prior measurement studies ([36], [2], [28], [32], [41], [53], [57], [58]) by level of analysis (individual vs. organizational), contribution (framework/theory development, theory extension, measurement), measure type (perceptual vs. objective), method (survey, experts, jury, third-party evaluators), and dependent constructs (e.g., architectural quality, customer satisfaction, Web site usability, strategic objectives, satisfaction, intention to return, frequency of use, advantage). Table structure lost in extraction. (continues)]
[Table 1 (continued). This paper and studies [59], [61], [60], [16], [52], [31]: levels of analysis include network and organizational; measures are perceptual and objective; methods include surveys, interviews, experts, and an event study; theories include the technology-organization-environment framework; dependent variables include market value, performance, e-commerce satisfaction, and NetOP. Table structure lost in extraction.]
competing firms and explains how positive economic rents ensue under seemingly perfect competitive conditions [4, 7]. As noted by Wade and Hulland, resources that are valuable and rare, and whose benefits can be appropriated by the owning (or controlling) firm, confer a temporary competitive advantage [56]. This advantage can persist over a long period if the firm is able to protect its resources against imitation, transfer, or substitution, thus leading to a sustainable competitive advantage. For example, a firm's capability to learn quickly and apply that knowledge to its operations on an ongoing basis may provide it with a sustained competitive advantage over its close competitors [44]. The presence of valuable and rare resources transforms commodity-like inputs, such as e-commerce servers, Web programmers, and Web-development tools, into unique and immobile resources that can sustain a competitive advantage. Mata, Fuerst, and Barney described the RBV as follows:

    The resource-based view of the firm is based on two underlying assertions ... (1) that the resources and capabilities possessed by competing firms may differ (resource heterogeneity); and (2) that these differences may be long lasting (resource immobility). In this context, the concept of a firm's resources and capabilities are [sic] defined very broadly, and could certainly include the ability of a firm to conceive, implement, and exploit valuable IT applications. [37, p. 491]

The resource-based view is useful to the present research for several reasons. First, the effectiveness of RBV theory has been amply demonstrated. It is used extensively in e-commerce research, and especially in e-commerce measurement research [36, 53, 55, 56]. Second, empirical studies using the theory strongly support the resource-based view [38, 39]. Third, the firm level of analysis is applicable to this research. Fourth, the theory specifies quite precisely the necessary characteristics of a dependent variable, the focus of this paper (as will be explored further below). Fifth, the RBV has a long history of drawing on both subjective and objective measures.
Instrument Development
[Table 2. Candidate measurement items, with "Question" and "Source (adapted from)" columns. Item rows lost in extraction.]
The second source for the scale items was a set of semistructured interviews with 52 senior managers from 42 firms. Each manager worked in the part of the firm responsible for e-commerce operations. These executives were expected to have knowledge about their firm's e-commerce performance. Zhuang and Lederer made a similar assumption, and further claimed that using data from e-commerce managers would increase the quality of the data used in the survey [61]. About two-thirds of the interviewees were from the IS area, while another quarter came from the marketing area. The remainder were members of the general management team. The interviews were conducted between April and June 2001. Sixty-five percent of the interviews were conducted in person, the remainder by phone. The duration of each interview averaged 50 minutes.

The sample was drawn from public and private sources and included large and small firms in industrial, consumer, and service industries at different geographical locations [20]. By obtaining a sample that reflected a diverse set of respondent organizations, the researcher was able to obtain a rich set of ideas and insights [42]. The goal of the sampling process was to target a wide range of firms conducting e-commerce (different sizes, industries, geographic regions, etc.). It was felt that by tapping a wide range of experiences, the research would gather a near-exhaustive set of measures from which to draw the final validated scale. Four of the 22 items originated from the executive interviews. Table 2 provides a full list of items with sources.
Instrument Refinement
First, the survey was reviewed by three domain experts (practicing managers in the area of e-commerce) known to the first author. The survey was also shown to two researchers familiar with the subject area as well as with instrument-development methods in general. At this stage, a number of changes were made to the wording and format of the items. In addition, it was suggested that four of the items be rationalized into two: the items on cost and budget objectives (combined into "budget"), and the items on deadline and efficiency objectives (combined into "deadlines"). The resulting 20-item scale was sent to 100 randomly chosen members of the sampling frame. Thirty-one responses were received. While a high ratio of respondents to items (5:1 or higher) is recommended and is often used during scale development, a lower ratio is often used for item reduction and scale refinement [30, 54].
An exploratory factor analysis on the 20 items in the measurement scale resulted in a two-factor solution (see Table 3). The eigenvalue of the first factor was 11.4, and it explained 81 percent of the variance in the dataset. The second factor was below the normal cut-off (i.e., eigenvalue > 1) and explained 15 percent of the variance in the data. Three items cross-loaded on the two factors; thus, the meaning of these items (related to employee objectives) was shared between both factors. In order to rationalize the scale and avoid multicollinearity, the cross-loaded items were removed. Once the cross-loaded items were removed, the second factor contained three items. The substance of these items, relating to strategy, investor satisfaction, and infrastructure, did not appear to tap a single meaning. Because the second factor contained only three incongruent items, had a low eigenvalue, explained only a small proportion of the variance in the data, and was not hypothesized or justified by theory, it was dropped from the scale. Thus, a single-factor measurement scale for e-commerce performance remained, with 14 items, as shown in Table 3. Further confirmatory tests were conducted on the instrument with a much larger data set, as described below.
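The retention rule applied above (keep factors whose eigenvalue exceeds 1, the Kaiser criterion) can be sketched directly from an item correlation matrix. The following is a minimal illustration in Python with NumPy, using synthetic data of the pilot's shape (31 respondents, 20 items) rather than the study's actual responses:

```python
import numpy as np

def kaiser_retained_factors(data):
    """Eigenvalues of the item correlation matrix, the proportion of
    variance each accounts for, and the number of factors retained
    under the eigenvalue > 1 cut-off."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]   # descending
    explained = eigvals / eigvals.sum()
    retained = int(np.sum(eigvals > 1.0))
    return eigvals, explained, retained

# Synthetic illustration: 31 respondents x 20 items dominated by one
# latent factor, loosely mirroring the pilot sample described above.
rng = np.random.default_rng(0)
latent = rng.normal(size=(31, 1))
items = latent @ np.ones((1, 20)) + 0.4 * rng.normal(size=(31, 20))
eigvals, explained, retained = kaiser_retained_factors(items)
```

Note that this is a principal-components-style eigenvalue screen, the common first step; a full exploratory factor analysis would also rotate the solution and examine cross-loadings, as the authors do.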
Instrument Confirmation
Once developed, the instrument was put through a process of instrument confirmation in which it was rigorously tested using data from a large sample (595 cases). At this point, the instrument was subjected to comprehensive tests of reliability and validity. Drawing on Cook and Campbell, reliability is defined as the extent to which a measure is free from random error components [14]. High reliability is understood to mean that repeated employment of the measure would obtain consistent results. In turn, validity is the extent to which a measure reflects only the desired construct without contamination from other systematically varying constructs. Thus, validity requires reliability as a prerequisite. The components of validity include content and construct validity; the latter, in turn, can be divided into convergent and discriminant validity [14]. Each of these components will be addressed below.
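Reliability in this internal-consistency sense is conventionally quantified with Cronbach's alpha, which the authors report for the final scale. A minimal sketch of the standard formula follows; the demo responses are hypothetical, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical 5-point responses from six respondents on three items.
demo = [[4, 5, 4], [3, 3, 3], [2, 2, 3], [5, 5, 5], [1, 2, 1], [3, 4, 3]]
alpha = cronbach_alpha(demo)
```

Highly correlated items drive alpha toward 1; items sharing no common variance drive it toward 0, which is why alpha is read as evidence that repeated employment of the measure would yield consistent results.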
Once the items were developed and refined, the instrument was tested on a large sample of e-commerce managers. The instrument was used as part of a study on the impact of IS on e-commerce performance. The methodology employed was an on-line survey that closely followed the guidelines set out by Schaefer and Dillman and by Cook, Heath, and Thompson [13, 46]. The suitability and efficacy of Web survey methodologies deserve some discussion. Despite the relative newness of the Internet and the Web, on-line survey methodologies have been extensively researched in the social sciences [18, 26, 55]. The benefits of on-line surveys include quick turnaround of results, respondent accuracy equivalent to or better than other survey methodologies, interactive capabilities, distribution flexibility, low incremental cost, and a smaller possibility of errors through multiple data entry [15, 47].
There are two main areas of concern when using the Web as the main method of data collection: coverage bias and nonresponse bias. Coverage bias concerns the fact that some members of the population of interest have no chance of being sampled. In an on-line survey, those with Internet access will be oversampled, while all others are ignored. For this reason, mail and phone surveys are used more often in survey research, despite the advantages of on-line methods. In this case, it was assumed that respondents would have access to both e-mail and the Web, and that using this medium to distribute the questionnaire would not result in excessive coverage bias. This assumption was based on the fact that target respondents were senior managers in e-commerce retailers or in e-commerce divisions of multichannel retailers. As
Construct Validity
Table 4. Item loadings, squared loadings, and residual variances for the 14-item scale.

Measurement item                                       Loading   Squared loading   Residual variance (error)
Overall performance of on-line operations              0.9455    0.8940            0.1060
Return on financial assets                             0.9469    0.8966            0.1034
Meeting budget objectives                              0.9094    0.8270            0.1730
Meeting reliability objectives                         0.8680    0.7534            0.2466
Meeting quality objectives                             0.8419    0.7088            0.2913
Meeting user satisfaction objectives                   0.8469    0.7172            0.2828
Meeting service objectives                             0.8544    0.7300            0.2700
Meeting major deadlines                                0.8905    0.7930            0.2070
Meeting revenue objectives                             0.8673    0.7522            0.2478
Meeting overall objectives                             0.9725    0.9458            0.0543
Success compared to competitors 1                      0.9276    0.8604            0.1395
Success compared to competitors 2                      0.9332    0.8709            0.1291
Sustaining of performance over short term (<1 yr)      0.9034    0.8161            0.1839
Sustaining of performance over longer term (2-5 yrs)   0.9239    0.8536            0.1465
Sum                                                    12.6314   11.4190           2.5812

Fornell's internal consistency coefficient   0.9841
Cronbach's alpha (standardized)              0.9824
Average variance extracted (AVE)             0.8157
Square root of AVE                           0.9031
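The summary coefficients above follow directly from the item loadings: Fornell's internal consistency coefficient is (Σλ)² / ((Σλ)² + Σθ), where θ is the residual (error) variance, and the average variance extracted is the mean squared loading. A quick NumPy check against the reported values:

```python
import numpy as np

# Loadings for the 14 e-commerce performance items, as reported above.
loadings = np.array([0.9455, 0.9469, 0.9094, 0.8680, 0.8419, 0.8469,
                     0.8544, 0.8905, 0.8673, 0.9725, 0.9276, 0.9332,
                     0.9034, 0.9239])

errors = 1.0 - loadings**2            # residual (error) variance per item

# Fornell's composite reliability coefficient.
cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())

# Average variance extracted: mean squared loading.
ave = (loadings**2).mean()

print(cr, ave, np.sqrt(ave))
```

Run as-is, this reproduces the reported 0.9841 and 0.9031 to four decimal places (AVE differs in the last digit only because the published loadings are themselves rounded).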
Table 5. Cross-loadings of items on the model constructs.

Item                                IS Mgt.   Sys. Design   IS Adapt.   EC Success
IS Management
  IS technical skills               0.82      0.24          0.28        0.23
  IS infrastructure                 0.84      0.28          0.32        0.12
  IS cost effectiveness             0.81      0.41          0.21        0.27
  Top management commitment         0.73      0.28          0.15        0.14
  IS management                     0.69      0.04          0.31        0.32
  Data security                     0.84      0.16          0.23        0.18
Systems Design
  IS development                    0.28      0.90          0.31        0.22
  Systems integration               0.18      0.85          0.17        0.15
IS Adaptive
  Opportunity recognition           0.28      0.29          0.91        0.67
  Market responsiveness             0.11      0.12          0.92        0.43
  IS planning                       0.20      0.33          0.83        0.44
  IS business partnerships          0.37      0.21          0.93        0.38
  External relations                0.22      0.31          0.93        0.38
  Collaboration support             0.19      0.08          0.89        0.52
  Internal intelligence             0.37      0.17          0.93        0.41
E-commerce Success
  Meet deadlines                    0.39      0.03          0.48        0.89
  Meet budget objectives            0.23      0.12          0.46        0.91
  Meet overall objectives           0.27      0.14          0.51        0.97
  Meet quality objectives           0.31      0.09          0.51        0.84
  Meet reliability objectives       0.32      0.11          0.45        0.87
  Meet revenue objectives           0.40      0.07          0.57        0.87
  Meet service objectives           0.25      0.05          0.47        0.86
  Meet user satisfaction objectives 0.36      0.10          0.41        0.85
  Overall performance               0.25      0.24          0.49        0.95
  Overall ROI                       0.34      0.21          0.38        0.95
  Comparison 1                      0.29      0.12          0.51        0.93
  Comparison 2                      0.26      0.12          0.53        0.93
  Sustainability 1                  0.30      0.25          0.57        0.92
  Sustainability 2                  0.30      0.24          0.58        0.90
matrix shows how each of the measures loads on all of the model constructs by correlating the construct factor scores with actual data values on a case-by-case basis. To indicate good discriminant validity, items should load highly on the intended construct, but at the same time should not load highly on other constructs [6].

Table 5 shows that the e-commerce performance items load highly on the intended construct but do not load as highly on the other constructs. In addition, items from the other constructs in the model do not load as highly on the e-commerce performance construct as the intended items do. This finding suggests that the scale exhibits adequate discriminant validity. Together, the convergent and discriminant validities confirm that the instrument exhibits satisfactory construct validity [14].
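The decision rule described here, that each item should load higher on its intended construct than on every other construct, can be checked mechanically against the Table 5 values:

```python
import numpy as np

# Cross-loadings for the 14 e-commerce performance items (Table 5):
# columns = IS Management, Systems Design, IS Adaptive, E-commerce Success.
success_items = np.array([
    [0.39, 0.03, 0.48, 0.89],  # Meet deadlines
    [0.23, 0.12, 0.46, 0.91],  # Meet budget objectives
    [0.27, 0.14, 0.51, 0.97],  # Meet overall objectives
    [0.31, 0.09, 0.51, 0.84],  # Meet quality objectives
    [0.32, 0.11, 0.45, 0.87],  # Meet reliability objectives
    [0.40, 0.07, 0.57, 0.87],  # Meet revenue objectives
    [0.25, 0.05, 0.47, 0.86],  # Meet service objectives
    [0.36, 0.10, 0.41, 0.85],  # Meet user satisfaction objectives
    [0.25, 0.24, 0.49, 0.95],  # Overall performance
    [0.34, 0.21, 0.38, 0.95],  # Overall ROI
    [0.29, 0.12, 0.51, 0.93],  # Comparison 1
    [0.26, 0.12, 0.53, 0.93],  # Comparison 2
    [0.30, 0.25, 0.57, 0.92],  # Sustainability 1
    [0.30, 0.24, 0.58, 0.90],  # Sustainability 2
])

# Discriminant validity check: every item loads highest on its intended
# construct (the last column).
assert np.all(success_items.argmax(axis=1) == 3)
```

Every item clears the rule with a comfortable margin (at least 0.30 over its next-highest loading), consistent with the authors' conclusion of adequate discriminant validity.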
Summary of Findings
Table 6 shows descriptive statistics for each of the item measures taken from the full dataset. The mean value for each of the items was close to 3 (an average of 3.12 on a 5-point Likert scale), while the median value for each item was 3. Thus, for benchmarking purposes, values above 3 indicate above-average evaluations, while those below 3 indicate below-average evaluations for all items in the instrument. The average standard deviation of the items was 1.1, with no single item having a standard deviation greater than 1.3 or less than 1. Measurements of skewness indicated that responses were normally distributed around the mean (all skewness measures were between -0.3 and 0.1).
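The benchmark statistics reported here (item means, standard deviations, and skewness) are straightforward to compute for any new sample that uses the scale. A minimal sketch, using hypothetical responses rather than the study data:

```python
import numpy as np

def likert_summary(responses):
    """Mean, sample standard deviation, and (bias-uncorrected) skewness
    for a vector of 5-point Likert responses."""
    x = np.asarray(responses, dtype=float)
    m = x.mean()
    s = x.std(ddof=1)
    skew = np.mean(((x - m) / x.std()) ** 3)   # population-std skewness
    return m, s, skew

# Hypothetical item: roughly symmetric responses centred near 3.
m, s, skew = likert_summary([1, 2, 2, 3, 3, 3, 3, 4, 4, 5])
```

A skewness near zero, as for this symmetric example, matches the paper's finding that responses clustered normally around the scale midpoint.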
In summary, measurement items were drawn from the relevant research
literature and augmented by interviews with 52 e-commerce managers. This
process resulted in a 22-item scale. The scale was then reduced to 14 items
through a review by domain experts and a process of exploratory factor analysis
on pilot data. Once developed, the scale was tested using survey data from
595 e-commerce managers. Results of various statistical tests of convergent
and discriminant validity suggested that the measurement was reliable and
valid, and exhibited strong psychometric properties. The exact wording of
the final items is shown in the Appendix.
[Table 6. Descriptive statistics for the scale items (partially recoverable): item means near 3 on the 5-point scale (e.g., Competitors 1 = 3.30, Competitors 2 = 3.18, Sustaining 1 = 2.96, Sustaining 2 = 3.16, Objectives = 3.09, Reliability = 3.19, Satisfaction = 3.42, with standard errors of about 0.05), medians of 3, minimums of 1, maximums of 5, standard deviations between roughly 1.16 and 1.28, and skewness values between -0.3 and 0.1 (SE = 0.104). Remaining rows garbled in extraction.]
and validity of the instrument hold in specific contexts, such as for the evaluation of ERP or CRM systems, or for general organizational performance.

The primary aim of this study was to develop and validate a perceptual measurement instrument of e-commerce performance at the level of the firm. A multiphase process of measurement development, refinement, and confirmation was used in the development of a 14-item scale. The measure is conceptually grounded in the resource-based view (RBV) of the firm and may be used by practitioners and academics alike to evaluate e-commerce performance. This paper represents an initial step toward being able to reliably measure the performance of firms engaged in e-commerce, whether on a full-time basis or as part of a multichannel strategy. Clearly, more work is necessary to establish the generalizability of the instrument.
Contribution to Research
Unlike many e-commerce measurement efforts in the past, this study provides an instrument with strong ties to theory. The study largely supports, and is supported by, the resource-based view of the firm. Although they do not exist as separate and distinct factors in the model, RBV-inspired elements, such as absolute (direct) performance, performance compared to objectives, comparative (i.e., relative to competition) performance, and sustained performance, are all elements of the instrument. The RBV is suitable for the analysis of e-commerce performance because the tools and processes used to develop e-commerce infrastructure and operations are available to all firms, yet some manage to transform them into unique capabilities that are hard to imitate and thus form the source of a sustainable competitive advantage.
The proposed instrument is the result of an attempt to fill a gap in the literature by providing a perceptual measure of e-commerce performance at the level of the firm. Despite some limitations discussed below, the steps taken to develop, refine, and confirm the instrument have resulted in a valid and reliable measure of firm-level e-commerce performance. Researchers can use the instrument presented in this paper to measure the performance of e-commerce operations in conjunction with objective measures (if available), or the instrument can be used by itself. One might add that the results reported in this paper complement the results reported by Zhu, Kraemer, Xu, and Dedrick [60]. The present study and their study report similar results in support of the use of perceptual firm-level measures, but rely on different theories (technology-organization-environment vs. resource-based view) and focus on different industries (financial services vs. retail).

The instrument has been designed for use by researchers wishing to use a survey data-collection methodology. The full 14-item measure provides an indication of overall e-commerce performance. Many conceptual models seek
Contribution to Practice
NOTES
REFERENCES
1. Adams, C.; Kapashi, N.; Neely, A.; and Marr, B. Managing the measures: Measuring eBusiness performance. White paper, Accenture Consulting/Cranfield University, 2001.
2. Agarwal, R., and Venkatesh, V. Assessing a firm's Web presence: A heuristic evaluation procedure for the measurement of usability. Information Systems Research, 13, 2 (June 2002), 168-186.
3. Agresti, A. Modelling patterns of agreement and disagreement. Statistical Methods in Medical Research, 1 (1992), 201-218.
4. Amit, R., and Schoemaker, P.J.H. Strategic assets and organizational rent. Strategic Management Journal, 14, 1 (1993), 33-46.
5. Armstrong, J.S., and Overton, T.S. Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14, 3 (1977), 396-402.
6. Barclay, D.; Higgins, C.; and Thompson, R. The partial least squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration. Technology Studies, 2, 2 (1995), 285-309.
7. Barney, J. Firm resources and sustained competitive advantage. Journal of Management, 17, 1 (1991), 99-120.
8. Barua, A.; Kriebel, C.H.; and Mukhopadhyay, T. Information technology and business value: An analytic and empirical investigation. Information Systems Research, 6, 1 (1995), 3-23.
9. Bharadwaj, A.S.; Bharadwaj, S.G.; and Konsynski, B.R. Information technology effects on firm performance as measured by Tobin's Q. Management Science, 45, 7 (1999), 1008-1024.
10. Campbell, D.T., and Fiske, D.W. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56 (1959), 81-105.
11. Capron, L., and Hulland, J. Redeployment of brands, sales forces, and general marketing management expertise following horizontal acquisitions: A resource-based view. Journal of Marketing, 63 (April 1999), 41-54.
12. Chin, W.; Gopal, A.; and Salisbury, W.D. Advancing the theory of adaptive structuration: The development of a scale to measure faithfulness of appropriation. Information Systems Research, 8, 4 (1997), 342-367.
13. Cook, C.; Heath, F.; and Thompson, R. A meta-analysis of response rates in Web- or Internet-based surveys. Educational and Psychological Measurement, 60, 6 (2000), 821-836.
14. Cook, T.D., and Campbell, D.T. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand-McNally, 1979.
15. Couper, M. Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64 (2000), 464-494.
16. Dehning, B.; Richardson, V.J.; Urbaczewski, A.; and Wells, J.D. Reexamining the value relevance of e-commerce initiatives. Journal of Management Information Systems, 21, 1 (summer 2004), 55-82.
17. Dewan, R.; Jing, B.; and Seidmann, A. Adoption of Internet-based product customization and pricing strategies. Journal of Management Information Systems, 17, 2 (2000), 9-28.
18. Dillman, D.A., and Bowker, D.K. The Web questionnaire challenge to survey methodologists. In U.-D. Reips and M. Bosnjak (eds.), Dimensions of Internet Science. Lengerich: Pabst Science, 2001, pp. 159-178.
19. Dommeyer, C., and Moriarty, E. Comparing two forms of an e-mail survey: Embedded vs. attached. International Journal of Market Research, 42, 1 (2000), 39-50.
20. Glaser, B., and Strauss, A. The Discovery of Grounded Theory. Chicago: Aldine, 1967.
21. Hair, J.F.; Anderson, R.E.; Tatham, R.L.; and Black, W.C. Multivariate Data Analysis. Upper Saddle River, NJ: Prentice-Hall, 1998.
22. Hoffman, D., and Novak, T. Marketing in hypermedia computer-mediated environments: Conceptual foundations. Journal of Marketing, 60 (1996), 50-68.
23. Jaworski, B.J., and Kohli, A.K. Market orientation: Antecedents and consequences. Journal of Marketing, 57 (July 1993), 53-70.
24. Kauffman, R.J., and Walden, E.A. Economics and electronic commerce: Survey and directions for research. International Journal of Electronic Commerce, 5, 4 (summer 2001), 5-116.
25. Kay, J. The international transferability of the firm's advantage. Business Strategy Review, 4 (1993), 17-37.
26. Kaye, B.K., and Johnson, T.J. Research methodology: Taming the cyber frontier: Techniques for improving online surveys. Social Science Computer Review, 17, 3 (1999), 323-337.
27. Keeney, R.L. The value of Internet commerce to the customer. Management Science, 45, 4 (1999), 533-542.
28. Kim, J.; Lee, J.; Han, K.; and Lee, M. Business as buildings: Metrics for architectural quality of Internet business. Information Systems Research, 13, 2 (June 2002), 205-223.
29. Kleist, V.F. An approach to evaluating e-business information systems projects. Information Systems Frontiers, 5, 3 (2003), 249-263.
30. Ko, D.; Kirsch, L.; and King, W. Antecedents of knowledge transfer from consultants to clients in enterprise system implementations. MIS Quarterly, 29, 1 (2005), 59-85.
31. Kohli, R.; Devaraj, S.; and Mahmood, M.A. Understanding determinants of online consumer satisfaction: A decision process perspective. Journal of Management Information Systems, 21, 1 (summer 2004), 115-135.
32. Koufaris, M. Applying the technology acceptance model and flow theory to online consumer behavior. Information Systems Research, 13, 3 (September 2002), 239-254.
33. Kumar, N.; Stern, L.W.; and Anderson, J.W. Conducting interorganizational research using key informants. Academy of Management Journal, 36 (1993), 1633-1651.
34. Lay, P. Beware of the cost benefit model for IS project evaluation. Journal of Systems Management, 36, 6 (1985), 30-35.
35. Lederer, A.; Mirchandani, D.A.; and Sims, K. The link between information strategy and electronic commerce. Journal of Organizational Computing and Electronic Commerce, 7, 1 (1997), 17-34.
36. Lederer, A.L.; Mirchandani, D.A.; and Sims, K. The search for strategic advantage from the World Wide Web. International Journal of Electronic Commerce, 5, 4 (summer 2001), 117-133.
37. Mata, F.J.; Fuerst, W.L.; and Barney, J.B. Information technology and sustained competitive advantage: A resource-based analysis. MIS Quarterly, 19, 4 (December 1995), 487-505.
38. McGrath, R.G.; MacMillan, I.C.; and Venkataraman, S. Defining and developing competence: A strategic process paradigm. Strategic Management Journal, 16 (1995), 251-275.
39. Miller, D., and Shamsie, J. The resource-based view of the firm in two environments: The Hollywood film studios from 1936 to 1965. Academy of Management Journal, 39, 3 (1996), 519-543.
40. Orli, R., and Tom, J. If it's worth more than it costs, buy it! Journal of Information Systems Management, 4, 3 (1987), 85-89.
41. Palmer, J.W. Web site usability, design, and performance metrics. Information Systems Research, 13, 2 (June 2002), 151-167.
42. Parkhe, A. "Messy" research, methodological predispositions, and theory development in international joint ventures. Academy of Management Review, 18, 2 (1993), 227-268.
43. Powell, T.C., and Dent-Micallef, A. Information technology as competitive advantage: The role of human, business, and technology resources. Strategic Management Journal, 18, 5 (1997), 375-405.
44. Prahalad, C.K., and Hamel, G. The core competence of the corporation. Harvard Business Review, 68, 3 (1990), 79-91.
45. Quinn, J.B., and Baily, M.N. Information technology: Increasing productivity in services. Academy of Management Executive, 8, 3 (1994), 28-51.
46. Schaefer, D.R., and Dillman, D.A. Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62, 3 (1998), 378-393.
47. Schuldt, B., and Totten, J. Electronic mail vs. mail survey response rates. Marketing Research, 6 (winter 1994), 36-39.
48. Sheehan, K.B., and McMillan, S.J. Response variation in e-mail surveys: An exploration. Journal of Advertising Research, 39, 4 (1999), 45-54.
49. Simon, H.A. The New Science of Management Decision. Englewood Cliffs, NJ: Prentice Hall, 1977.
50. Steinfield, C.; Bouwman, H.; and Adelaar, T. The dynamics of click-and-mortar electronic commerce: Opportunities and management strategies. International Journal of Electronic Commerce, 7, 1 (fall 2002), 93-119.
51. Straub, D. Validating instruments in MIS research. MIS Quarterly, 13, 2 (1989), 147-168.
52. Straub, D.; Rai, A.; and Klein, R. Measuring firm performance at the network level: A nomology of the business impact of digital supply networks. Journal of Management Information Systems, 21, 1 (summer 2004), 83-114.
53. Torkzadeh, G., and Dhillon, G. Measuring factors that influence the success of Internet commerce. Information Systems Research, 13, 2 (June 2002), 187-204.
54. Towler, A., and Dipboye, R. Development of a learning style orientation measure. Organizational Research Methods, 6 (2003), 216-235.
55. Vehovar, V.; Manfreda, K.L.; and Batagelj, Z. Sensitivity of electronic commerce measurement to the survey instrument. International Journal of Electronic Commerce, 6, 1 (fall 2001), 31-51.
56. Wade, M., and Hulland, J. Review: The resource-based view and information systems research: Review, extension, and suggestions for future research. MIS Quarterly, 28, 1 (March 2004), 107-142.
57. Wheeler, B.C. NEBIC: A dynamic capabilities theory for assessing Net-enablement. Information Systems Research, 13, 2 (June 2002), 125-146.
58. Zahra, S., and George, G. Wheeler's Net-enabled business innovation cycle and a strategic entrepreneurship perspective on the evolution of dynamic capabilities. Information Systems Research, 13, 2 (June 2002), 147-150.
59. Zhu, K., and Kraemer, K.L. E-commerce metrics for Net-enhanced organizations: Assessing the value of e-commerce to firm performance in the manufacturing sector. Information Systems Research, 13, 3 (September 2002), 275-295.
60. Zhu, K.; Kraemer, K.L.; Xu, S.; and Dedrick, J. Information technology payoff in e-business environments: An international perspective on value creation of e-business in the financial services industry. Journal of Management Information Systems, 21, 1 (summer 2004), 17-54.
61. Zhuang, Y., and Lederer, A.L. An instrument for measuring the business benefits of e-commerce retailing. International Journal of Electronic Commerce, 7, 3 (spring 2003), 65-99.
62. Zmud, R., and Boynton, A. Survey measures and instruments in MIS: Inventory and appraisals. Harvard Business School Research Colloquium (1991), 149-185.
Appendix

[The final questionnaire was garbled in extraction; only fragments are recoverable. Question stems included: "How would you rate the overall performance of your on-line operations?"; "How do you think your customers would rate your on-line operations?"; "How would you evaluate the return on investment for your on-line operations?"; "Compared with your main competitors, how would you rate ..."; "Assess the performance of your on-line operations compared to meeting [quality / service / overall / major-deadline] objectives"; and "To what extent have you been able to sustain the performance of your on-line operations?" (over time, short and longer term). Response anchors ranged from "not well" / "much worse" / "not good" to "very well" / "much better" / "very good," each with an n/a option.]