
Development and Validation of a Perceptual Instrument to Measure E-Commerce Performance
Author(s): Michael R. Wade and Saggi Nevo
Source: International Journal of Electronic Commerce, Vol. 10, No. 2 (Winter 2005/2006), pp. 123-146
Published by: Taylor & Francis, Ltd.
Stable URL: http://www.jstor.org/stable/27751186
Accessed: 11-03-2018 01:03 UTC


Taylor & Francis, Ltd. is collaborating with JSTOR to digitize, preserve and extend access to
International Journal of Electronic Commerce

This content downloaded from 120.188.93.85 on Sun, 11 Mar 2018 01:03:23 UTC
All use subject to http://about.jstor.org/terms
Development and Validation of a Perceptual Instrument to Measure E-Commerce Performance

Michael R. Wade and Saggi Nevo

ABSTRACT: This paper develops and validates an attitudinal scale that can measure the performance of e-commerce operations both in "pure play" Internet firms and in on-line components of multichannel firms. The measurement instrument is grounded in a resource-based view of the firm. The final instrument contains a 14-item perceptual measurement scale. It was tested with data collected from a sample of 595 managers responsible for e-commerce operations. Psychometric testing of the instrument showed adequate construct validity.

KEY WORDS AND PHRASES: E-commerce, organizational performance, reliability, resource-based view, scale development, validity.

The measurement of firm performance, whether at the organization or division level, has proven to be one of the more enduring topics in management research. Objective measures, such as return on assets (ROA), return on equity (ROE), return on investment (ROI), sales growth, and inventory turns, are used by many firms as an externally validated assessment of performance. However, objective measures of performance may not be readily available, or may miss important social and environmental influences and therefore not tell the complete story of organizational health. For example, objective data may not be able to easily measure the strength of corporate culture, customer or employee satisfaction, legal or political influences, or other organizational factors that some researchers claim are important intermediaries between capabilities/resources and firm performance (e.g., [8]). Taken in the aggregate, subjective or perceptual measures of firm performance, such as surveys of stakeholders (i.e., employees, customers, investors), can provide a broad indication of a company's health [45]. Provided that perceptual measures are valid and relatively easy to administer, they can be a valuable tool for a firm's managers [36, 49, 61].
The measurement of performance for firms engaging in e-commerce adds a further layer of complexity [24, 29].1 Objective measures of performance, at least in the area of financial performance, are often not available. Return on assets or investment may be of limited use when returns are negative for extended periods. For example, MSNBC.com, an on-line portal, made its first quarterly profit in the second quarter of 2004, eight years after it went live. Equity-based measures, such as share growth, market capitalization, or Tobin's Q [9], are not appropriate for the large majority of multichannel firms engaging in e-commerce activity. In many instances, objective measures may represent an incomplete (or unviable) option for evaluating e-commerce performance [29]. The problems of measurement become more acute in the case of the many multichannel firms that do not separate their on-line and off-line costs and revenues. In the absence of objective metrics, the use of perceptual measures
International Journal of Electronic Commerce / Winter 2005-6, Vol. 10, No. 2, pp. 123-146.
Copyright © 2006 M.E. Sharpe, Inc. All rights reserved.
1086-4415/2006 $9.50 + 0.00.


has greater importance, particularly in evaluating e-commerce initiatives [17, 24, 50].
This paper introduces and validates a perceptual scale to measure e-commerce performance. So far as is known, no one else has produced a validated perceptual scale for measuring e-commerce performance at the level of the firm.2 The instrument described in this paper is suitable for use by academics and practitioners alike to measure e-commerce performance as a complement to objective measures, or in situations where objective measures are not readily available. It can also be used in special circumstances, as when e-commerce affects nonmonetary factors like customer satisfaction and competitiveness. This measurement instrument is not restricted to on-line "pure play" firms (relatively few of which exist) but is designed for use by multichannel firms to measure the performance of their on-line operations. In sum, the instrument developed and validated in this paper can become an important component in the arsenal of mensural tools available to researchers and managers hoping to learn more about the performance of e-commerce initiatives.

Review of the Literature

The measurement of on-line variables has made a good deal of progress during the relatively short lifespan of the commercial Internet. This is an area of substantial interest to researchers and practicing managers. Some early researchers developed instruments to measure an assortment of benefits associated with e-commerce, such as browsing satisfaction and Web site effectiveness. These efforts were largely undertaken without the guidance of strong theory, however [61], and the instruments were too often not subjected to a rigorous assessment of psychometric properties. It was difficult, therefore, to determine the accuracy, reliability, and validity of the data that resulted from these measures. As time went on, researchers began to take more care to validate and incorporate theory into their measures. The study of e-commerce measurement developed into a multidisciplinary area, drawing on research from marketing, economics, psychology, strategy, and other fields [22, 32, 36, 56, 59].
Information Systems Research, the Journal of Management Information Systems, and the International Journal of Electronic Commerce have recently published works that explore or develop measures on a variety of Internet-related topics. Table 1 categorizes these papers, along with some other relevant works, into a framework that illustrates their contribution and fit. These papers employ a mix of theories, focal constructs, methods, and levels of analysis. Together, they form a robust system of metrics through which to understand firm performance in the context of electronic commerce. The present paper is also displayed in Table 1 to allow for a concise view of its placement and contribution relative to the literature.
The metrics developed in these papers include analyses of individual variables, such as satisfaction, turnover, and enjoyment; Web site variables, such as effectiveness and usability; and firm-level variables, such as performance, success, and competitive advantage. Among the theories utilized are transaction cost economics, the technology acceptance model, expectation-disconfirmation

Table 1. Summary of the Literature on the Measurement of Net-Enabled Organizations.

[The two-page table is garbled in this scan. It categorizes sources [2], [16], [28], [31], [32], [36], [41], [52], [53], [57], [58], [59], [60], [61], and the present paper by level of analysis (individual, organization, or network); contribution (theory development, theory extension, or measurement); measurement type (perceptual and/or objective); method (survey, interviews, event study, or expert/jury evaluation); theory or framework (e.g., the technology acceptance model, transaction cost economics, the resource-based view, dynamic capabilities, media richness, usability principles); and independent and dependent constructs (e.g., Web site usability, customer loyalty, e-commerce success, firm performance, market value).]

theory, the dynamic capabilities perspective, and the resource-based view of the firm. Units of analysis are explored at the individual, firm, industry, and network levels. While the studies noted in Table 1 provide a rich foundation for studying electronic commerce performance, a few gaps remain.
Few of the papers shown in Table 1 are aimed at measuring firm-level performance. Those that are tend to use publicly accessible, secondary sources of data [59]. As noted earlier, such data, while extremely valuable, may not present a complete picture of firm performance, especially in an e-commerce environment, or may not be readily available. Some studies utilized perceptual measures, but did not use the resulting data to measure performance at the level of the firm, at least not directly [31]. One recent study used perceptual data to measure firm-level performance in the financial services industry [60].
While very few firm-level perceptual measures of e-commerce performance are put forward in these papers, some researchers offer guidance on what characteristics such an instrument should possess. The fulfillment of objectives and the realization of targets, for example, were judged by Torkzadeh and Dhillon to be important considerations [53]. A comparative advantage over peers, in addition to absolute performance, was deemed relevant by Lederer, Mirchandani, and Sims [36]. Zahra and George recommended that measurement scales incorporate a set of firm-level variables [58]. Finally, a number of papers suggested that the guidance of strong theory was essential in the measurement development process [50, 61]. In sum, many of the papers argue for a multidimensional view of e-commerce performance. This paper builds upon these suggestions and recommendations.
The instrument developed in this paper addresses the aforementioned gaps as well as others in the current e-commerce performance-measurement literature. Previous work suggests that a combination of qualitative and quantitative measurement approaches provides the best overall indication of a firm's e-commerce performance (e.g., [29]). The instrument presented in this paper was developed using both approaches. The development, refinement, and confirmation of the instrument used semistructured interviews with informed participants and confirmatory data from a large sample survey. Despite the large amount of published high-quality work on electronic commerce measurement, it appears that no validated perceptual measure of e-commerce performance at the level of the firm has yet been put forward. The present paper addresses this gap by developing, refining, and confirming a multidimensional perceptual measurement instrument to evaluate e-commerce performance.

Resource-Based View of the Firm

The discussion that follows is conceptually grounded in the resource-based view of the firm (RBV). The RBV is an influential theory in the fields of strategy and marketing, and is growing in other areas, such as information systems (IS) and organizational behavior. It argues that firms possess resources, a subset of which enables them to achieve competitive advantage, and a subset of those leads to superior long-term performance [7]. This accounts for differences among

competing firms and explains how positive economic rents ensue under seemingly perfect competitive conditions [4, 7]. As noted by Wade and Hulland, resources that are valuable and rare, and whose benefits can be appropriated by the owning (or controlling) firm, confer a temporary competitive advantage [56]. This advantage can persist over a long period if the firm is able to protect its resources against imitation, transfer, or substitution, thus leading to a sustainable competitive advantage. For example, a firm's capability to learn quickly and apply that knowledge to its operations on an ongoing basis may provide it with a sustained competitive advantage over its close competitors [44]. The presence of valuable and rare resources transforms commodity-like inputs, such as e-commerce servers, Web programmers, and Web-development tools, into unique and immobile resources that can sustain a competitive advantage. Mata, Fuerst, and Barney described the RBV as follows:
The resource based view of the firm is based on two underlying assertions ... (1) that the resources and capabilities possessed by competing firms may differ (resource heterogeneity); and (2) that these differences may be long lasting (resource immobility). In this context, the concept of a firm's resources and capabilities are [sic] defined very broadly, and could certainly include the ability of a firm to conceive, implement, and exploit valuable IT applications. [37, p. 491]
The resource-based view is useful to the present research for several reasons. First, the effectiveness of RBV theory has been amply demonstrated. It is used extensively in e-commerce research, and especially in e-commerce measurement research [36, 53, 55, 56]. Second, empirical studies using the theory strongly support the resource-based view [38, 39]. Third, the firm level of analysis is applicable to this research. Fourth, the theory specifies quite precisely the necessary characteristics of a dependent variable, the focus of this paper (as will be explored further below). Fifth, the RBV has a long history of drawing on both subjective and objective measures.

Measurement of E-Commerce Performance

Proper validation of measurement instruments is a necessary condition of good empirical research. Poor instruments may lead to poor data, which in turn lead to poor conclusions. Many instruments are poor because they were not adequately validated, or were not validated at all [62]. Occasionally, good instruments that are modified and not subsequently revalidated become poor [51]. Poor measurement instruments hold a higher possibility of providing incorrect results. Unreliable instruments may be accurate in one setting and inaccurate in another. Instruments with poor construct validity may tap multiple meanings or systematically bias the results in one direction. Instruments with poor content validity may accurately address only part of the intended area of interest. In short, no measurement instrument is perfect, but proper validation can increase the odds of obtaining consistent and accurate findings. Drawing on Chin, Gopal, and Salisbury, this paper divides the instrument-validation process into three phases: instrument development, instrument refinement, and instrument confirmation [12].


Instrument Development

Instrument development concerns the origin of measurement items. Measurement items typically originate from two sources: the research literature and exploratory research [14]. The research literature often contains suggestions for appropriate items to be included in a scale. In the event that the literature is incomplete or unhelpful, or if the topic is substantially new, the literature may be augmented by exploratory research. This typically involves interviews with experts and practitioners in the chosen field who may suggest new scale items or modifications to existing ones. Semistructured interviews add depth to a study and lift some of the restrictions associated with purely quantitative methods [29, 61]. In the present study, measurement items were drawn both from the literature and from interviews with e-commerce managers. Table 2 shows the full set of measurement items (prior to psychometric testing) along with the source for each item.

Items Developed from the Literature

Two streams of literature were studied: the literature on e-commerce performance (reviewed earlier) and the literature on the resource-based view of the firm. The RBV literature provides some guidance for the development of a dependent variable [7]. As Wade and Hulland noted, the main dependent variable of the resource-based view of the firm, sustainable competitive advantage (SCA), should possess the following attributes: "(1) it should provide an assessment of performance, (2) it should incorporate a competitive assessment element, and (3) it should address the notion of performance over time" [56, p. 129].
Consistent with their recommendations, the construct developed for the present research possesses the aforementioned attributes. First, a dependent variable should measure absolute or direct performance (how well is the firm doing?). This element can be measured subjectively by using questions about firm success and performance. Two items were drawn from the literature [1, 25]. An additional item was added based on the interviews (a complete list of items is included in Table 2). Thus, three items were developed to measure the direct-performance construct. Another indicator of performance is how well a firm is doing compared to its goals or objectives. This element is important because it can provide a detailed and multifaceted description of success not necessarily bound by financial considerations. For example, performance can be compared to expected levels of service, user satisfaction, reliability, revenue, and so on [38]. Fourteen items were found to measure performance compared to objectives [1, 7, 38]. An additional item was added from the interviews. Thus, fifteen items were developed to measure performance compared to objectives.
The second element is comparative performance (how well is the firm doing compared to its main competitors?). While a firm may be succeeding (or failing) in absolute terms, it may still be lagging (or leading) the competitive field. This is essentially a benchmarking measure, where performance is compared to a core set of competitors or an industry norm. Thus, this element

Question (source, adapted from)

Overall e-commerce performance.
a. How would you evaluate the performance of your e-commerce strategy? [1]
b. How would you evaluate the return on investment for your on-line operations? [25]
c. How would you rate the overall performance of your on-line operations? Interviews

E-commerce performance compared to objectives.
Assess the performance of your on-line operations compared to expectations on each of the following dimensions:
a. Meeting budget objectives? [38]
b. Meeting reliability objectives? [38]
c. Meeting quality objectives? [38]
d. Meeting user satisfaction objectives? [38]
e. Meeting employee satisfaction objectives? [1]
f. Meeting investor satisfaction objectives? [1]
g. Meeting service objectives? [38]
h. Meeting major deadlines? [38]
i. Meeting efficiency objectives? [38]
j. Meeting employee skill objectives? [1]
k. Meeting staffing objectives? [38]
l. Meeting technology infrastructure objectives? [1]
m. Meeting cost objectives? [38]
n. Meeting revenue objectives? Interviews
o. Meeting overall objectives? [38]

E-commerce performance compared to main competitors.
a. Compared with your main competitors, how would you rate the performance of your on-line operations? [34, 35]
b. How do you think your customers would rate your on-line operations compared to those of your main competitors? [40]

E-commerce performance over time.
a. To what extent have you been able to sustain the performance of your on-line operations over the short term (< 1 year)? Interviews
b. To what extent have you been able to sustain the performance of your on-line operations over the longer term (2-5 years)? Interviews

Table 2. Measurement Items Drawn from Research Literature and Interviews.
Note: All items were measured using a 5-point Likert scale anchored on each end by a relevant anchor.
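For illustration, responses to 5-point Likert items like these are commonly averaged into a composite score per respondent. A minimal sketch follows; the item keys are hypothetical shorthand, not the instrument's actual wording:

```python
# Averaging 5-point Likert responses into a composite e-commerce
# performance score. Item keys are hypothetical shorthand, not the
# validated instrument's wording.

def composite_score(responses):
    """Mean of Likert responses; each value must lie in 1-5."""
    values = list(responses.values())
    if not values:
        raise ValueError("no responses")
    for v in values:
        if not 1 <= v <= 5:
            raise ValueError(f"Likert response out of range: {v}")
    return sum(values) / len(values)

respondent = {
    "overall_performance": 4,
    "budget_objectives": 3,
    "reliability_objectives": 5,
    "performance_vs_competitors": 4,
}
print(composite_score(respondent))  # 4.0
```
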

seeks to tap an area of performance that is difficult to determine using measures of direct performance alone. Two items were found to measure performance compared to competitors [35, 40].
The third and final element of a dependent variable, as suggested by Wade and Hulland [56], is that performance is persistent over successive periods, not a one-time occurrence (how well is the firm doing this period compared to previous periods?). Sustainability of performance is an important aspect of the RBV, yet one that has traditionally been underexplored [11]. This is explained in part by the fact that multiperiod data are very time consuming and


expensive to collect, particularly perceptual data. No questions were found in the literature to capture the sustainability component of e-commerce performance. Two questions were developed based on the research interviews (described below) to measure this performance dimension.
Two additional factors related to the items are worth noting. First, while the RBV suggests that the elements described above should form part of a dependent variable, the theory does not advocate that each element be considered separately or distinctly from the others [56]. The three elements described above are not separate constructs, but dimensions of a single performance construct. Second, with the two exceptions of Adams, Kapashi, Neely, and Marr and of Kumar, Stern, and Anderson [1, 33], all the measurement items taken from the research literature originally referred to organizational performance in general rather than e-commerce performance in particular.

Items Developed from Executive Interviews

The second source for the scale items was a set of semistructured interviews with 52 senior managers from 42 firms. Each manager worked in the part of the firm responsible for e-commerce operations. These executives were expected to have knowledge about their firm's e-commerce performance. Zhuang and Lederer iterated a similar assumption, and further claimed that using data from e-commerce managers would increase the quality of the data used in the survey [61]. About two-thirds of the interviewees were from the IS area, while another quarter came from the marketing area. The remainder were members of the general management team. The interviews were conducted between April and June 2001. Sixty-five percent of the interviews were conducted in person, the remainder by phone. The duration of each interview averaged 50 minutes.
The sample was drawn from public and private sources and included large and small firms in industrial, consumer, and service industries at different geographical locations [20]. By obtaining a sample that reflected a diverse set of respondent organizations, the researcher was able to obtain a rich set of ideas and insights [42]. The goal of the sampling process was to target a wide range of firms conducting e-commerce (different sizes, industries, geographic regions, etc.). It was felt that by tapping a wide range of experiences, the research would gather a near-exhaustive set of measures from which to draw the final validated scale. Four of the 22 items originated from the executive interviews. Table 2 provides a full list of items with sources.

Instrument Refinement

Once measurement items had been developed, the instrument refinement phase could begin. The 22-item instrument was put through a refinement process consisting of two parts: a review by domain experts and researchers, and a pilot test with members of the sampling frame.


First, the survey was reviewed by three domain experts (practicing managers in the area of e-commerce) known to the first author. The survey was also shown to two researchers familiar with the subject area as well as with instrument-development methods in general. At this stage, a number of changes were made to the wording and format of the items. In addition, it was suggested that four of the items be rationalized into two. These were the items on cost and budget objectives (combined to "budget"), and deadline and efficiency objectives (combined to "deadlines"). The resulting 20-item scale was sent to 100 randomly chosen members of the sampling frame. Thirty-one responses were received. While a high ratio of respondents to items (5:1 or higher) is recommended and is often used during scale development, a lower ratio is often used for item reduction and scale refinement [30, 54].
An exploratory factor analysis on the 20 items in the measurement scale resulted in a two-factor solution (see Table 3). The eigenvalue of the first factor was 11.4, and it was able to explain 81 percent of the variance in the dataset. The second factor was below the normal cut-off (i.e., eigenvalue > 1) and explained 15 percent of the variance in the data. Three items were cross-loaded on the two factors; thus, the meaning of these items, related to employee objectives, was shared between both factors. In order to rationalize the scale and avoid multicollinearity, the cross-loaded items were removed. The second factor contained three items once the cross-loaded items were removed. The substance of these items, relating to strategy, investor satisfaction, and infrastructure, did not appear to tap a single meaning. Because the second factor contained only three incongruent items, had a low eigenvalue, explained only a small proportion of the variance in the data, and was not hypothesized or justified by theory, it was dropped from the scale. Thus, a single-factor measurement scale for e-commerce performance remained, with 14 items, as shown in Table 3. Further confirmatory tests were conducted on the instrument with a much larger data set, as described below.
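As a sketch, the extraction step described above (principal components of the item correlation matrix, with factors retained under the eigenvalue > 1 cut-off) can be reproduced on synthetic data; nothing here uses the paper's actual pilot responses:

```python
import numpy as np

# Principal-components factor extraction: eigendecompose the item
# correlation matrix, then retain factors under the Kaiser criterion
# (eigenvalue > 1). Data are synthetic, mimicking one dominant factor.

rng = np.random.default_rng(0)
n_respondents, n_items = 31, 20
common = rng.normal(size=(n_respondents, 1))          # shared latent factor
items = common + 0.5 * rng.normal(size=(n_respondents, n_items))

corr = np.corrcoef(items, rowvar=False)               # 20 x 20 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]     # eigenvalues, descending

explained = eigvals / eigvals.sum()                   # proportion of variance
n_retained = int((eigvals > 1.0).sum())               # Kaiser criterion
print(n_retained, round(float(explained[0]), 2))
```

With a single strong common factor, the first eigenvalue dominates and only one factor survives the cut-off, mirroring the single-factor outcome reported above.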

Instrument Confirmation

Once developed, the instrument was put through a process of instrument confirmation in which it was rigorously tested using data from a large sample (595 cases). At this point, the instrument was subjected to comprehensive tests of reliability and validity. Drawing on Cook and Campbell, reliability is defined as the extent to which a measure is free from random error components [14]. High reliability is understood to mean that repeated employment of the measure would obtain consistent results. In turn, validity is the extent to which a measure reflects only the desired construct without contamination from other systematically varying constructs. Thus, validity requires reliability as a prerequisite. The components of validity include content and construct validity; the latter, in turn, can be divided into convergent and discriminant validity [14]. Each of these components will be addressed below.
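Reliability of this kind is conventionally quantified with Cronbach's alpha. The formula below is the standard one; the data are synthetic, and the paper does not state that this exact computation was used:

```python
import numpy as np

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance
# of the summed scale). Values near 1 indicate high internal consistency.

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scale responses."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 595 x 14 response matrix driven by one latent trait,
# echoing the paper's sample size and final item count.
rng = np.random.default_rng(1)
trait = rng.normal(size=(595, 1))
responses = trait + 0.6 * rng.normal(size=(595, 14))
print(round(cronbach_alpha(responses), 2))
```
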
Once the items were developed and refined, the instrument was tested on
a large sample of e-commerce managers. The instrument was used as part of
a study on the impact of IS on e-commerce performance. The methodology


Measurement item                                        Factor 1   Factor 2
Eigenvalue                                              11.4       0.91
Variance explained                                      81%        15%
Overall performance of on-line operations               0.946
Effectiveness of strategy                                          0.812
Return on financial assets                              0.947
Meeting budget objectives                               0.909
Meeting reliability objectives                          0.868
Meeting quality objectives                              0.842
Meeting user satisfaction objectives                    0.848
Meeting employee satisfaction objectives                0.488      0.589
Meeting investor satisfaction objectives                           0.925
Meeting service objectives                              0.855
Meeting major deadlines                                 0.890
Meeting employee skills objectives                      0.412      0.684
Meeting staffing objectives                             0.523      0.625
Meeting infrastructure objectives                                  0.701
Meeting revenue objectives                              0.866
Meeting overall objectives                              0.973
Success compared to competitors 1                       0.928
Success compared to competitors 2                       0.934
Sustaining of performance over short term (< 1 yr)      0.903
Sustaining of performance over longer term (2-5 yrs)    0.924

Table 3. Principal Components Factor Analysis Component Matrix.
Notes: Values shown after varimax rotation. Values lower than 0.4 removed.
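The varimax rotation noted under Table 3 is a standard orthogonal rotation that drives each item's loading toward a single factor. A generic sketch (not the authors' code; the loading matrix is synthetic):

```python
import numpy as np

# Varimax rotation (Kaiser): find an orthogonal rotation R maximizing the
# variance of squared loadings within each column. SVD-based iteration.

def varimax(loadings, tol=1e-6, max_iter=100):
    n, k = loadings.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ R
        # Gradient of the varimax criterion with respect to R
        grad = loadings.T @ (rotated ** 3
                             - rotated @ np.diag((rotated ** 2).sum(axis=0)) / n)
        u, s, vt = np.linalg.svd(grad)
        R = u @ vt
        d_old, d = d, s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return loadings @ R

unrotated = np.array([[0.9, 0.3], [0.8, 0.4], [0.7, 0.3],
                      [0.3, 0.8], [0.2, 0.9], [0.4, 0.7]])
rotated = varimax(unrotated)
# Orthogonal rotation preserves total squared loadings (communalities).
print(np.isclose((rotated ** 2).sum(), (unrotated ** 2).sum()))
```

Because the rotation is orthogonal, item communalities are unchanged; only how each item's variance is apportioned across the factors shifts.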

employed was an on-line survey that closely followed the guidelines set out by Schaefer and Dillman and by Cook, Heath, and Thompson [13, 46]. The suitability and efficacy of Web survey methodologies deserve some discussion. Despite the relative newness of the Internet and the Web, on-line survey methodologies have been extensively researched in the social sciences [18, 26, 55]. The benefits of on-line surveys include quick turnaround for results, respondent accuracy equivalent to or better than other survey methodologies, interactive capabilities, distribution flexibility, low incremental cost, and a smaller possibility of errors through multiple data entry [15, 47].
There are two main areas of concern when using the Web as the main method of data collection: coverage bias and nonresponse bias. Coverage bias concerns the fact that some members of the population of interest have no chance of being sampled. In an on-line survey, those with Internet access will be oversampled, while all others are ignored. For this reason, mail and phone surveys are used more often in survey research, despite the advantages of on-line methods. In this case, it was assumed that respondents would have access to both e-mail and the Web, and that using this medium to distribute the questionnaire would not result in excessive coverage bias. This assumption was based on the fact that target respondents were senior managers in e-commerce retailers or in e-commerce divisions of multichannel retailers. As a precaution, respondents were given the option of receiving the survey instrument by regular mail. Empirical studies have demonstrated that a combination of Web-based and regular mail-based questionnaires did not produce erroneous estimates [55]. However, none of the survey respondents took advantage of this option.
Nonresponse bias is concerned with accounting for possible systematic (nonrandom) differences between respondents and nonrespondents. Since on-line surveys typically have lower response rates than mail or telephone surveys, nonresponse bias is a cause for concern [19, 48]. One comprehensive study considered anything over 20 percent a good response rate for an e-mail survey design [44]. Another study of response rates for e-mail surveys considered a response rate of 24 percent to be normal [48], while a third regarded a response rate of 20-30 percent as typical for Web survey designs [15]. Nevertheless, results of surveys with lower response rates (11 percent) have also been considered acceptable [61]. While the response rate to the present survey fell within the acceptable range, the potential for a bias was recognized, and in consequence several tests were conducted to ensure the reliability of the data (as discussed below).
The sample for the on-line survey was drawn randomly from a population of North American publicly traded retail firms, using a systematic sampling technique. The sampling frame consisted of public and private financial databases, including EDGAR (SEC), SEDAR (Canada), COMPUSTAT, Hoover's Masterlist, Bloomberg Directory, LEXIS-NEXIS, and Blue Book on-line. Individuals identified as senior managers in the relevant areas (CEO, COO, VP e-commerce, VP information systems, VP marketing, etc.) were contacted and asked about on-line operations.
A key-informant approach was followed [33]. The respondents were asked to answer questions from the firm's perspective (e.g., firm-level success, importance of specified resources to the firm) and not about themselves personally (with the exception of two demographic questions). Similar methodological approaches have been employed by other researchers [23, 43, 50]. In order to check for key-respondent bias, multiple responses received from a subset of 45 firms were checked for consistency and reliability using a test of inter-rater reliability [3]. No biases were found.
E-mail messages were sent to a total of 3,316 potential respondents inviting them to participate in the survey. Since 593 of these messages "bounced" (17.9 percent) and were assumed not to have reached the intended recipients, 2,723 invitations to participate in the project were (assumed to have been) received by potential respondents. Following one reminder message sent two weeks after the initial e-mailing, a total of 625 surveys were returned (23 percent response rate). Thirty of the returned surveys were found to be incomplete and were dropped from the total response pool, leaving 595 usable responses. The final response rate of 23 percent is in line with other on-line survey methodologies [15, 46, 47]. A test for differences between the first and last 10 percent of respondents was conducted to test for the possibility of nonresponse bias. The test assumed that late respondents were more similar to nonrespondents than to their early counterparts. The results showed no significant differences among responses, and thus provided some assurance against nonresponse bias [5].
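The early-versus-late respondent comparison described above can be sketched as a simple two-sample test. The sketch below applies Welch's t statistic to hypothetical item scores; the data and the `welch_t` helper are illustrative only, not the authors' actual procedure.

```python
import statistics
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    se = sqrt(va / len(a) + vb / len(b))
    return (ma - mb) / se

# Hypothetical 1-5 item scores for the first and last 10 percent of respondents.
early = [3, 4, 3, 2, 4, 3, 3, 5, 2, 3]
late  = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]

t = welch_t(early, late)
print(round(t, 3))  # → 0.557 (small |t| suggests no early/late difference)
```

A |t| value well below conventional critical values (roughly 2 for samples of this size) would, as in the study, provide some assurance against nonresponse bias.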

Construct Validity

Construct validity is the extent to which a variable accurately reflects or measures the construct of interest. It is normally understood to consist of two elements: convergent and discriminant validity. Both types of validity can be measured and tested empirically [14, 21].
Convergent validity. Convergent validity is the degree to which two or more attempts to measure the same construct agree. To establish convergent validity, it is necessary to show that measures that should be related are, in reality, related. Table 4 shows the results of tests for convergent validity of the measurement scale. Item reliability, as measured by the individual loadings of each item on the construct, is universally high. The square of the loadings is greater than the 0.7 threshold for all items. Composite reliability, as measured by the internal consistency coefficient (0.9841) and Cronbach's alpha (0.9824), is above the standard benchmark level of 0.7. Finally, the average variance extracted by the e-commerce performance construct from the measurement items is 0.8157, well above the acceptable minimum of 0.5. These findings suggest that the measurement scale exhibits adequate convergent validity [14].

In addition, a principal-components factor analysis was conducted on the full dataset of 595 respondents (recall that an earlier factor analysis was conducted using data from the 31 pilot-study respondents). The factor analysis resulted in a single-factor solution, with an eigenvalue of 12.3 and variance explained of 85 percent. All items had factor loadings in excess of 0.7, with most greater than 0.9.
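The composite-reliability and AVE figures can be reproduced directly from the standardized loadings reported in Table 4. A minimal sketch follows (Cronbach's alpha is omitted because it requires the raw item covariances, not just the loadings):

```python
# Composite reliability (Fornell's internal consistency coefficient) and
# average variance extracted (AVE), computed from standardized item loadings.
loadings = [0.9455, 0.9469, 0.9094, 0.8680, 0.8419, 0.8469, 0.8544,
            0.8905, 0.8673, 0.9725, 0.9276, 0.9332, 0.9034, 0.9239]

sum_l = sum(loadings)                      # sum of loadings (12.6314)
sum_l2 = sum(l * l for l in loadings)      # sum of squared loadings
error = sum(1 - l * l for l in loadings)   # sum of residual (error) variances

composite_reliability = sum_l ** 2 / (sum_l ** 2 + error)
ave = sum_l2 / len(loadings)

print(round(composite_reliability, 4))  # → 0.9841
print(round(ave, 4))                    # → 0.8156 (paper reports 0.8157, from unrounded loadings)
```

Both values agree with Table 4 to rounding, which is a useful sanity check on the reported statistics.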
Discriminant validity. Convergent validity is concerned with the extent to which measures of constructs that should be related to one another are, in fact, related. By contrast, discriminant validity, the other element of construct validity, is concerned with the extent to which measures of constructs that should not be related to one another are, in fact, not related. In other words, it tests whether the measures discriminate among one another along expected lines [14, 51].

In order to test discriminant validity, it is necessary to introduce other constructs into the testing process. Without additional constructs, it is not possible to determine what the instrument is discriminant from. Three additional constructs are introduced here for this purpose. They are presented solely to support the validation of the e-commerce performance instrument. Since these additional constructs have been individually validated in a separate process, they are not, in and of themselves, being evaluated in this paper.
In the context of a study on e-commerce retailing, data were collected on four constructs: three independent constructs and one dependent construct (the e-commerce performance instrument being validated in this paper). For purposes of clarification, the conceptual model used in the study is shown in Figure 1. The independent constructs were IS adaptive capabilities, IS systems design capabilities, and IS management capabilities. These measures were developed through a separate instrument-validation process and will not be discussed further here.
Table 4. Tests for Convergent Validity

Measurement item                                        Loading   Squared loading   Residual variance (error)
Overall performance of on-line operations               0.9455    0.8940            0.1060
Return on financial assets                              0.9469    0.8966            0.1034
Meeting budget objectives                               0.9094    0.8270            0.1730
Meeting reliability objectives                          0.8680    0.7534            0.2466
Meeting quality objectives                              0.8419    0.7088            0.2913
Meeting user satisfaction objectives                    0.8469    0.7172            0.2828
Meeting service objectives                              0.8544    0.7230            0.2700
Meeting major deadlines                                 0.8905    0.7930            0.2070
Meeting revenue deadlines                               0.8673    0.7522            0.2478
Meeting overall objectives                              0.9725    0.9458            0.0543
Success compared to competitors 1                       0.9276    0.8604            0.1395
Success compared to competitors 2                       0.9332    0.8709            0.1291
Sustaining of performance over short term (<1 yr)       0.9034    0.8161            0.1839
Sustaining of performance over longer term (2-5 yrs)    0.9239    0.8536            0.1465
Sum                                                     12.6314   11.4190           2.5812

Fornell's internal consistency coefficient: 0.9841
Cronbach's alpha (standardized): 0.9824
Average variance extracted (AVE): 0.8157
Square root of average variance extracted: 0.9031

Figure 1. Sample Model

A matrix of loadings and cross-loadings (see Table 5) is a robust test of discriminant validity recommended by Barclay, Higgins, and Thompson [6]. The matrix shows how each of the measures loads on all of the model's constructs by correlating the construct factor scores with actual data values on a case-by-case basis. To indicate good discriminant validity, items should load highly on the intended construct but at the same time not load highly on other constructs [6].

Table 5. Matrix of Loadings and Cross-Loadings

Constructs/items                IS Management   Systems Design   IS Adaptive   On-line Performance
IS Management
  IS technical skills               0.82            0.24             0.28            0.23
  IS infrastructure                 0.84            0.28             0.32            0.12
  IS cost effectiveness             0.81            0.41             0.21            0.27
  Top management commitment         0.73            0.28             0.15            0.14
  IS management                     0.69            0.04             0.31            0.32
  Data security                     0.84            0.16             0.23            0.18
Systems Design
  IS development                    0.28            0.90             0.31            0.22
  Systems integration               0.18            0.85             0.17            0.15
IS Adaptive
  Opportunity recognition           0.28            0.29             0.91            0.67
  Market responsiveness             0.11            0.12             0.92            0.43
  IS planning                       0.20            0.33             0.83            0.44
  IS business partnerships          0.37            0.21             0.93            0.38
  External relations                0.22            0.31             0.93            0.38
  Collaboration support             0.19            0.08             0.89            0.52
  Internal intelligence             0.37            0.17             0.93            0.41
E-commerce Success
  Meet deadlines                    0.39            0.03             0.48            0.89
  Meet budget objectives            0.23            0.12             0.46            0.91
  Meet overall objectives           0.27            0.14             0.51            0.97
  Meet quality objectives           0.31            0.09             0.51            0.84
  Meet reliability objectives       0.32            0.11             0.45            0.87
  Meet revenue objectives           0.40            0.07             0.57            0.87
  Meet service objectives           0.25            0.05             0.47            0.86
  Meet user satisfaction objectives 0.36            0.10             0.41            0.85
  Overall performance               0.25            0.24             0.49            0.95
  Overall ROI                       0.34            0.21             0.38            0.95
  Comparison 1                      0.29            0.12             0.51            0.93
  Comparison 2                      0.26            0.12             0.53            0.93
  Sustainability 1                  0.30            0.25             0.57            0.92
  Sustainability 2                  0.30            0.24             0.58            0.90

Note: All correlations significant at the 0.01 level.

Table 5 shows that the e-commerce performance items load highly on the intended construct but do not load as highly on the other constructs. In addition, items from the other constructs in the model do not load as highly on the e-commerce performance construct as the intended items do. This finding suggests that the scale exhibits adequate discriminant validity. Together, the convergent and discriminant validities confirm that the instrument exhibits satisfactory construct validity [14].
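A loading/cross-loading matrix of this kind can be generated by correlating each item with each construct's composite score. The sketch below uses simulated data for two hypothetical constructs, and the construct scores are simple item means rather than PLS factor scores, so it illustrates the logic only.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
n = 100
# Two hypothetical latent constructs, two items each: item = signal + noise.
f1 = [random.gauss(0, 1) for _ in range(n)]
f2 = [random.gauss(0, 1) for _ in range(n)]
items = {
    "perf_1":  [a + random.gauss(0, 0.4) for a in f1],
    "perf_2":  [a + random.gauss(0, 0.4) for a in f1],
    "adapt_1": [b + random.gauss(0, 0.4) for b in f2],
    "adapt_2": [b + random.gauss(0, 0.4) for b in f2],
}
# Composite construct scores: mean of the items assigned to each construct.
perf = [(p1 + p2) / 2 for p1, p2 in zip(items["perf_1"], items["perf_2"])]
adapt = [(a1 + a2) / 2 for a1, a2 in zip(items["adapt_1"], items["adapt_2"])]

# Each row of the printed matrix should show a high loading on the item's own
# construct and a low cross-loading on the other construct.
for name, vals in items.items():
    print(f"{name}: perf={pearson(vals, perf):.2f}  adapt={pearson(vals, adapt):.2f}")
```

With real survey data, the item columns would be the 29 measures of Table 5 and the construct scores would come from the PLS estimation.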


Summary of Findings

Table 6 shows descriptive statistics for each of the item measures taken from the full dataset. The mean value for each of the items was close to 3 (an average of 3.12 on a 5-point Likert scale), while the median value for each item was 3. Thus, for benchmarking purposes, values above 3 indicate above-average evaluations, while those below 3 indicate below-average evaluations for all items in the instrument. The average standard deviation of the items was 1.1, with no single item having a standard deviation greater than 1.3 or less than 1. Measurements of skewness indicated that responses were normally distributed around the mean (all skewness measures were between -0.3 and 0.1).
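The per-item statistics reported in Table 6 are straightforward to compute. A minimal sketch on hypothetical 5-point responses follows; the `describe` helper and the data are illustrative, and the skewness estimate is the simple (biased) moment ratio.

```python
import statistics
from math import sqrt

def describe(scores):
    """Mean, standard error of the mean, median, and skewness of item scores."""
    n = len(scores)
    m = statistics.fmean(scores)
    sd = statistics.stdev(scores)          # sample standard deviation
    se = sd / sqrt(n)                      # standard error of the mean
    mdn = statistics.median(scores)
    skew = (sum((x - m) ** 3 for x in scores) / n) / sd ** 3
    return m, se, mdn, skew

# Hypothetical 5-point Likert responses for one scale item.
scores = [1, 2, 3, 3, 3, 4, 4, 5, 3, 2, 4, 3]
m, se, mdn, skew = describe(scores)
print(round(m, 2), mdn)  # → 3.08 3.0
```

A mean above 3, as here, would be read as an above-average evaluation under the benchmarking convention described above.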
In summary, measurement items were drawn from the relevant research
literature and augmented by interviews with 52 e-commerce managers. This
process resulted in a 22-item scale. The scale was then reduced to 14 items
through a review by domain experts and a process of exploratory factor analysis
on pilot data. Once developed, the scale was tested using survey data from
595 e-commerce managers. Results of various statistical tests of convergent
and discriminant validity suggested that the measurement was reliable and
valid, and exhibited strong psychometric properties. The exact wording of
the final items is shown in the Appendix.

Future Research and Limitations

The perceptual measure presented in this paper facilitates the administration of the instrument at the organization level and the assessment of various e-commerce performance dimensions. As yet, however, the measure has been tested in only one context, in one time period, and with one dataset. Researchers are encouraged to utilize the instrument with new datasets in other contexts. For example, the measure provides a general indication of performance for e-commerce retailers. Future research could expand the scope to include data related to industry, geography, functional area, and so on, in order for the instrument to be useful for more individualized purposes.
To enhance the external validity of the proposed instrument, future studies may use this perceptual e-commerce performance measure in tandem with other measures and attempt to triangulate results. Companion measures utilizing objective data could be developed to complement the instrument and form a multitrait-multimethod (MTMM) matrix [10]. While difficult to obtain in practice, better objective measures of e-commerce performance are clearly required.
This 14-item measure may be further refined into a more parsimonious measure of e-commerce performance at the firm level. In particular, steps might be taken to consolidate some of the items comprising the e-commerce performance compared to objectives dimension of the construct. A smaller number of items may enhance the usability of the instrument. Great care must be taken, however, not to damage the content validity of the proposed instrument.
Table 6. Scale Item Descriptive Statistics

Scale item       M      SE      Mdn
Overall 1        3.3    0.052   3
Overall 2        2.89   0.053   3
Budget           2.61   0.053   3
Reliability      3.19   0.052   3
Quality          3.4    0.052   3
Satisfaction     3.42   0.05    3
Service          3.37   0.051   3
Deadlines        3.05   0.057   3
Revenue          2.77   0.055   3
Objectives       3.09   0.05    3
Competitors 1    3.3    0.05    3
Competitors 2    3.18   0.051   3
Sustaining 1     2.96   0.054   3
Sustaining 2     3.16   0.052   3
ALL              3.12   0.047   3

Notes: N = 595. All items were measured on 5-point scales (Min = 1 and Max = 5 for every item). Item standard deviations ranged from 1.105 to 1.326, and skewness values ranged from -0.288 to 0.135 (skewness SE ≈ 0.104 for each item).

While the items in the measurement scale are worded in such a way as to target firms engaged in e-commerce, the instrument may also be validated in other contexts. For example, it may be worthwhile to test whether the reliability and validity of the instrument hold in specific contexts, such as for the evaluation of ERP or CRM systems, or for general organizational performance.

Discussion and Conclusions

The primary aim of this study was to develop and validate a perceptual measurement instrument of e-commerce performance at the level of the firm. A multiphase process of measurement development, refinement, and confirmation was used in the development of a 14-item scale. The measure is conceptually grounded in the resource-based view (RBV) of the firm and may be used by practitioners and academics alike to evaluate e-commerce performance. This paper represents an initial step toward being able to reliably measure the performance of firms engaged in e-commerce, whether on a full-time basis or as part of a multichannel strategy. Clearly, more work is necessary to establish the generalizability of the instrument.

Contribution to Research

Unlike many e-commerce measurement efforts in the past, this study provides an instrument with strong ties to theory. The study largely supports, and is supported by, the resource-based view of the firm. Although they do not exist as separate and distinct factors in the model, RBV-inspired elements, such as absolute (direct) performance, performance compared to objectives, comparative (i.e., relative to competition) performance, and sustained performance, are all elements of the instrument. The RBV is suitable for the analysis of e-commerce performance because the tools and processes used to develop e-commerce infrastructure and operations are available to all firms, yet some manage to transform them into unique capabilities that are hard to imitate and thus form the source of a sustainable competitive advantage.
The proposed instrument is the result of an attempt to fill a gap in the literature by providing a perceptual measure of e-commerce performance at the level of the firm. Despite some limitations discussed above, the steps taken to develop, refine, and confirm the instrument have resulted in a valid and reliable measure of firm-level e-commerce performance. Researchers can use the instrument presented in this paper to measure the performance of e-commerce operations in conjunction with objective measures (if available), or the instrument can be used by itself. One might add that the results reported in this paper complement the results reported by Zhu, Kraemer, Xu, and Dedrick [60]. The present study and their study report similar results in support of the use of perceptual firm-level measures but rely on different theories (technology-organization-environment vs. resource-based view) and focus on different industries (financial services vs. retail).
The instrument has been designed for use by researchers wishing to use a survey data-collection methodology. The full 14-item measure provides an indication of overall e-commerce performance. Many conceptual models seek to explain and predict the performance of e-commerce operations. The instrument presented in this paper can be used to represent the dependent construct in these models. While this research draws heavily on the resource-based view of the firm, there is no reason why the instrument cannot be used by researchers employing other theories (or, indeed, by practicing managers or others using no theory at all). Researchers can experiment with independent constructs to better understand the antecedents of successful e-commerce.

Researchers can also use the instrument to pinpoint specific areas of strength and weakness within an organization's e-commerce strategy. Focus can then be placed on those specific factors. Used in this way, the instrument can act as a filter to determine the most fruitful topics for continuing research and analysis, perhaps using more intensive methodologies like interviews or case studies.

Contribution to Practice

A key contribution of the paper is that it provides a validated instrument that is robust and relatively easy to administer. A firm may test its own e-commerce performance using the instrument by administering the measurement items to its own managers. The instrument is able to provide more than an indication of overall e-commerce performance. It is also able to point a firm's management team to sources of potential weakness. If, for example, a firm scores low on meeting user-satisfaction objectives, then management knows that one way to perform better in its e-commerce operations is to constructively address that dimension. The list of 14 items allows a firm to identify its sources of strength and weakness and to target the elements that require the most attention.
This work also contributes by enabling practitioners to measure firm-level e-commerce performance in situations where direct, objective financial measures are not readily available, or where it is important to measure nonmonetary elements of e-commerce success. The instrument allows for the inclusion of nonfinancial benefits, such as customer satisfaction and competitiveness, thus expanding the dimensions along which e-commerce performance can be measured. Ideally, firms could employ this measure in conjunction with other measures, perhaps as part of a performance scorecard.

The instrument can further be useful when applied longitudinally. Firms can use the scale to evaluate performance at consecutive times. Scores from past periods can then be compared to scores from the current period in order to build a performance trend-line. By administering the instrument over successive periods, a firm can determine whether it is moving ahead toward the establishment of strong e-commerce capabilities, or whether it is falling behind. Finally, the instrument is not limited to "pure play" firms, for it can also be used by multichannel firms to assess the performance of their on-line components.
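As an illustration of the longitudinal use just described, the sketch below tracks a hypothetical firm's mean 14-item scale score over four quarters; the period labels and score values are invented.

```python
# Hypothetical longitudinal use of the 14-item scale: each value is the mean
# of the 14 item scores (1-5) reported by a firm's managers in that quarter.
quarterly_scores = {"2005Q1": 2.8, "2005Q2": 3.0, "2005Q3": 3.3, "2005Q4": 3.2}

periods = list(quarterly_scores)
# Period-over-period changes form the performance trend-line.
changes = [round(quarterly_scores[b] - quarterly_scores[a], 2)
           for a, b in zip(periods, periods[1:])]
trend = ("improving" if quarterly_scores[periods[-1]] > quarterly_scores[periods[0]]
         else "flat or declining")
print(changes, trend)  # → [0.2, 0.3, -0.1] improving
```

In practice the quarterly values would come from re-administering the instrument to the same managerial respondents each period.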
In sum, the measurement scale presented in this paper is a reliable and
valid perceptual indication of an organization's e-commerce performance.


NOTES

1. For the purposes of this paper, e-commerce is taken to mean transactional commerce occurring over the Internet.
2. In a comprehensive evaluation of e-business measurement techniques, Kleist also notes a lack of quantitative and qualitative success-oriented dependent variables [29].

REFERENCES

1. Adams, C.; Kapashi, N.; Neely, A.; and Marr, B. Managing the measures: Measuring eBusiness performance. White paper, Accenture Consulting/Cranfield University, 2001.
2. Agarwal, R., and Venkatesh, V. Assessing a firm's Web presence: A heuristic evaluation procedure for the measurement of usability. Information Systems Research, 13, 2 (June 2002), 168-186.
3. Agresti, A. Modelling patterns of agreement and disagreement. Statistical Methods in Medical Research, 1 (1992), 201-218.
4. Amit, R., and Schoemaker, P.J.H. Strategic assets and organizational rent. Strategic Management Journal, 14, 1 (1993), 33-46.
5. Armstrong, J.S., and Overton, T.S. Estimating nonresponse bias in mail surveys. Journal of Marketing Research, 14, 3 (1977), 396-402.
6. Barclay, D.; Higgins, C.; and Thompson, R. The partial least squares (PLS) approach to causal modeling: Personal computer adoption and use as an illustration. Technology Studies, 2, 2 (1995), 285-309.
7. Barney, J. Firm resources and sustained competitive advantage. Journal of Management, 17, 1 (1991), 99-120.
8. Barua, A.; Kriebel, C.H.; and Mukhopadhyay, T. Information technology and business value: An analytic and empirical investigation. Information Systems Research, 6, 1 (1995), 3-23.
9. Bharadwaj, A.S.; Bharadwaj, S.G.; and Konsynski, B.R. Information technology effects on firm performance as measured by Tobin's Q. Management Science, 45, 7 (1999), 1008-1024.
10. Campbell, D.T., and Fiske, D.W. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56 (1959), 81-105.
11. Capron, L., and Hulland, J. Redeployment of brands, sales forces, and general marketing management expertise following horizontal acquisitions: A resource-based view. Journal of Marketing, 63 (April 1999), 41-54.
12. Chin, W.; Gopal, A.; and Salisbury, W.D. Advancing the theory of adaptive structuration: The development of a scale to measure faithfulness of appropriation. Information Systems Research, 8, 4 (1997), 342-367.
13. Cook, C.; Heath, F.; and Thompson, R. A meta-analysis of response rates in Web- or Internet-based surveys. Educational and Psychological Measurement, 60, 6 (2000), 821-836.
14. Cook, T.D., and Campbell, D.T. Quasi-Experimentation: Design and Analysis Issues for Field Settings. Chicago: Rand McNally, 1979.
15. Couper, M. Web surveys: A review of issues and approaches. Public Opinion Quarterly, 64 (2000), 464-494.
16. Dehning, B.; Richardson, V.J.; Urbaczewski, A.; and Wells, J.D. Reexamining the value relevance of e-commerce initiatives. Journal of Management Information Systems, 21, 1 (summer 2004), 55-82.
17. Dewan, R.; Jing, B.; and Seidmann, A. Adoption of Internet-based product customization and pricing strategies. Journal of Management Information Systems, 17, 2 (2000), 9-28.
18. Dillman, D.A., and Bowker, D.K. The Web questionnaire challenge to survey methodologists. In U.-D. Reips and M. Bosnjak (eds.), Dimensions of Internet Science. Lengerich: Pabst Science, 2001, pp. 159-178.
19. Dommeyer, C., and Moriarty, E. Comparing two forms of an e-mail survey: Embedded vs. attached. International Journal of Market Research, 42, 1 (2000), 39-50.
20. Glaser, B., and Strauss, A. The Discovery of Grounded Theory. Chicago: Aldine, 1967.
21. Hair, J.F.; Anderson, R.E.; Tatham, R.L.; and Black, W.C. Multivariate Data Analysis. Upper Saddle River, NJ: Prentice Hall, 1998.
22. Hoffman, D., and Novak, T. Marketing in hypermedia computer-mediated environments: Conceptual foundations. Journal of Marketing, 60 (1996), 50-68.
23. Jaworski, B.J., and Kohli, A.K. Market orientation: Antecedents and consequences. Journal of Marketing, 57 (July 1993), 53-70.
24. Kauffman, R.J., and Walden, E.A. Economics and electronic commerce: Survey and directions for research. International Journal of Electronic Commerce, 5, 4 (summer 2001), 5-116.
25. Kay, J. The international transferability of the firm's advantage. Business Strategy Review, 4 (1993), 17-37.
26. Kaye, B.K., and Johnson, T.J. Research methodology: Taming the cyber frontier? Techniques for improving online surveys. Social Science Computer Review, 17, 3 (1999), 323-337.
27. Keeney, R.L. The value of Internet commerce to the customer. Management Science, 45, 4 (1999), 533-542.
28. Kim, J.; Lee, J.; Han, K.; and Lee, M. Businesses as buildings: Metrics for the architectural quality of Internet businesses. Information Systems Research, 13, 2 (June 2002), 205-223.
29. Kleist, V.F. An approach to evaluating e-business information systems projects. Information Systems Frontiers, 5, 3 (2003), 249-263.
30. Ko, D.; Kirsch, L.; and King, W. Antecedents of knowledge transfer from consultants to clients in enterprise system implementations. MIS Quarterly, 29, 1 (2005), 59-85.
31. Kohli, R.; Devaraj, S.; and Mahmood, M.A. Understanding determinants of online consumer satisfaction: A decision process perspective. Journal of Management Information Systems, 21, 1 (summer 2004), 115-135.
32. Koufaris, M. Applying the technology acceptance model and flow theory to online consumer behavior. Information Systems Research, 13, 3 (September 2002), 239-254.
33. Kumar, N.; Stern, L.W.; and Anderson, J.C. Conducting interorganizational research using key informants. Academy of Management Journal, 36 (1993), 1633-1651.
34. Lay, P. Beware of the cost benefit model for IS project evaluation. Journal of Systems Management, 36, 6 (1985), 30-35.
35. Lederer, A.L.; Mirchandani, D.A.; and Sims, K. The link between information strategy and electronic commerce. Journal of Organizational Computing and Electronic Commerce, 7, 1 (1997), 17-34.
36. Lederer, A.L.; Mirchandani, D.A.; and Sims, K. The search for strategic advantage from the World Wide Web. International Journal of Electronic Commerce, 5, 4 (summer 2001), 117-133.
37. Mata, F.J.; Fuerst, W.L.; and Barney, J.B. Information technology and sustained competitive advantage: A resource-based analysis. MIS Quarterly, 19, 4 (December 1995), 487-505.
38. McGrath, R.G.; MacMillan, I.C.; and Venkataraman, S. Defining and developing competence: A strategic process paradigm. Strategic Management Journal, 16 (1995), 251-275.
39. Miller, D., and Shamsie, J. The resource-based view of the firm in two environments: The Hollywood film studios from 1936 to 1965. Academy of Management Journal, 39, 3 (1996), 519-543.
40. Orli, R., and Tom, J. If it's worth more than it costs, buy it! Journal of Information Systems Management, 4, 3 (1987), 85-89.
41. Palmer, J.W. Web site usability, design, and performance metrics. Information Systems Research, 13, 2 (June 2002), 151-167.
42. Parkhe, A. "Messy" research, methodological predispositions, and theory development in international joint ventures. Academy of Management Review, 18, 2 (1993), 227-268.
43. Powell, T.C., and Dent-Micallef, A. Information technology as competitive advantage: The role of human, business, and technology resources. Strategic Management Journal, 18, 5 (1997), 375-405.
44. Prahalad, C.K., and Hamel, G. The core competence of the corporation. Harvard Business Review, 68, 3 (1990), 79-91.
45. Quinn, J.B., and Baily, M.N. Information technology: Increasing productivity in services. Academy of Management Executive, 8, 3 (1994), 28-51.
46. Schaefer, D.R., and Dillman, D.A. Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62, 3 (1998), 378-393.
47. Schuldt, B., and Totten, J. Electronic mail vs. mail survey response rates. Marketing Research, 6 (winter 1994), 36-39.
48. Sheehan, K.B., and McMillan, S.J. Response variation in e-mail surveys: An exploration. Journal of Advertising Research, 39, 4 (1999), 45-54.
49. Simon, H.A. The New Science of Management Decision. Englewood Cliffs, NJ: Prentice Hall, 1977.
50. Steinfield, C.; Bouwman, H.; and Adelaar, T. The dynamics of click-and-mortar electronic commerce: Opportunities and management strategies. International Journal of Electronic Commerce, 7, 1 (fall 2002), 93-119.
51. Straub, D. Validating instruments in MIS research. MIS Quarterly, 13, 2 (1989), 147-168.
52. Straub, D.; Rai, A.; and Klein, R. Measuring firm performance at the network level: A nomology of the business impact of digital supply networks. Journal of Management Information Systems, 21, 1 (summer 2004), 83-114.
53. Torkzadeh, G., and Dhillon, G. Measuring factors that influence the success of Internet commerce. Information Systems Research, 13, 2 (June 2002), 187-204.
54. Towler, A., and Dipboye, R. Development of a learning style orientation measure. Organizational Research Methods, 6 (2003), 216-235.
55. Vehovar, V.; Manfreda, K.L.; and Batagelj, Z. Sensitivity of electronic commerce to the survey instrument. International Journal of Electronic Commerce, 6, 1 (fall 2001), 31-51.
56. Wade, M., and Hulland, J. Review: The resource-based view and information systems research: Review, extension, and suggestions for future research. MIS Quarterly, 28, 1 (March 2004), 107-142.
57. Wheeler, B.C. NEBIC: A dynamic capabilities theory for assessing Net-enablement. Information Systems Research, 13, 2 (June 2002), 125-146.
58. Zahra, S., and George, G. Wheeler's Net-enabled business innovation cycle and a strategic entrepreneurship perspective on the evolution of dynamic capabilities. Information Systems Research, 13, 2 (June 2002), 147-150.
59. Zhu, K., and Kraemer, K.L. E-commerce metrics for Net-enhanced organizations: Assessing the value of e-commerce to firm performance in the manufacturing sector. Information Systems Research, 13, 3 (September 2002), 275-295.
60. Zhu, K.; Kraemer, K.L.; Xu, S.; and Dedrick, J. Information technology payoff in e-business environments: An international perspective on value creation of e-business in the financial services industry. Journal of Management Information Systems, 21, 1 (summer 2004), 17-54.
61. Zhuang, Y., and Lederer, A.L. An instrument for measuring the business benefits of e-commerce retailing. International Journal of Electronic Commerce, 7, 3 (spring 2003), 65-99.
62. Zmud, R., and Boynton, A. Survey measures and instruments in MIS: Inventory and appraisals. Harvard Business School Research Colloquium (1991), 149-185.

MICHAEL R. WADE (mwade@schulich.yorku.ca) is an assistant professor of operations management and information systems at the Schulich School of Business, York University, Toronto. His research has appeared in Journal of Management Information Systems, MIS Quarterly, Internet Research, and Information and Management. He is coauthor of one MIS and two e-commerce textbooks. His current research focuses on the strategic use of information systems for sustainable competitive advantage.

SAGGI NEVO (snevo@schulich.yorku.ca) is a Ph.D. candidate in the operations management and information systems area of the Schulich School of Business at York University, Toronto. He is a certified systems engineer (MCSE) and holds a bachelor's degree in economics from the University of Haifa and a master's degree in economics from Northwestern University. Prior to starting his Ph.D., Mr. Nevo taught management of IS and advanced business programming at the University of British Columbia. His work experience includes software development and IT project management. He also spent several years in high-tech start-ups, working as a software development manager and VP of engineering.

Appendix: Exact Wording of Items

All items included an "n/a" response option.

21. Overall on-line performance (anchors: "not good" to "very good")
- How would you rate the overall performance of your on-line operations?
- How do you think your customers would rate your on-line operations?
- How would you evaluate the return on investment for your on-line operations?

22. On-line performance compared to objectives (anchors: "much worse" to "much better")
- Assess the performance of your on-line operations compared to expectations on each of the following dimensions: meeting revenue objectives? meeting budget objectives? meeting user satisfaction objectives? meeting reliability objectives? meeting service quality objectives? meeting major deadlines? meeting overall objectives?

23. On-line performance compared to main competitors (anchors: "much worse" to "much better")
- Compared with your main competitors, how would you rate the performance of your on-line operations?
- How do you think your customers would rate your on-line operations compared to those of your main competitors?

24. On-line performance over time (anchors: "not well" to "very well")
- To what extent have you been able to sustain the performance of your on-line operations over the short term (< 1 year)?
- To what extent have you been able to sustain the performance of your on-line operations over the longer term (2-5 years)?