ELSEVIER Decision Support Systems 19 (1997) 75-92

Microcomputer software evaluation: An econometric model


Evan E. Anderson a,*, Yu-Min Chen b
a Graduate Business Institute, George Mason University, Enterprise Hall, Mail Stop: 5F5, Fairfax, VA 22030-4444, USA
b J.C. Penney Company, Inc., Plano, TX, USA

Received March 1993; revised January 1995; accepted July 1995

* Corresponding author.

Abstract

Microcomputer software selection is made difficult by the multiplicity of products, variation in product performance, and
the uncertainties of user needs. This paper presents a methodology for the empirical evaluation of competing software
packages. The process proposed identifies the most relevant performance attribute set and, through a simultaneous system of
equations, the relative importance of each attribute in explaining the satisfaction of users. The methodology is illustrated
using sample data derived from user evaluations of five different software types: word processing, spreadsheet, data base
management systems, communications, and graphics. The applicability of the methodology and the implications of the
findings are discussed.

Keywords: Software evaluation and selection; User evaluations; Econometric modeling

1. Introduction

Prior to the commercial development of the microcomputer, many information needs were unmet or only partially satisfied through mainframe systems. Organizations soon recognized the advantages of microcomputers and the opportunities that they represented for the improvement of worker productivity [25,59]. In particular, they could be acquired and operated at a low per unit cost, required little space or support, were flexible in their applications, easy to maintain, and allowed their users a high degree of autonomy and control over their computing. Their greatest productivity potential was at the lower managerial-clerical levels of organizations, where there were substantial needs to store, process, and report text and data [41]. However, these users had neither the skills nor the time to write computer code in procedural languages. Hence, the rate of acceptance for microcomputers was closely tied to the commercial development of end user software [40].

End user demands for noncomplex, easy ways to instruct the microcomputer resulted in the development of several different types of software: word processing, database management, spreadsheet, and others. Hundreds of competing software products emerged with different performance attributes and capabilities. Unfortunately, the growth in the number and variety of products, with different performance attributes, created an uncertain evaluation and selection environment for potential buyers [62]. These uncertainties were further complicated by the lack of understanding that users frequently had about their needs, and because needs changed with user experience and technological advancements [9].

0167-9236/97/$17.00 © 1997 Elsevier Science B.V. All rights reserved.


PII S0167-9236(96)00042-5

Similar difficulties of evaluation and selection exist in computer hardware, where numerous approaches to attribute evaluation and choice of technology have been proposed (e.g., [2,49,50]). These methods, however, like those that exist for software evaluation, tend to consist of very general criteria, such as good quality and reliability, as well as specific approaches to testing [23,44]. They are based on the tests of individual buyers evaluating competing products one by one, and their insights into the implications of these tests.

2. Software evaluation: End user computing

The diffusion of microcomputers and applications software has converted many information users to end users of computers. These end users develop, maintain and use their own information systems. Despite their greater involvement in the design, development and use of these systems, end user computing (EUC) occurs with varying degrees of success [1,4]. Some of the same issues arising in IS evaluation, related to the individual, their work, the organization and the nature of applications, arise in EUC as well [3,11,15,30]. Additionally, end users have a vested interest in the configuration of their work station, including the quality and performance of software.

This paper presents a methodology derived from multivariate statistics and econometrics for an empirically based assessment of software. It is not intended to be an empirical test of a theoretical model of software performance or usefulness. It accepts the multiplicity of user environments, software products, computer technologies and applications, and does not impose a value structure on software attributes. It focuses on the process of empirical assessment, and seeks to identify and organize many of the measurement, modeling and estimation issues involved.

More specifically, the authors develop a step by step approach for considering the issues involved in performance assessments, data quality and requirements, model selection and estimation and interpretation. It illustrates the methodology for five different software types: word processing, spreadsheet, database management systems (DBMS), communications and graphics. It employs data from a sample of users. The authors will present the key issues, problems and choices that are involved with each step of the methodology, and illustrate the steps through the application.

Wesselius and Ververs argue that software quality should be defined in "terms of a set of product characteristics necessary for the product to be suited for specified use" [61]. This definition conforms with DOD and ISO standards [18,32]. Eriksson and Törn have suggested that the evaluation of software packages should focus on three questions ([22], p. 157):
• How well (easily, reliably, efficiently) can I use it as is?
• How easy is it to maintain (understand, modify, and retest)?
• Can I still use it if I change my environment?

In this paper, we define user satisfaction (utility) as a function of performance attributes, an approach that is well accepted in economic models and information systems research [33,35,46]. Considerable attention has been devoted to the formulation of user satisfaction [7,16,36,47,51], construct creation and testing [33,39,45,48,55], and to its measurement [20,58]. Theoretically, given a budget for software, potential users would like to be able to assess the probable contributions to satisfaction of each product and choose the one that would maximize their benefits. Similarly, vendors would like to use the same information to better understand the needs and experiences of users, so that they can improve the design and/or support provided for their software.

In the application of the methodology presented here, user satisfaction is modeled as a function of the following software attributes: basic functions, advanced functions, training time, documentation, ease of use and vendor support. Each of these is discussed below.

The underlying capabilities of software packages are contained in their functions [26]. Functions define the domain of application and are at the core of product differentiation. Datapro Research (the data source for this study), for example, defines the basic functions of word processing software as: user interface, text entry, editing functions, formatting functions and print functions [60]. Additional functions are included in advanced functions, such as keystroke storage and file conversions.

Eriksson and Törn note that software characteristics are not binary and perform in varying degrees [22]. They suggest that characteristics should be considered primary and secondary depending on the experiences and needs of the user [22]. A similar continuum is proposed by Rainer and Harrison, who created an EUC activity scale that is based in part on the software functions used [45].

One would expect that less skilled, infrequent users, with simple applications, would be content with basic functions [1,16]. As users gain knowledge and skill with software packages, they extend the sophistication and range of applications and rely increasingly on advanced functions [30].

Prior to the emergence of EUC, training was one of the few variables in empirical studies that consistently contributed to user satisfaction [25,43]. These earlier findings have been affirmed in EUC environments [4,14]. The effectiveness of training is influenced by the format and schedule of training, the medium employed, the ability and preparation of users, the organizational environment and management support for training [10,28,54]. Training has multiple effects on users [17]. It can be used to motivate users, remove uncertainties and allay fears, gain acceptance for technology, and to demonstrate the opportunities for job growth and achievement through its application. Additionally, it contributes directly to the advancement of knowledge and understanding. Training, to be most effective, must be managed by organizations to insure the timeliness of content with user needs and development.

The contribution of documentation to user satisfaction is too frequently taken for granted. It is written to motivate, explain, clarify and teach users about software capabilities [19,21]. The more effective it is as a tool of communications, the more quickly users will gain skill in the software and gain from its applications. Torkzadeh has stated that "...this effectiveness [of documentation] is more critical in an end-user computing environment where users become more dependent on documentation and less dependent on interaction with analysts and programmers" [57].

The case for systems usability is made succinctly by Gould and Lewis: "Any system designed for people to use should be easy to learn (and remember) ... and be easy and pleasant to use" ([27], p. 300). Actual, as well as perceived, ease of use plays an important role in the adoption and utilization of commercial software packages [53,56]. User satisfaction derives from utilization and accomplishment. If the time and hassle of developing usable applications exceeds the user's threshold, dissatisfaction results. EUC is intended to be productivity enhancing. Accepting the diverse skills/knowledge/experience of users, the faster users accomplish their application, the greater will be their satisfaction [34].

The final variable included in the data set of this paper is vendor support. Support frequently comes from many different sources, including organizational information centers and support groups [4,24,42]. Nevertheless, vendors have a strong economic incentive to facilitate skill development and problem solving for users. They establish newsletters, hot lines, informal user groups, and sponsor activities for users. Vendor support, broadly defined, includes product innovation and timely upgrades of the software product. The more vendors assist users to realize their potential with software products, the more satisfied they are with them.

2.1. Methodology

The methodology presented in this paper has two major objectives. First, it seeks to identify a set of performance attributes that provides the best conceptualization of software selection by users. It is rare that this would require data on all possible attributes. The question is, "Can the essential information about the contribution of performance attributes to satisfaction be extracted from a subset of attributes?" Second, using that set of attributes, the authors estimate weights for the attributes that would establish their relative importance in the evaluation process. That is, "What is the relative magnitude of the marginal contribution to satisfaction of each performance attribute?"

The accomplishment of these objectives is made difficult by the number of features and complexity of software, the diversity of user skills, experience and expectations, the variety of applications, and by the fact that software satisfaction may be a "joint product" [8]. Hence, a sound methodology must be able to partition the data, that is, reduce its redundancy, and measure or control the interaction between software types [38].

This paper has used principal components analysis to identify the performance attribute set, and a simultaneous system of equations, with embedded recursive equations for each software type, to determine the relative importance of attributes to users.

3. Data: Sources and organization

Data for software evaluation can basically be obtained from three sources: experimental studies, user surveys and expert opinion. The methods presented in this paper generally do not pertain to assessments by experts, since there rarely is enough data to support statistical modeling [5,49]. Our primary focus in this paper is with the modeling of data derived from user surveys. However, it should be noted that this methodology, provided there are sufficient observations, is equally applicable to experimentally generated data. In particular, experiments would generally be created around tasks. Software attributes would be replaced by task characteristics and user satisfaction would be replaced by some measure of efficiency, time to completion, correctness, or other measures of performance.

It is important to recognize that user surveys may be reported in an aggregate form to protect the identity of respondents or to simplify reporting. Indeed, the data used in our application are based on the average attribute evaluations of the users of each software package. Since the number of respondents (sample size) evaluating each package may vary, the quality of mean attribute scores may vary. As is shown below, this has important implications for parameter estimation.

3.1. Application

The data employed in this study were collected, compiled, and reported by Datapro [60]. Several hundred users of various software products were asked to evaluate the product(s) they own. Users evaluated each attribute according to their satisfaction using scaled responses ranging between 1 (poor) and 10 (excellent) for the following attributes: basic functions, documentation, advanced functions, vendor support, ease of use, and training time, plus overall satisfaction. Appendix A presents the format of the questionnaire. The number of products evaluated by users within each software type ranged from a minimum of thirteen, for spreadsheet and communications software, to twenty-three for word processing. The sample sizes ranged from nearly three hundred users for graphics products to over seven hundred for word processing software. Table 1 presents a summary of the means and standard deviations for the various attributes studied. The authors have treated the overall satisfaction variable as endogenous, with the performance attributes as explanatory variables. The data are cross-sectional and aggregated from user responses, i.e., consist of the average evaluations for each product within the various software types, rather than individual responses. It can be seen from Table 1 that the highest mean performance rating (8.00) was achieved by DBMS for basic functions, while the lowest (6.67) was received by communications software for its vendor support. The implications of data aggregation for causal models will be discussed later.

4. Data: Redundancy and partitioning

In the context of software evaluation, one may expect that attribute assessments will be correlated, whether the data are aggregated or not. That is, the data exhibit a mutual interdependence between some of the performance attributes [8]. In some cases, the correlation pattern is the same for all software types, and in others it is quite different. For example, the data of this study consistently exhibit a high correlation between 'ease of use' and 'training'. It is reasonable to expect that training will enhance a user's knowledge and appreciation of a product, making it easier to use, and a product that is easier to use will increase the benefits of training.

The presence of high correlations between variables suggests that they may derive from the same source(s). Highly correlated variables quite likely do not represent separate factors, but are imperfect representations of one another or something else. Furthermore, regression coefficients estimated by least squares methods in the presence of highly correlated variables may be difficult to interpret.
Table 1
Summary of user evaluations: mean ratings with standard deviations in parentheses. The first six attributes are the explanatory variables; overall satisfaction is the endogenous variable.

Software type a    Products  Respondents  Basic        Documen-     Advanced     Vendor       Ease of      Training     Overall
                                          functions    tation       functions    support      use          time         satisfaction
Word Processing    23        740          7.72 (0.70)  7.20 (1.07)  7.23 (1.06)  7.05 (1.25)  7.52 (0.95)  7.07 (1.59)  7.65 (0.80)
Spreadsheet        13        687          7.44 (0.93)  7.05 (1.21)  6.99 (1.39)  7.41 (1.28)  6.98 (1.17)  7.81 (1.45)  7.53 (1.09)
DBMS               18        505          8.00 (1.00)  7.44 (1.07)  7.67 (1.22)  7.57 (1.24)  7.57 (1.33)  7.26 (1.40)  7.84 (1.03)
Communications     13        437          7.58 (1.26)  6.94 (1.36)  7.16 (1.51)  6.67 (1.49)  7.34 (1.31)  7.52 (1.22)  7.29 (1.33)
Graphics           16        279          7.97 (1.25)  7.14 (1.50)  7.32 (1.74)  6.97 (2.17)  7.29 (1.47)  7.31 (1.55)  7.42 (1.24)

a Source: Microcomputer Software, Datapro Research Corporation, New Jersey, March 1986.
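To make the data organization concrete, the following is a minimal sketch of how product-level aggregates of the kind reported in Table 1, and the attribute correlations discussed in Section 4, could be computed. The file name and column names are hypothetical, not Datapro's.

```python
import pandas as pd

# Hypothetical respondent-level file: one row per user, with the product
# evaluated and the seven 1-10 ratings. All names here are assumptions.
raw = pd.read_csv("ratings.csv")  # columns: product, basic, doc, adv,
                                  # vendor, ease, training, overall

# Aggregate to the product level, as the Datapro data are reported: the
# mean rating per product, and the respondent count N_j behind each mean.
by_product = raw.groupby("product")
means = by_product[["basic", "doc", "adv", "vendor",
                    "ease", "training", "overall"]].mean()
n_j = by_product.size()  # needed later for the sqrt(N_j) transformation

# Redundancy check: pairwise correlations between the attribute means,
# e.g. the consistently high ease-of-use/training correlation noted above.
print(means.drop(columns="overall").corr().round(2))
```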

Assuming that the other assumptions of ordinary least squares are satisfied, the parameter estimates of single equation models will be unbiased in the presence of multicollinearity, but standard errors for the regression parameter(s) will be inflated [37]. Large standard errors interfere with the researcher's ability to determine the statistical significance of variables and, hence, to establish their importance in explaining the variation observed in the endogenous variable.

Before forming explicit estimating equations, analysts should attempt to identify the extent of redundancy present in the set of all performance attributes, that is, the explanatory variables of Table 1 [12]. Given highly correlated performance attributes, "Is there a reduced set of attributes that would conceptualize user satisfaction with microcomputer software?" This question is addressed here by factoring the sample correlation matrix using principal component analysis. We searched for a component structure, a smaller set of linear combinations of the average evaluations, that was capable of preserving most of the information contained in the original data.

There are numerous acceptable criteria for determining the number of components to be extracted. In this study, the authors have employed the percentage of variance criterion. The factoring procedure was stopped when the extracted components accounted for at least 95 percent of the variance in the data (see Table 2). The eigenvalues of each component are also shown in Table 2.

Using the percentage of variance criterion, the appropriate number of components is three for all software types, except word processing, where four components are required to achieve a cumulative proportion of variance equal to or greater than 0.95. So as to gain some interpretive insight into the mapping of performance attributes to components, a varimax rotation of the factor-loading matrix for each software type was performed. The varimax rotation is orthogonal and involves a redistribution of the variance accounted for by the group of components identified under principal components, without any loss of total variance explained. A study of the factor loadings for spreadsheet, DBMS, communications, and graphics software reveals a consistent clustering of attributes across software types. The association of attributes with components, and the authors' interpretation of their sources and suggested names or labels, are: the function/feature component (basic and advanced functions), the service/support component (documentation and vendor support), and the implementation/friendliness component (ease of use and training).

It is interesting to note that a near identical component structure emerged for word processing software, the difference being that the function/feature component was split into two components, with each having high loadings for basic and/or advanced functions. Given that no additional insights were gained into the structure of software attributes by extracting a fourth component, that the cumulative proportion of variance accounted for by three components was 0.90, and that the authors' subsequent analyses always found user satisfaction models with four explanatory variables (attributes) inferior to those with three, it was concluded that there were only three unique components present in word processing as well.
Table 2
Principal components analysis: eigenvalues and cumulative proportion of variance for each software type

               Word Processing    Spreadsheets       DBMS               Communications     Graphics
               Eigen-  Cum.       Eigen-  Cum.       Eigen-  Cum.       Eigen-  Cum.       Eigen-  Cum.
               value   variance   value   variance   value   variance   value   variance   value   variance
Component 1    3.27    0.54       4.93    0.82       4.25    0.71       4.96    0.83       4.16    0.69
Component 2    1.39    0.78       0.73    0.94       1.37    0.94       0.62    0.93       1.30    0.91
Component 3    0.74    0.90       0.15    0.97 a     0.17    0.97 a     0.27    0.98 a     0.28    0.96 a
Component 4    0.29    0.95 a     0.11    0.99       0.14    0.99       0.08    0.99       0.22    0.99

a The minimum number of components required to achieve a cumulative proportion of variance of at least 0.95.
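As an illustration of this step, here is a minimal sketch of principal components extraction with the percentage-of-variance stopping rule and a varimax rotation. The ratings matrix is simulated, and the varimax routine is one common formulation rather than the authors' own code.

```python
import numpy as np

# Stand-in for a products x attributes matrix of mean ratings; a real
# application would substitute the aggregated survey data.
rng = np.random.default_rng(0)
X = rng.uniform(6.0, 9.0, size=(23, 6))

# Factor the sample correlation matrix, as in the paper.
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]            # eigh returns ascending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Percentage-of-variance criterion: keep the fewest components whose
# cumulative share of total variance reaches 0.95 (cf. Table 2).
cum_share = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(cum_share, 0.95)) + 1
loadings = eigvecs[:, :k] * np.sqrt(eigvals[:k])

def varimax(L, n_iter=100, tol=1e-8):
    """Orthogonal varimax rotation: redistributes the explained variance
    across components without changing the total explained."""
    p, m = L.shape
    rot, obj = np.eye(m), 0.0
    for _ in range(n_iter):
        Lr = L @ rot
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(axis=0)) / p))
        rot = u @ vt
        if s.sum() < obj * (1.0 + tol):
            break
        obj = s.sum()
    return L @ rot

print(k, np.round(varimax(loadings), 2))  # inspect clusters of attributes
```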

5. Model specification and diagnostics

While principal component analysis suggested three dimensions of end user satisfaction, it does not allow the authors to determine the most important performance attribute of each component, nor the relative importance of components. Depending on user environments and their applications of software, the next methodological step involves forming a causal relationship between user satisfaction and software attributes. If the unit of observation is the individual user, we may start with the following causal model:

  S_{ij} = a_1 B_{ij} + a_2 A_{ij} + a_3 D_{ij} + a_4 V_{ij} + a_5 E_{ij} + a_6 T_{ij} + \epsilon_{ij},   (1)

where S_{ij} = overall satisfaction, B_{ij} = basic functions, A_{ij} = advanced functions, D_{ij} = documentation, V_{ij} = vendor support, E_{ij} = ease of use, T_{ij} = training time and \epsilon_{ij} = a random error term associated with the ith respondent and the jth product within a particular software type. a_1, a_2, ..., a_6 are parameters to be estimated. It is assumed that E(\epsilon_{ij}) = 0, and that

  E(\epsilon_{ij} \epsilon_{kj}) = \sigma^2 for i = k, and E(\epsilon_{ij} \epsilon_{kj}) = 0 for i \neq k.

Readers should note that this particular set of explanatory variables is used in Eq. (1) because they were the attributes used by Datapro Research to measure performance. They are used only to illustrate the methodology.

A statistical assessment of software based on a model such as Eq. (1) must consider three critical questions:
• Are the assumptions regarding the error term justified?
• Are the explanatory variables independent?
• Are there unspecified attributes of software or its usage that are omitted from the model?
The authors will next consider each of these questions, prescribe diagnostic tests and illustrate their application using the Datapro Research Corporation data.

Before exploring these questions, consider the fact that these data are the means of aggregate scores given by users for each of the attributes of the ith product, i = 1, 2, ..., M, and that the mean attribute rating for various products is based on different sample sizes. For example, N_i was nearly two hundred for PerfectWriter, while other word processing products had as few as three or four user-respondents. Given these conditions, Eq. (1) should be defined as a model of "average variables". That is,

  \bar S_j = a_1 \bar B_j + a_2 \bar A_j + a_3 \bar D_j + a_4 \bar V_j + a_5 \bar E_j + a_6 \bar T_j + \bar\epsilon_j,   (2)

where sample averages are noted by a bar.

Now, consider the first question from above. As is shown in Appendix B, the variance of the error term varies inversely with the sample size for each product. Hence, Eq. (2) has a heteroscedastic error term, and ordinary least-squares estimates of parameters, while unbiased and consistent, will be inefficient. Additionally, the estimated variances of the parameters will be biased estimates of their true values. If the parameters of Eq. (2) are estimated by ordinary least-squares (OLS) in the presence of heteroscedasticity, equal weight is given to observations with large error variances and small error variances. This has serious implications, if left untreated, because it may cause one to conclude that some attributes are important to performance when they are not, and vice versa.

There are essentially two approaches to solving this problem [31]. The first is to use weighted least-squares. The second is to transform the data such that, under the transformation, the assumptions of OLS are satisfied [13]. The authors have chosen the latter, as discussed in Appendix B.
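A minimal sketch of this transformation follows, assuming 'X_bar' holds the product-level attribute means of Eq. (2), 's_bar' the mean satisfaction scores, and 'n_j' the respondent counts; the names are illustrative.

```python
import numpy as np

def transformed_ols(X_bar, s_bar, n_j):
    """OLS after premultiplying each product's observation by sqrt(N_j),
    so the scaled error has constant variance (see Appendix B). This is
    equivalent to weighted least squares with weights N_j."""
    w = np.sqrt(np.asarray(n_j, dtype=float))
    coef, *_ = np.linalg.lstsq(w[:, None] * X_bar, w * s_bar, rcond=None)
    return coef
```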
We now consider the second question and the independence of explanatory variables. To form a model of user software satisfaction, we must know the number and identity of explanatory variables (performance attributes) to include in the relationship for each product. Principal component analysis has answered the first question and given directions for finding answers to the second.

Specifically, three components were extracted for each software type and the same two performance attributes were found to be associated with each component, regardless of software type. Thus, user software satisfaction can be "best" modeled by three performance attributes, one from each component. Therefore, it is not necessary to consider all possible linear equations consisting of three explanatory variables from a set of six, that is, twenty. Using the attribute groupings suggested by principal component analysis, the number of equations that must be estimated and compared for each software type is eight (2 × 2 × 2). The criterion used to determine which equation best modeled user satisfaction was the minimum mean squared error.

It is important to note that the use of principal component analysis does not imply that the resulting three attribute equations will experience no multicollinearity in the original variables. Multicollinearity may still arise in estimating equations because two or more of the three explanatory variables are highly correlated. Though parameter estimates would be unbiased, their variances would be inflated and, as a result, highly unreliable. This could lead evaluators to false conclusions about which attributes best explain user satisfaction. The authors will later propose a methodology involving embedded recursive equations for those software types with substantial correlation between two or more of the three attributes [29,52].

Most statistically based evaluation models are single equation models. They have the advantage of parsimony and simplicity of estimation. Nevertheless, a critical argument in their justification (recall the third question from above) is that the important systematic factors and variables can all be represented in a single equation model. The authors believe that this assumption must be validated as part of any sound methodology for software evaluation.

To illustrate the plausibility that important attributes or factors of software usage are omitted from Eq. (2), consider the possibility that user-respondents may own more than one type of software. Hence, their evaluations of products from one software type may be affected by their experiences with the products of other software types. For example, many users of spreadsheet software will also have compatible graphics or database software. Hence, a priori, their overall satisfaction with any software type cannot be represented by a single equation until it has been determined that their evaluations of software types are unrelated. These possible confounding effects suggest that some attributes of one software type may be functions of the attributes of other software types. To model the user satisfaction of each software type as a single equation would ignore these relationships and incur possible specification errors. Hence, user satisfaction should be modeled as a simultaneous system of causal relationships. However, the extent and nature of cross software relationships are unknown. Therefore we attribute these unspecified causal relationships to the error terms of each software type. If the average evaluations of various performance attributes are related across software types, the error terms will be correlated, and simultaneity will exist. Hence, the following model of user software satisfaction is proposed [31,63]:
Word Processing:
  W\tilde S_j = \alpha_w W\tilde X_{j1} + \beta_w W\tilde X_{j2} + \gamma_w W\tilde X_{j3} + W\tilde\mu_j,  j = 1, 2, ..., 23,   (3)

Spreadsheet:
  S\tilde S_j = \alpha_s S\tilde X_{j1} + \beta_s S\tilde X_{j2} + \gamma_s S\tilde X_{j3} + S\tilde\mu_j,  j = 1, 2, ..., 13,   (4)

DBMS:
  D\tilde S_j = \alpha_d D\tilde X_{j1} + \beta_d D\tilde X_{j2} + \gamma_d D\tilde X_{j3} + D\tilde\mu_j,  j = 1, 2, ..., 18,   (5)


Communications:
  C\tilde S_j = \alpha_c C\tilde X_{j1} + \beta_c C\tilde X_{j2} + \gamma_c C\tilde X_{j3} + C\tilde\mu_j,  j = 1, 2, ..., 13,   (6)

Graphics:
  G\tilde S_j = \alpha_g G\tilde X_{j1} + \beta_g G\tilde X_{j2} + \gamma_g G\tilde X_{j3} + G\tilde\mu_j,  j = 1, 2, ..., 16,   (7)

where "~" denotes data transformed by \sqrt{N_j} and j refers to the jth product. l\tilde S_j is the overall satisfaction of users for software type l \in {W, S, D, C, G}; l\tilde X_{j1}, ..., l\tilde X_{j3} are the three of the six performance attributes (explanatory variables) that best account for the variation of satisfaction with software type l; \alpha_l, \beta_l and \gamma_l are parameters to be estimated and l\tilde\mu_j are the error terms.

Using vector-matrix notation, Eqs. (3)-(7) may be expressed as

  S = Z\delta + \mu.   (8)

See Appendix C for a complete representation of Eq. (8). The variance-covariance matrix of \mu can be seen in Appendix D.

If \Sigma (see Appendix D) is not block diagonal, that is, the error terms of Eqs. (3)-(7) are correlated, we would estimate the parameter \delta by

  \hat\delta = (Z' \Sigma^{-1} Z)^{-1} Z' \Sigma^{-1} S.   (9)

Alternatively, if we are unable to reject the null hypothesis that \sigma_{ij} = 0, i and j \in {W, S, D, C, G}, i \neq j, we estimate each equation by OLS. Indeed, if no simultaneity is found, the estimator \hat\delta from Eq. (9) is identical to the one obtained by applying OLS to each equation one by one.

The test criterion for rejecting the null hypothesis is Anderson's [6]:

  |r_{ij}| \sqrt{N - 2} / \sqrt{1 - r_{ij}^2} > t_{N-2}(\alpha),   (10)

where N is the number of observations, r_{ij} is the sample correlation coefficient, i \neq j, and \alpha is the probability level.

Testing for correlated errors involves several steps. In the first step, Eqs. (3)-(7) were estimated individually using their associated recursive equations. The specific form of the recursive equations is discussed below. Second, the estimated residuals from step one are used to test the null hypothesis of uncorrelated errors. The results of the second step determine whether a third step is required. If correlated errors are found, the estimated variances of parameters will be inflated, and we are more likely to make errors in hypothesis testing and in assessing the relative importance of various software attributes; therefore, we would use Eq. (9) to estimate \delta in the third step. Alternatively, if we fail to reject the null hypothesis, the parameter estimates from single equation estimation are equivalent to those of Eq. (9) and step three is unnecessary.
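The test of Eq. (10) can be sketched as follows for the residual vectors of two single-equation fits. Since the software types have different numbers of products, the simple pairing of residuals here is an assumed simplification of whatever alignment the authors used.

```python
import numpy as np
from scipy import stats

def errors_correlated(resid_i, resid_j, alpha=0.01):
    """Two-sided t test of H0: sigma_ij = 0 via the sample correlation of
    the residuals, following Eq. (10)."""
    n = min(len(resid_i), len(resid_j))      # product counts differ by type
    r = np.corrcoef(resid_i[:n], resid_j[:n])[0, 1]
    t_stat = abs(r) * np.sqrt(n - 2) / np.sqrt(1.0 - r ** 2)
    return t_stat > stats.t.ppf(1.0 - alpha / 2.0, df=n - 2)
```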
6. Model refinements and estimation

Under step one from above, we still must contend with what remains of the problem of multicollinearity. Principal component analysis has provided a partial solution by reducing the attribute set from six to three variables. Nevertheless, some of the possible combinations of three variables, one from each component, may be highly correlated. There may be causal relationships between various combinations of significant and insignificant performance attributes associated with different components. While the components are orthogonal, the original variables (attributes) are not uncorrelated. Hence, for each of the three attribute models represented by Eqs. (3)-(7), the authors searched for all cases where significant zero order correlation coefficients existed between two or more of the performance attributes associated with any three of the different components. Where at least two attributes from different components were correlated, one was regressed against the others to determine if a significant linear relationship existed between them. The result is a process of recursive estimation.

That is, one of the performance attributes, an explanatory variable of the user satisfaction equation, is expressed as an endogenous variable in the recursive equation. If the estimated parameters of the recursive equation were statistically significant, the performance attribute in the user satisfaction model is replaced by the residual of the recursive equation. The recursive equations have the effect of filtering out the confounding effects between explanatory variables. This is important because the resulting variables (attributes or their surrogates) in the user satisfaction equations will be more nearly orthogonal and, therefore, the multicollinearity in each equation will be substantially reduced. Table 3 presents the final forms of the recursive and user satisfaction equations for each software type. Recall that the choice of performance attributes included in the user satisfaction equations was based on the minimum mean squared error (MSE) criterion. For the best set of attributes there were frequently several feasible recursive equations. While the MSE of any user satisfaction equation is constant for all associated recursive equations, the estimated parameters may vary. The final choice of recursive equation to employ with each of the user satisfaction equations was based on the equation that produced the highest level of statistical significance, measured by t statistics, for the estimated parameters of software attributes.

Table 3
User satisfaction models for each software type (B = basic functions, D = documentation, A = advanced functions, E = ease of use)

Word Processing:
  W\tilde S_j = \alpha_w W\tilde B_j + \beta_w W\tilde D_j + \gamma_w W\tilde{RE}_j + W\tilde\mu_j
  W\tilde{RE}_j = W\tilde E_j - (\hat k \cdot W\tilde D_j)
  \hat k is the estimate of k in the equation W\tilde E_j = k W\tilde D_j + W\tilde\epsilon_j

Spreadsheets:
  S\tilde S_j = \alpha_s S\tilde X_{j1} + \beta_s S\tilde X_{j2} + \gamma_s S\tilde{RX}_{j3} + S\tilde\mu_j
  S\tilde{RX}_{j3} = S\tilde X_{j3} - (\hat k \cdot S\tilde X_{j1})
  \hat k is the estimate of k in the equation S\tilde X_{j3} = k S\tilde X_{j1} + S\tilde\epsilon_j

DBMS:
  D\tilde S_j = \alpha_d D\tilde D_j + \beta_d D\tilde A_j + \gamma_d D\tilde{RE}_j + D\tilde\mu_j
  D\tilde{RE}_j = D\tilde E_j - (\hat k \cdot D\tilde D_j)
  \hat k is the estimate of k in the equation D\tilde E_j = k D\tilde D_j + D\tilde\epsilon_j

Communications:
  C\tilde S_j = \alpha_c C\tilde X_{j1} + \beta_c C\tilde X_{j2} + \gamma_c C\tilde{RX}_{j3} + C\tilde\mu_j
  C\tilde{RX}_{j3} = C\tilde X_{j3} - (\hat k \cdot C\tilde X_{j1})
  \hat k is the estimate of k in the equation C\tilde X_{j3} = k C\tilde X_{j1} + C\tilde\epsilon_j

Graphics:
  G\tilde S_j = \alpha_g G\tilde B_j + \beta_g G\tilde D_j + \gamma_g G\tilde{RE}_j + G\tilde\mu_j
  G\tilde{RE}_j = G\tilde E_j - (\hat k \cdot G\tilde B_j)
  \hat k is the estimate of k in the equation G\tilde E_j = k G\tilde B_j + G\tilde\epsilon_j

Table 4
Estimated parameters for each software type a, b (estimated t statistics in parentheses)

Software          Basic          Documentation   Advanced        Vendor    Ease of        Training  R²
                  functions                      functions       support   use            time
Word Processing   0.6095 (6.54)  0.4006 (3.99)   -               -         0.2148 (2.86)  -         0.9993
Spreadsheets      0.5485 (4.76)  0.6118 (4.06)   -               -         0.4531 (3.05)  -         0.9998
DBMS              -              0.4788 (8.56)   0.5639 (10.72)  -         0.3744 (4.09)  -         0.9996
Communications    0.8414 (7.80)  0.7362 (5.39)   -               -         0.3421 (2.68)  -         0.9997
Graphics          0.6501 (7.08)  0.3420 (3.45)   -               -         0.4759 (6.02)  -         0.9997

a The model for each software type is specified in Table 3.
b All parameter estimates are significant for \alpha = 0.01.
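The estimation steps behind Tables 3 and 4 can be sketched as below, under assumed inputs: 'cols' maps attribute names to (already sqrt(N)-transformed) product-level vectors, 'components' groups the attribute names by principal component, and 's' is the satisfaction vector. All names are illustrative.

```python
import itertools
import numpy as np

def fit(X, s):
    """OLS fit returning the coefficients and the mean squared error."""
    coef, *_ = np.linalg.lstsq(X, s, rcond=None)
    resid = s - X @ coef
    return coef, float(resid @ resid) / len(s)

def best_attribute_set(components, cols, s):
    """One attribute per component gives 2 x 2 x 2 = 8 candidate equations;
    keep the combination with minimum mean squared error."""
    return min(itertools.product(*components),
               key=lambda c: fit(np.column_stack([cols[a] for a in c]), s)[1])

def residualize(cols, endog, exog):
    """Recursive step: regress one correlated attribute on another and use
    the residual in its place, filtering out their shared variation."""
    k, *_ = np.linalg.lstsq(cols[exog][:, None], cols[endog], rcond=None)
    return cols[endog] - k[0] * cols[exog]

# e.g. for word processing: ease of use is residualized on documentation,
# then satisfaction is regressed on basic functions, documentation, and
# that residual, as in Table 3.
```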

Employing the equations of Table 3, the residuals of each user satisfaction equation were estimated and used to test for correlated errors between Eqs. (3)-(7). Using the test criterion established by Eq. (10), the null hypothesis was accepted for all software types at \alpha = 0.01. The largest absolute value of any correlation coefficient found (0.014) was for the error terms between DBMS (Eq. (5)) and graphics (Eq. (7)). In the absence of correlated error terms, we may conclude that simultaneity is not present, and accept the single equation estimates of the parameters shown in Table 3. Thus, step three is unnecessary and the estimates of \delta obtained from single equation estimation cannot be improved upon.

The parameter estimates, their standard errors, and R² for each of the user satisfaction equations shown in Table 3 are presented in Table 4. As measured by R², the software performance attributes of each respective equation account for an unusually large proportion of the variation in user satisfaction. Second, all three of the variables in each equation are statistically significant for \alpha = 0.01. Third, there are both similarities and differences in the relative importance of various performance attributes between software types. For example, training time and vendor support are never statistically significant, while ease of use and documentation are always significant. Since these variables are all measured on the same scale, the interpretation of their relative magnitudes, in a given equation, is as though the variables had been standardized. Using that fact, one can further evaluate the contribution of various performance attributes to user satisfaction.

7. Model applicability issues

The authors have proposed a multi-step methodology for the statistical evaluation of software. From our application, it is evident that methodological options are influenced and, in some cases, determined by the characteristics of the data. Though the data used in this paper were taken from secondary sources, it is reasonable to consider the domain of application for the methodology presented. The specific statistical findings of this study, and their implications, are left to the conclusions.

Assuming that there is an adequate sample size and a careful application of the methodology, the authors suggest that readers (software evaluators) consider the following questions in assessing methodological applicability.
• How congruent are the technologies, user needs, and applications represented in the data relative to those of the evaluator?
This question is concerned with the representativeness of the data relative to the evaluator's environment. If one is using primary data, collected by the evaluator, it argues for a carefully constructed sampling design that insures coverage of important variables and factors in roughly the densities observed in the evaluator's environment. Alternatively, if the data come from secondary sources, answers to this question must be used to condition the confidence that one has in statistical assessments.
• How stable are the performance attributes over time?
Classes of technologies tend to have "time windows" over which they change slowly and have reasonably stable relationships with their environments. The duration of these time windows varies between technologies and over time within technologies. Software is no exception. While there is no prescription for assessing fundamental change in software and its applications, users must be prepared to limit their utilization of statistical findings when important change has occurred.
• How broadly/narrowly defined are the evaluator's decision criteria for software?
Evaluators have different needs for breadth/narrowness in the definition and measurement of important variables. Variables frequently can be partitioned into additional dimensions, thereby adding to the costs of data collection and modeling. Evaluators must strive for the proper balance between the relevance of variables, in the context of their evaluations, and the volume and costs of data collection.
• To what extent are tradeoffs between software attributes acceptable in evaluations?
Most statistical models of software evaluation are compensatory; that is, parameter estimates assign weights to attributes such that a given score (value) of user satisfaction may derive from many different profiles of attribute ratings.

Hence, a high rating on one attribute, which is not important in the evaluator's environment, may be used to offset a low rating on another attribute that is important.

Regarding the applicability of the specific findings of this study, the authors believe that they represent a reasonably broad coverage of microcomputer software and their users. The authors believe that these findings are limited, however, by the broad definition given to the attributes. These data are adequate to demonstrate the methodology, but the attributes may not be sufficiently well focused and precise to meet the needs of some evaluators.

8. Conclusions

Buyers of microcomputer software are confronted with an evaluation/choice problem that is analogous to that found in computer hardware; that is, they would like to choose the product(s) that has (have) the highest probability of satisfying their computing needs, given its cost. It is complicated by the multiplicity of products, variation in the performance levels of products, and the need uncertainties of buyers. In this environment, buyers can frequently benefit from the information derived from the experience and observations of users. In this paper, the authors have proposed a methodology for evaluation that identifies the most relevant attribute set and determines the relative weight, or emphasis, that should be given to each attribute based on user evaluations.

From the analyses of this study, several interesting results have been observed. Consider first those conclusions that apply to all software types. First, it appears that user software satisfaction is accounted for by the following three factors: function/feature, service/support, and implementation/friendliness. Second, ease of use and documentation were consistently found to be two of the three most important explanations for software satisfaction. Basic functions was also found to be a very important source of user satisfaction in four of the five software types. Third, neither training time nor vendor support of software were ever important enough as explanatory variables to represent one of the three factors.

Several interesting differences were found in the relative contribution of attributes to user satisfaction between software types. First, the recursive equations did not uniformly involve the residuals of the same variables. For three software types, word processing, DBMS, and graphics, ease of use was the endogenous variable in the recursive equation and documentation or basic functions was the explanatory variable. This result is intuitively appealing, given the contribution of good documentation to ease of use. Second, satisfaction with basic functions is the most important source of user satisfaction with word processing, communications, and graphics software, and the second most important performance attribute of spreadsheet products.

Third, the importance of good documentation is confirmed by these results. For four of the five software types, it ranked as either the first or second most important performance attribute. Despite the relatively poor statistical performance of the vendor support and training time attributes, the authors do not believe that these attributes are unimportant on an absolute basis. They no doubt contribute to the satisfaction of software users, but do not appear to be priority attributes. Finally, as one might expect, the advanced software functions are the most important source of satisfaction for the users of DBMS. DBMS are more likely to be bought by the more sophisticated users of microcomputers, and their needs in data management may require the options and features found in advanced functions.

The authors believe that these results have implications for researchers, users and vendors. We believe that a more functional view of software assessment and evaluation is appropriate. Software satisfaction seems to derive from the functionality of software embedded in its features, ease of use and support for learning and problem solving through documentation. The authors believe that more EUC research should be devoted to studies of functionality. For example, very little research examines the properties and contributions to satisfaction of documentation, while volumes have been written, with very little conclusive evidence, about every possible intricacy of the individual, organization and environment.

It would seem that vendors should manage, through design and innovation, the flow of services from software.

Software packages should be designed so that users can quickly develop skills in their basic features and so that users can move to sophisticated applications with the support of advanced features and good documentation. The needs and expectations of users grow with knowledge and experience. As the range of expectations increases, vendors must manage the pace of innovation. Software innovation that precedes a "deepening" of the market may not be profitable, but if it is too slow, users may become frustrated and switch to other products.

Finally, we do not believe that training and vendor support are unimportant, but rather that they have been decoupled from the particular software package bought and its vendor. That is, there are numerous, alternative forms of training and support within organizations, professional societies, and educational institutions.

Acknowledgements

The authors would like to gratefully acknowledge the helpful comments of anonymous reviewers and the guest editor.

Appendix A. Questionnaire

Circle the word processing program you primarily use on your microcomputer. (Circle one only.)
1. Micropro International Wordstar, 2000, 2000 +
2. Perfect Software PerfectWriter
3. Software Publishing Corporation PFS : Write
4. Sorcim/IUS EasyWriter II
5. Lexisoft Spellbinder
6. Multimate International Corporation Multimate
7. Peachtree Peachtext 5000
8. IBM DisplayWrite 2, 3
9. IBM Words Edition
10. IBM Writing Assistant
11. Leading Edge Products, Incorporated Leading Edge Word Processor
12. Lifetree Software, Incorporated Volkswriter (Deluxe) Scientific
13. Microsoft Corporation Microsoft Word
14. NBI, Incorporated NBI Word Processing
15. Samna Corporation Samna Word, Samna +
16. Mark of the Unicorn The Final Word
17. Metasoft Corporation Benchmark Word Processor
18. Select Information Systems, Incorporated Select
19. XyQuest, Incorporated XyWrite II
20. Apple Computer, Incorporated Macwrite
21. Innovative Software The Smart Word Processor
22. Satellite Software International WordPerfect
23. Office Solutions OfficeWriter
24. Other

How would you rate this word processing program with respect to:
Poor Circle one for each Excellent
Basic functions 1 2 3 4 5 6 7 8 9 10
Documentation 1 2 3 4 5 6 7 8 9 10
Advanced functions 1 2 3 4 5 6 7 8 9 10
Vendor support 1 2 3 4 5 6 7 8 9 10
Ease of use 1 2 3 4 5 6 7 8 9 10
Training time 1 2 3 4 5 6 7 8 9 10
Cost and performance 1 2 3 4 5 6 7 8 9 10
Overall satisfaction 1 2 3 4 5 6 7 8 9 10

Source: User Ratings of Microcomputer Software, Microcomputer Software, Datapro Research Corporation,
March 1986 [60].

Appendix B

The error term of Eq. (2) may be expressed as

  \bar\epsilon_j = (1 / N_j) \sum_{i=1}^{N_j} \epsilon_{ij}.

Its expected value equals zero, but its variance is not constant: VAR(\bar\epsilon_j) = \sigma^2 / N_j. Hence, the variance of \bar\epsilon_j would vary inversely with N_j and heteroscedasticity would be present. Therefore, estimating the parameters of Eq. (2) by ordinary least-squares, without adjustments to the data, would lead to inefficient estimates, i.e., the variances of the estimated parameters are not the minimum variances.

One approach to this problem is to transform the data so that the assumption of a constant error variance is satisfied. If we premultiply Eq. (2) by \sqrt{N_j}, then

  \sqrt{N_j}\,\bar S_j = a_1 \sqrt{N_j}\,\bar B_j + a_2 \sqrt{N_j}\,\bar A_j + a_3 \sqrt{N_j}\,\bar D_j + a_4 \sqrt{N_j}\,\bar V_j + a_5 \sqrt{N_j}\,\bar E_j + a_6 \sqrt{N_j}\,\bar T_j + \sqrt{N_j}\,\bar\epsilon_j,

and VAR(\sqrt{N_j}\,\bar\epsilon_j) = \sigma^2, a constant, so that the transformed model satisfies the assumptions of OLS.

Appendix C

In the vector-matrix form of Eq. (8), S = Z\delta + \mu, S = (W\tilde S', S\tilde S', D\tilde S', C\tilde S', G\tilde S')' stacks the transformed satisfaction vectors of the five software types, Z is block diagonal with lth block (l\tilde X_1, l\tilde X_2, l\tilde X_3), l \in {W, S, D, C, G}, \delta = (\alpha_w, \beta_w, \gamma_w, ..., \alpha_g, \beta_g, \gamma_g)' and \mu = (W\tilde\mu', S\tilde\mu', D\tilde\mu', C\tilde\mu', G\tilde\mu')'.

Appendix D

The variance-covariance matrix of \mu is the 5 × 5 block matrix whose (i, j) block is E(i\tilde\mu\, j\tilde\mu') = \sigma_{ij} I, i, j \in {W, S, D, C, G}:

  \Sigma = E(\mu\mu') =
  [ \sigma_{ww} I   \sigma_{ws} I   \sigma_{wd} I   \sigma_{wc} I   \sigma_{wg} I ]
  [ \sigma_{sw} I   \sigma_{ss} I   \sigma_{sd} I   \sigma_{sc} I   \sigma_{sg} I ]
  [ \sigma_{dw} I   \sigma_{ds} I   \sigma_{dd} I   \sigma_{dc} I   \sigma_{dg} I ]
  [ \sigma_{cw} I   \sigma_{cs} I   \sigma_{cd} I   \sigma_{cc} I   \sigma_{cg} I ]
  [ \sigma_{gw} I   \sigma_{gs} I   \sigma_{gd} I   \sigma_{gc} I   \sigma_{gg} I ]
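Eqs. (8)-(9) and the Appendix C/D structure can be sketched as follows, assuming equal product counts per type for brevity (the paper's types have unequal counts, which Hwang [31] treats); the inputs are assumed, not the authors' data.

```python
import numpy as np
from scipy.linalg import block_diag

def sur_gls(X_blocks, s_list, sigma):
    """Stacked GLS estimator of Eq. (9), delta = (Z'V^-1 Z)^-1 Z'V^-1 S,
    with Z block diagonal as in Appendix C and V built from the
    sigma_ij I blocks of Appendix D."""
    Z = block_diag(*X_blocks)             # one (products x 3) block per type
    S = np.concatenate(s_list)
    n = X_blocks[0].shape[0]
    V_inv = np.linalg.inv(np.kron(sigma, np.eye(n)))
    return np.linalg.solve(Z.T @ V_inv @ Z, Z.T @ V_inv @ S)

# With a diagonal sigma (the uncorrelated case found in Section 6), this
# reduces to applying OLS to each equation one by one.
```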

References

[1] A.D. Adams, R. Nelson and P.A. Todd, Perceived Usefulness, Ease of Use, and Usage of Information Technology: A Replication, MIS Quarterly 16, No. 2 (June 1992) 227-247.
[2] A.H. Agajanian, A Bibliography on System Performance Evaluation, Performance Evaluation Review 5, No. 1 (Feb. 1976) 53-64.
[3] D.L. Amoroso, Using End User Characteristics to Facilitate Effective Management of End User Computing, Journal of End User Computing 4, No. 4 (Fall 1992) 5-15.
[4] D. Amoroso and P. Cheney, A Report on the State of End-User Computing in Some Larger North American Insurance Firms, Journal of Information Management 8, No. 2 (Spring 1987) 39-48.
[5] E.E. Anderson, A Heuristic for Software Evaluation and Selection, Software: Practice and Experience 19, No. 8 (Aug. 1989) 707-717.
[6] T.W. Anderson, An Introduction to Multivariate Statistical Analysis, 2nd edition (Wiley and Sons, New York, 1984).
[7] J.E. Bailey and S. Pearson, Development of a Tool for Measuring and Analyzing User Satisfaction, Management Science 29, No. 5 (1983) 530-545.
[8] B. Beizer, Quality is Not the Goal!, American Programmer, No. 6 (June 1993) 5-11.
[9] H. Bidgoli, DSS Products Evaluation: An Integrated Framework, Journal of Systems Management, No. 40 (November 1989) 27-34.
[10] R.P. Bostrom, L. Olfman and M.K. Sein, The Importance of Learning Style in End-User Training, MIS Quarterly 14, No. 1 (March 1990) 101-119.
[11] P.H. Cheney, R.I. Mann and D.L. Amoroso, Organizational Factors Affecting the Success of End-User Computing, Journal of Management Information Systems 3, No. 1 (Summer 1986) 65-80.
[12] D. Coupal and P.N. Robillard, Factor Analysis of Source Code Metrics, Journal of Systems and Software 12, No. 3 (July 1990) 263-269.
[13] J.S. Cramer, Efficient Grouping, Regression and Correlation in Engel Curve Analysis, Journal of the American Statistical Association 59 (March 1964) 233-250.
[14] T.P. Cronan and D.E. Douglas, End-User Training and Computing Effectiveness in Public Agencies, Journal of Management Information Systems 6, No. 4 (Spring 1990) 21-40.
[15] F.D. Davis, Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology, MIS Quarterly 13, No. 3 (September 1989) 319-340.
[16] F.D. Davis, R.P. Bagozzi and P.R. Warshaw, User Acceptance of Computer Technology: A Comparison of Two Theoretical Models, Management Science 35, No. 8 (August 1989) 982-1003.
[17] S.A. Davis and R.P. Bostrom, Training End Users: An Experimental Investigation of the Roles of the Computer Interface and Training Methods, MIS Quarterly 17, No. 1 (March 1993) 61-81.
[18] DOD-STD-2168: Defense System Software Quality Program, Military Standard of the Department of Defense of the United States of America, Draft, April 1987.
[19] W.J. Doll and M.U. Ahmed, Documenting Information Systems for Management: A Key to Maintaining User Satisfaction, Information and Management 8, No. 4 (1985) 221-226.
[20] W.J. Doll and G. Torkzadeh, The Measurement of End-User Computing Satisfaction, MIS Quarterly 12, No. 2 (June 1988) 259-276.
[21] W.J. Doll and G. Torkzadeh, The Quality of User Documentation, Information and Management 12, No. 2 (1987) 73-78.
[22] I. Eriksson and A. Törn, A Model for IS Quality, Software Engineering Journal, No. 6 (July 1991) 152-158.
[23] A. Eskenasi, Evaluation of Software Product Quality by Means of Classification Methods, Journal of Systems and Software 10, No. 3 (Oct. 1989) 213-216.
[24] C.R. Franz and D. Robey, Organizational Context, User Involvement, and the Usefulness of Information Systems, Decision Sciences 17, No. 3 (Summer 1986) 329-356.
[25] C.W. Frenzel, Management of Information Technology (Boyd and Fraser, Boston, 1992).
[26] N.C. Goodwin, Functionality and Usability, Communications of the ACM 30, No. 3 (March 1987) 229-233.
[27] J.D. Gould and C. Lewis, Designing for Usability: Key Principles and What Designers Think, Communications of the ACM 28, No. 3 (March 1985) 300-311.
[28] G.I. Green and C.T. Hughes, Effects of DSS Training and Cognitive Style on Decision Process, Journal of Management Information Systems 3, No. 2 (Fall 1986) 83-93.
[29] W.H. Greene, Econometric Analysis (Macmillan, New York, 1990).
[30] A.W. Harrison and R.K. Rainer, Jr., The Influence of Individual Differences on Skill in End-User Computing, Journal of Management Information Systems 9, No. 1 (Summer 1992) 93-111.
[31] H. Hwang, Estimation of a Linear SUR Model With Unequal Numbers of Observations, Review of Economics and Statistics 72, No. 3, 510-515.
[32] ISO-8402: Quality Vocabulary, International Standardization Institute (1986).
[33] B. Ives, M.H. Olson and J.J. Baroudi, The Measurement of User Information Satisfaction, Communications of the ACM 26, No. 10 (October 1983) 785-793.
[34] M. Kletke, J.E. Trumbly and D.L. Nelson, Integration of Microcomputers into the Organization: A Human Adaptation Model and the Organizational Response, Journal of Microcomputer Systems Management 3, No. 1 (Winter 1991) 23-35.
[35] K.E. Knight, A Functional and Structural Measurement of Technology, Technological Forecasting and Social Change 27, No. 2/3 (May 1985) 107-127.
[36] H.C. Lucas, Performance and the Use of an Information System, Management Science 21, No. 8 (April 1975) 908-919.
[37] G.S. Maddala, Econometrics (McGraw-Hill, New York, 1988).
[38] R.A. Mata-Toledo and D.A. Gustafson, A Factor Analysis of Software Complexity Measures, Journal of Systems and Software, No. 17 (1992) 267-273.
[39] N.P. Melone, A Theoretical Assessment of the User-Satisfaction Construct in Information Systems Research, Management Science 36, No. 1 (January 1990) 76-91.
[40] H.W. Miller, Quality Software: The Future of Information Technology, Journal of Systems Management, No. 40 (Dec. 1989) 8-14.
[41] E. Mumford and I. Banks, The Computer and the Clerk (Routledge and Kegan Paul, London, 1967).
[42] M.C. Munro, S.L. Huff and G. Moore, Expansion and Control of End-User Computing, Journal of Management Information Systems 4, No. 3 (Winter 1987-1988) 5-27.
[43] R.R. Nelson and P.H. Cheney, Training End Users: An Exploratory Study, MIS Quarterly 11, No. 4 (December 1987) 547-559.
[44] S.I. Nesbit, Evaluating Micro Software, Datamation 30, No. 11 (July 15, 1984) 74-78.
[45] R.K. Rainer, Jr. and A.W. Harrison, Toward Development of the End User Computing Construct in a University Setting, Decision Sciences 24, No. 6 (November/December 1993) 1187-1202.
[46] D.W. Rasmussen and T.W. Zuehlke, On the Choice of Functional Form for Hedonic Price Functions, Applied Economics 22, No. 4 (April 1990) 431-438.
[47] S. Rivard and S.L. Huff, Factors of Success for End-User Computing, Communications of the ACM 31, No. 5 (May 1988) 552-561.
[48] D. Robey, User Attitudes and Management Information System Use, Academy of Management Journal 22, No. 3 (September 1979) 527-538.
[49] G.C. Roper-Lowe and J.A. Sharp, The Analytic Hierarchy Process and its Application to an Information Technology Decision, Journal of the Operational Research Society 41, No. 1 (January 1990) 49-59.
[50] A. Rushinek and S. Rushinek, Mini/Micro Computer Evaluation of System Features: An Empirical Discriminant Model of Software and Hardware Expendability, Compatibility, Cost-efficiency, Installation and Delivery, Managerial and Decision Economics 5, No. 3 (June 1984) 150-159.
[51] A. Rushinek and S. Rushinek, What Makes Users Happy?, Communications of the ACM 29, No. 7 (July 1986) 594-598.
[52] L. Sachs, Applied Statistics: A Handbook of Techniques (Springer-Verlag, New York, 1984).
[53] A.H. Segars and V. Grover, Re-Examining Perceived Ease of Use and Usefulness: A Confirmatory Factor Analysis, MIS Quarterly 17, No. 4 (September 1993) 517-525.
[54] M.K. Sein, R.P. Bostrom and L. Olfman, Training End Users to Compute: Cognitive, Motivational and Social Issues, INFOR 25, No. 3 (1987) 236-255.
[55] D.W. Straub, Validating Instruments in MIS Research, MIS Quarterly 13, No. 2 (June 1989) 147-170.
[56] B. Szajna and R.W. Scamell, The Effects of Information System User Expectations on Their Performance and Perceptions, MIS Quarterly 17, No. 4 (December 1993) 493-514.
[57] G. Torkzadeh, The Quality of User Documentation: An Instrument Validation, Journal of Management Information Systems 5, No. 2 (Fall 1988) 99-108.
[58] G. Torkzadeh and W.J. Doll, Test-Retest Reliability of the End-User Computing Satisfaction Instrument, Decision Sciences 22, No. 1 (Winter 1991) 26-37.
[59] J.A. Turner, Computer Mediated Work: The Interplay Between Technology and Structural Jobs, Communications of the ACM 27, No. 12 (December 1984) 1210-1217.
[60] User Ratings of Computer Systems, Microcomputer Software (Datapro Research Corporation, Delran, NJ, 1986).
[61] J. Wesselius and F. Ververs, Some Elementary Questions on Software Quality Control, Software Engineering Journal, No. 5 (November 1990) 319-330.
[62] M.S. Wu, Selecting the Right Software Application Package, Journal of Systems Management 41 (September 1990) 28-32.
[63] A. Zellner, An Efficient Method of Estimating Seemingly Unrelated Regressions and Tests for Aggregation Bias, Journal of the American Statistical Association 57 (1962) 348-368.

Evan Anderson is the GMU Foundation Professor and Director of Technology Management in the Graduate Business Institute at George Mason University. He is the Director of the IT-Management Consortium and a member of the Institute for Computational Sciences and Informatics. He received his Ph.D. from Cornell University and held prior faculty and administrative appointments at Tulane University, Vanderbilt University, and the University of Texas, Dallas. He has been a Visiting Scholar at the University of Chicago and a Senior Member of St. Antony's College, University of Oxford. His teaching and research interests include economics of computing and information technologies, management of IT businesses, and international political economy. His papers have appeared in Journal of Business (University of Chicago), Operations Research, Management Science, Accounting Review, Managerial and Decision Economics, IIE Transactions, MIS Quarterly, Naval Research Logistics, Journal of Management Information Systems, and others.

Yu-Min Chen is the Senior Statistician at JCPenney Company, Inc. in Plano, TX. His research interests include time-series, forecasting methods, applied econometrics, choice modeling, and neural networks. He received his Ph.D. from Purdue University.
