

Using Internet/Intranet Web Pages to Collect Organizational Research Data

JEFFREY M. STANTON
STEVEN G. ROGELBERG
Bowling Green State University

Wide availability of networked personal computers within organizations has enabled new methods for organizational research involving presentation of research stimuli using Web pages and browsers. The authors provide an overview of the technological challenges for collecting organizational data through this medium as a springboard to discuss the validity of such research and its ethical implications. A review of research comparing Web browser-based research with other administration modalities appears to warrant guarded optimism about the validity of these new methods. The complexity of the technology and researchers' relative unfamiliarity with it have created a number of pitfalls that must be avoided to ensure ethical treatment of research participants. The authors highlight the need for an online research participant's bill of rights and other structures to ensure successful and appropriate use of this promising new research medium.

Between 1981 and the present, organizations in the United States have moved from
centralized computing facilities to networks of interconnected personal computers
(Abbate, 1999). These technology changes can have far-reaching implications for
organizational research, including reduced research costs, enlarged sample sizes,
shortened data collection-analysis-presentation cycles, improved access to previously
hard-to-reach populations (e.g., the disabled), and enhanced interactivity of research
materials. With these technological opportunities, however, come new challenges
associated with designing, conducting, and interpreting research results. This article
can serve as a road map to guide organizational researchers through unique issues
encountered when planning and implementing research using Web pages on the
Internet or an intranet.

Authors' Note: We thank Lilly Lin and Alexandra Luong for their assistance in the literature search con-
ducted for this article. Development of the ethics section of this manuscript was supported in part by award
SES9984111 to the first author from the National Science Foundation. Correspondence concerning this arti-
cle should be addressed to Jeffrey M. Stanton, School of Information Studies, Center for Science and Tech-
nology, Syracuse University, Syracuse, NY 13244-4100; e-mail: jmsbgsu@hotmail.com.
Organizational Research Methods, Vol. 4 No. 3, July 2001, 200-217
© 2001 Sage Publications

Scope
In this article, we focus on the use of Web page-based instruments for data collec-
tion. This focus provides a complement to some of the valuable, previously published
work on e-mail-based surveys (e.g., Simsek & Veiga, 2000) and on other uses of e-mail
for research (Anderson & Gansneder, 1995; Coomber, 1997; Schaefer & Dillman,
1998; Swoboda, Muhlberger, Weitkunat, & Schneeweiss, 1997). Using Web pages
and browsers for data collection in management and industrial-organizational
research can occur either over the Internet or an organization's intranet. The Internet, a
consortium of interconnected public and private computer networks, is notable for its
accessibility to millions of computer users around the world. In contrast, intranets, the
other foci of our article, are private networks administered by organizations and usu-
ally accessible only to organizational members. Organizational research conducted
via the Internet and via intranets share broad parameters while differing in certain
details. Although the differences do have minor methodological implications (e.g.,
intranets typically serve known populations), the major methodological challenges
discussed here are generally common to both modalities. It is also worth mentioning
that the scope of this study does not include the Internet as a target of study in and of
itself (e.g., as a set of sociological phenomena, see Jones, 1999).

Examples and Issues: Setting the Stage


Networked, browser-based research is highly flexible and can take many different
forms. Not only can any paper-and-pencil study appear on a browser, but the medium
can also provide unique, new opportunities for a researcher. Consider the following
three examples:
Example 1. A human resources department needs to make a quick decision about a
new benefits plan. To uncover concerns and preferences, they publicize a brief Web
survey on the corporate intranet. Employee survey responses are automatically
entered into a database. Periodically, the software on the server automatically sends
statistics and response rate information to the project leader in the human resources
department. After 3 days, all data have been collected, analyzed, and distributed to
decision makers.
Example 2. Researchers from a professional industry consortium are interested in
the identification of workplace barriers confronting sight-impaired individuals. They
develop a Web-based research instrument to be compatible with pwWebSpeak and
similar products that enable Web browsing for the sight impaired. They advertise their
research in newsgroups and listprocs for the disabled. In addition, the researchers send
dozens of e-mail chain letters to colleagues and contacts, asking them to inform sight-
impaired workers about the research. Results of the questionnaire will provide infor-
mation for a white paper that the consortium will publish to assist its members with
ADA (Americans With Disabilities Act) compliance and employee retention.

Example 3. University researchers are contracted to conduct a field experiment
examining how managers in a particular organization use and interpret 360-degree
feedback results. The field experiment is implemented on a customized Web server
that randomizes presentation of experimental stimuli to participants (e.g., different
360-degree feedback reports). Study instructions are presented to the participant via
streaming video. Furthermore, based on the demographic profile reported by the participant,
the server tailors the types of questions asked about feedback experiences. The
researchers solicit participation by providing the address of the research Web site in
the organization's monthly newsletter. The researchers must ensure, however, that
only managers respond to the study, so they capture the Internet protocol (IP)
addresses of participants and verify them against a list of IP addresses for all managers
in the organization.
Although by no means exhaustive of the possibilities for networked research, these
examples serve to point out the enormous potential of networked research such as
rapid access to study populations, enhanced interactivity of research materials, and
specialized access controls. At the same time, these examples also contain some of the
challenges of browser-based research. The challenges generally fall into one of three
interrelated categories: challenges of actually collecting data using new technology,
validity concerns, and ethical considerations. The remainder of this article addresses
these issues. Within each section, we examine available literature from the social sci-
ences that addresses relevant concerns. As an overview of how the sections intercon-
nect, as well as to provide the reader with the "big picture," Figure 1 presents a sche-
matic diagram of the networked research process.

[Figure 1: Schematic Overview of Networked Research Procedures]

Data Collection Challenges


Data collection using a Web browser as the medium for presentation of stimulus
materials can require extensive knowledge of client-server computing and related
technological issues. A detailed treatment of these issues is beyond the scope of this
article. Instead, we take a "breadth over depth" approach and attempt to sensitize
researchers to some key challenges indigenous to networked research. Where possi-
ble, we provide references to more extensive treatments of the various topics. The top-
ics covered in this section address three common issues: (a) constructing and posting
materials; (b) controlling access, authentication, and multiple response; and (c)
encouraging participation in networked research. Notably absent from this list is a dis-
cussion of sampling and recruiting. However, a detailed review of probability and
nonprobability sampling and recruiting strategies for networked research recently
appeared in Simsek and Veiga (2000) as well as Schillewaert, Langerak, and Duhamel
(1998).

Posting and Constructing Materials

Researchers who wish to present networked, browser-based stimulus materials
need hardware and software to fulfill the role of an Internet/Intranet server (see
Schmidt, Hoffman, & MacDonald, 1997). The server stores the research materials and
transmits them over the network on request. Server software can also be programmed
to customize research materials and to receive data back from the client. A researcher's
home organization may already maintain Web servers, but many Web hosting com-
panies also exist to provide these services (e.g., Yahoo). Schmidt (1997) provided a
general introduction to the hardware and software requirements of networked
research, and Schmidt et al. (1997) described particulars of implementing one's own
Web server.
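
To make the server's role concrete, the following is a minimal sketch, using only the Python standard library, of a server that transmits a one-page instrument and appends submitted responses to a file. The port, field names, and output file are illustrative assumptions rather than any particular package's conventions; a production deployment would add the security, validation, and access-control features discussed later in this article.

```python
# Minimal sketch of a survey-collection server (standard library only).
# Field names ("q1", "q2"), the port, and "responses.csv" are hypothetical.
import csv
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs

FORM_HTML = b"""<html><body>
<form method="post" action="/submit">
  Item 1: <input name="q1"><br>
  Item 2: <input name="q2"><br>
  <input type="submit" value="Submit responses">
</form></body></html>"""

class SurveyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the (static) research instrument.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(FORM_HTML)

    def do_POST(self):
        # Receive submitted data and append it to a CSV file.
        length = int(self.headers.get("Content-Length", 0))
        fields = parse_qs(self.rfile.read(length).decode("utf-8"))
        with open("responses.csv", "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.datetime.now().isoformat(),
                self.client_address[0],        # client IP, kept for later screening
                fields.get("q1", [""])[0],
                fields.get("q2", [""])[0],
            ])
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Your responses have been received.</body></html>")

if __name__ == "__main__":
    HTTPServer(("", 8080), SurveyHandler).serve_forever()
```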
In addition to server facilities, researchers need methods for authoring stimulus
materials and creating Web sites that provide the context for presenting the materials.
Books such as Sams Teach Yourself HTML 4 in 24 hours (Oliver, 1999) and Web sites
such as WebMonkey (http://www.webmonkey.com) provide extensive tutorial and
reference information on Web site development. Supovitz (1999) described a case
study of Web-based research from which he extracted eight lessons pertaining to the
design and implementation of an effective Web site for research. Many commercial
platforms exist for creating and distributing online surveys (King, 2000, contains a
recent review). Furthermore, Morrow and McKee (1998) discussed strategies for mak-
ing online research materials interactive.
When designing Web materials, it is of critical importance to pilot test research
materials, in part because materials may function as expected on one platform but
not on another. In addition, it is important to ensure that respondents' expertise with
browsers and Web-based materials matches or exceeds the sophistication and com-
plexity of the research instrument (for examples of problems in this area, see Bertot &
McClure, 1996; Tse, 1998).

Access Control, Authentication, and Multiple Response

In traditional research, gathering participants consists of contacting potential par-
ticipants by mail, telephone, or face-to-face meeting and requesting participation in
the research. Often a copy of the research materials accompanies the invitation. To our
knowledge, researchers rarely express much concern that a participant will photocopy
paper research materials and send them to new respondents. In contrast, recruiting
techniques available in networked research, together with the ephemeral, electronic
nature of the research materials, facilitate substantial opportunities for unsolicited
responses in browser-based research. For example, recruiting messages sent by e-mail
to potential participants can easily be forwarded to other individuals or lists of individ-
uals. When the URL of a Web survey appears in an e-mail message, the problem wors-
ens. Convenience or plausibility samples obtained through public advertising of a
study have even larger potential to attract unwanted participants.
As a result of how easily research requests and materials can be copied and redis-
tributed, two key aspects of networked research design are access control and authen-
tication. Access control refers to methods for controlling use of computer resources
(such as files stored on a server). Authentication refers to verification of the identity of
a particular computer user. Authentication is often paired with access control in situa-
tions where successful authentication grants particular access privileges. Access con-
trol and authentication can be preventative in the sense that unwanted research partici-
pants can be disallowed from gaining access to or submitting research materials.
Alternatively, authentication can be used to identify and remove their responses from
the data as a first step in data analysis.
In the simplest case of research on and within an entire organization, the use of an
organization's intranet often precludes responses from those outside the organization
(for an overview, see Gilmore, Kormann, & Rubin, 1999). In all other research scenar-
ios, unless convenience sampling is in use, researchers must have an access control or
authentication strategy to prevent unwanted responses. Passive controls may suffice in
some instances: If the research instrument is long and pertains to a very specific audi-
ence, there may be little motivation for a nonsampled individual to respond. In this
case, simply stating who is eligible to participate may be sufficient to eliminate
unwanted responses.
A standard strategy for authentication and access control is a respondent identifier
or password. Schmidt (2000) provided a detailed technical introduction to the imple-
mentation and use of passwords on Web-based research materials. Note that unless
participants perceive the processes involved in allocating passwords or identifiers as
random, they may have substantial concerns about anonymity, which may, in turn,
modify response patterns. Filtering can provide authentication after data collection by
checking passwords, reported demographic characteristics (e.g., removing males), or
IP addresses (unique identifiers of network locations). Responses not matching a mas-
ter list of legal identifiers are set aside or discarded.
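
As a concrete illustration of this post hoc filtering, the following sketch screens a response file against a master list of legal identifiers; the file names and the "respondent_id" column are hypothetical.

```python
# Minimal sketch of post hoc authentication: keep only submissions whose
# identifier appears on a master list of legal respondent IDs.
import csv

with open("master_ids.txt") as f:
    legal_ids = {line.strip() for line in f if line.strip()}

kept, set_aside = [], []
with open("responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Responses not matching the master list are set aside for review.
        if row["respondent_id"] in legal_ids:
            kept.append(row)
        else:
            set_aside.append(row)

print(f"{len(kept)} responses retained, {len(set_aside)} set aside for review")
```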

Multiple responding. Another aspect of avoiding unwanted participation is prevent-
ing multiple responses from the same individual. When an individual is asked to par-
ticipate in a research project, it is hoped that he or she will submit only one set of data
for the study in question. Multiple response can be inadvertent or purposeful. In inad-
vertent multiple response, an individual may accidentally activate the submit con-
trol more than once. Alternatively, the individual may mistakenly submit data prema-
turely and thus may resubmit after making a more complete response. In purposeful
multiple response, an individual knowingly responds multiple times to a research
request. This type of multiple response may be done to skew the findings in a particular
direction or to sabotage the research effort. Purposeful multiple responses provide a
greater challenge for data screening and analysis because the malicious respondent
may submit slightly different data each time to prevent simple filtering of duplicates.
Overcoming purposeful multiple responding can be difficult, but the following
strategies can be used during the data collection process to reduce multiple respond-
ing. First, when recruiting participants for a study, avoid motivating individuals to
engage in malicious response. Specifically, avoid extensive cross-postings of research
announcements and frequent e-mail participation reminders that may be perceived as
intrusive (Cho & LaRose, 1999; Michalak & Szabo, 1998). Second, include explicit
instructions outlining the need for one and only one response to the research request.
Stress the importance and meaningfulness of the research. Third, design the Web site
so respondents receive a confirmation query (e.g., "Are you sure you want to submit
your responses?") at the moment they submit their data. The server should also provide
immediate acknowledgment that data have been received. Fourth, using one of the
authentication strategies described above, one can assign each respondent a unique
identification code to submit with the research data.
After data collection, one can filter out multiple responses as the first step in data
analysis. An alternative to passwords is to use the IP address from the client computer
to flag multiple submissions from the same address. This technique is subject to false
positives because different people responding from the same computer may have the
same IP address attached to their submission. Augmenting server software to record
the time of day would allow filtering if data came from the same IP address and were
submitted within a brief period. Without unique identifiers or IP addresses, one can
still screen for identical submissions by matching demographics and/or research
responses. Malicious multiple responses not containing identical data are sometimes
noticeable as distributional anomalies (e.g., extreme outliers). When multiple identi-
cal responses are detected in the data, a researcher can typically take one of three
actions: drop all data in the multiple-response group, accept only the first submission
in the group, or accept only the last submission in the multiple-response group.
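
The following sketch illustrates one way such post hoc screening might be implemented, flagging submissions that arrive from the same IP address within a short window and then keeping only the first submission in each suspect group. The column names and the 30-minute window are illustrative assumptions, not established thresholds.

```python
# Minimal sketch of screening for multiple submissions using IP address and
# submission time; column names and the 30-minute window are illustrative.
import pandas as pd

df = pd.read_csv("responses.csv", parse_dates=["timestamp"])
df = df.sort_values(["ip_address", "timestamp"])

# Time elapsed since the previous submission from the same IP address.
gap = df.groupby("ip_address")["timestamp"].diff()
df["suspect_duplicate"] = gap.notna() & (gap <= pd.Timedelta(minutes=30))

# One common resolution: accept only the first submission in each suspect group.
deduplicated = df[~df["suspect_duplicate"]]
print(f"Dropped {df['suspect_duplicate'].sum()} suspected duplicate submissions")
```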

Encouraging Participation in Networked Research

In contrast to concerns for preventing responses from nonsampled individuals and
multiple responses from others, researchers always hope for the highest possible
response rate among sampled individuals to mitigate nonresponse bias (Luong &
Rogelberg, 1998; Rogelberg & Luong, 1998; Smart, 1966). Schwarz, Groves, and
Schuman (1998) listed standard methods of boosting response rates in traditional
research: advance notice, incentives, persuasive scripts or introductions, and reminder
notices. Each strategy has a counterpart in networked research. Witmer, Colman, and
Katzman (1999) uncovered evidence that advance notice of an e-mail-based study,
combined with the opportunity to decline participation in the research, helped to
enhance response rates. Anderson and Gansneder (1995) used a mail-merge strategy
to personalize each recruiting message. Teo, Lim, and Lai (1997) used $2 phone cards
as an incentive to encourage early response to their Web-based questionnaire instru-
ment. In Musch and Reips's (2000) review of the history of Web-based experimenta-
tion, they located 10 studies that offered lotteries from $10 to more than $1,000.
Another possibility involves providing the respondents with immediate general or per-
sonalized feedback concerning their research participation, an option that clearly
requires custom server programming. Researchers may also facilitate response by
maintaining the novelty of online data collection by designing diverse, intriguing, and
interactive Web pages. Finally, perhaps the best way to facilitate response to a present
research effort is to develop a credible track record (data from past research efforts
have been acknowledged and used responsibly and appropriately) with potential
respondents (Rogelberg, Luong, Sederburg, & Cristol, 2000).
Before closing our discussion of response facilitation, it is worth briefly discussing
the notion of calculating response rates. Even with a known list of sampled respon-
dents, it may be difficult to know the number actually contacted because of deactivated
addresses, server errors, and other problems that interfere with the delivery of online
recruiting messages. Without knowledge of the number of individuals that received a
recruiting solicitation, it can be difficult to ascertain the response rate obtained. Not
knowing the response rate makes it difficult to assess potential nonresponse bias. Kaye
and Johnson (1999) made recommendations for estimating the response rate for net-
worked research when it cannot be directly calculated. First, servers keep a log of the
number of times each Web page was requested. By comparing this log with the number
of completed research responses actually received, one can make a reasonable esti-
mate of active refusals: individuals who saw the research materials (or at least the first
page of instructions) and then declined to participate. In using this technique, it is
important to use the server's counts of "sessions" rather than "hits" because a page
might be requested multiple times (hits) by one browser in the course of that user's
total time reviewing the research materials (a session). See Bosnjak (2000) for an
application of this technique. Another strategy when using e-mail as a recruitment
medium is to explicitly ask refusals to decline participation by replying to the solicita-
tion with a response indicating their refusal. The clear problem with both of these latter
strategies is that they ignore the possibly large set of sampled individuals who simply
ignore the solicitation and do nothing.
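
A rough sketch of this estimation approach appears below: it counts sessions (approximated here as one unique IP address per day) that requested the first page of instructions and compares that count with the number of completed responses received. The log format (Common Log Format), page name, session rule, and completion count are assumptions for illustration only.

```python
# Minimal sketch of estimating active refusals from a Web server access log.
sessions = set()
with open("access.log") as f:
    for line in f:
        parts = line.split()
        # parts[0] is the client address; parts[6] is the requested path.
        if len(parts) > 6 and "/survey_intro.html" in parts[6]:
            ip = parts[0]
            day = parts[3].lstrip("[").split(":")[0]   # e.g., 10/Jul/2001
            sessions.add((ip, day))                    # one "session" per IP per day

completed = 405           # illustrative count of completed responses received
viewed = len(sessions)    # people who at least saw the first page of instructions
active_refusals = viewed - completed

print(f"Viewed: {viewed}, completed: {completed}, "
      f"estimated completion rate among viewers: {completed / viewed:.1%}")
```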
One of the important messages we hope to have communicated by raising all of the
issues described in this section is that the technology involved in browser-based
research adds a layer of complexity to the research process that many investigators
have not previously encountered. This added complexity clearly necessitates a learn-
ing period during which researchers become more comfortable and familiar with the
application of the new technology. More important, however, we suggest that the
added complexity of browser-based, networked research creates new threats to the
validity of research results and new challenges for the ethical treatment of research
participants. In the following sections, we examine the implications of the technology
challenges described above for validity and ethics.

Generalizability of Networked Research Data


Differential access to computers, method bias, and response environment effects
may adversely influence the generalizability of a networked study (Krantz & Dalal,
2000). In this section, we discuss each of these issues. It is important to note from the
beginning, however, that the best way to address generalizability concerns is by cross-
validating results using different research modalities. Cross-validation is especially
important when the research results are mission critical (e.g., when legal
defensibility is a concern). Although samples obtained through networked means may
stand on their own in exploratory research, when results are mission critical, cross-
validation of networked results with more traditional research techniques becomes a
necessity.

Bridging the digital divide. To promote generalizability, researchers ideally want
their sample to mirror, demographically, the theoretical population of interest (Oakes,
1972). The digital divide is a term referring to the relative lack of access of low-income
individuals to the Internet. We borrow the term here to capture the idea that individuals
in different occupations within an organization may have different degrees of access to
networked computers at work or home. Researchers (e.g., B. Thomas, Stamler,
Lafreniere, & Dumala, 2000) have reported substantial demographic skews when col-
lecting data pertaining to specific populations. Yost and Homer (1998) found substan-
tial differences in responses to a job attitude survey based on administration type (Web
vs. paper and pencil) until they controlled for job type. These results indicate that orga-
nizational researchers must remain sensitive to the fact that within most organizations,
there are employees who do not have sufficient access to technology to respond to net-
worked studies.
Table 1 reveals that one of the most common divergences between samples obtained
over the Internet and samples obtained through traditional research techniques per-
tains to differences in the demographic composition of samples. In some cases, the
effect sizes of these demographic differences are substantial. Within organizations,
access to the Internet or the organization's intranet appears to vary with occupational
type (Graphics and Visualization Unit [GVU], 1998). Workers in manufacturing jobs,
face-to-face customer service jobs, physically mobile jobs, and other positions not
commonly including computer work may have limited access to computers, browsers,
and the organization's intranet. GVU surveys have consistently indicated skews
toward overrepresentation of males, Whites, and North Americans as Internet users. It
is noteworthy, however, that in Table 1, these typical age and gender skews among
Internet respondents do not universally hold (also see Pettit, 1999). Recent data sug-
gest that gender skew is gradually being mitigated and that individuals from Europe
and other regions are gradually gaining in proportional representation (Nielsen
Netratings, 2000). Nonetheless, these findings all argue for the collection of compari-
son samples whenever demographic variables are suspected to influence a study's
focal variables. When demographic variables are related to the focal variables,
nonresponse by a particular demographic group will lead to nonresponse bias and thus
generalizability concerns (Rogelberg & Luong, 1998). For research conducted within
a single organization, testing for demographic skew can easily occur using databases
of personnel information. In samples obtained from the Internet at large, testing for
such demographic skews can be accomplished using databases of demographic infor-
mation such as the General Social Survey or profiles of Internet users such as the
annual survey conducted by GVU. Atchison (1998) used this strategy with some
degree of success in his dissertation.
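
As a concrete illustration, the following sketch compares the gender composition of an obtained sample against known population proportions (e.g., from personnel records or the General Social Survey) with a chi-square goodness-of-fit test. The counts shown are illustrative values, not data from any of the studies cited here.

```python
# Minimal sketch of testing an obtained sample for demographic skew against
# known population proportions; all numbers are illustrative.
from scipy.stats import chisquare

observed = [310, 190]              # males, females among 500 networked respondents
population_props = [0.55, 0.45]    # proportions in the target population
expected = [p * sum(observed) for p in population_props]

chi2, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
# A significant result suggests the sample's gender composition diverges from the
# population, signaling potential nonresponse or coverage bias.
```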

Method effects: comparisons of networked and traditional research. General-
izability of networked data can be compromised when the modality used to collect the
data systematically alters the data. A substantial literature exists that compares com-
puter administration of various types of tests with paper-and-pencil administration.
Importantly, a great deal of the literature in this area is quite old and substantially pre-
dates the use of Web browsers (e.g., Kiesler & Sproull, 1986; McBrien, 1984), which
may limit its applicability to browser-based research. Nonetheless, method effects
(such as faking and socially desirable responding) documented in previous research
may be exacerbated by Web-based administration (e.g., Zickar, 2000).

Table 1
Effect Sizes of Demographic and Substantive Differences Between Internet and Non-Internet Samples

Birnbaum (1999). Ran decision-making experiments with two samples: Internet respondents (n = 1,224) and undergraduate respondents (n = 124). Differences: Age: η2 = .19 (Internet: older); Gender: η2 = .01 (Internet: more males); substantive conclusions identical.

Buchanan and Smith (1999). Compared Internet respondents (n = 963) with paper-and-pencil respondents (n = 224) to a personality survey (self-monitoring). Differences: no mean differences detected; confirmatory factor analysis fit better in the Internet sample.

Krantz, Ballard, and Scher (1997). Obtained Internet responses (n = 573) and undergraduate responses (n = 233) to an experiment on body image and female attractiveness. Differences: Age: η2 = .47 (Internet: older); Gender: η2 = .00; substantive conclusions identical.

Pasveer and Ellard (1998). Obtained two samples of Internet respondents (n = 429, n = 1,657) and two samples of paper-and-pencil respondents (n = 760, n = 148) to a personality inventory. Differences: Age: d = .51 (Internet: older); Gender: η2 = .01 to η2 = .05 (Internet: more males); Marital status: η2 = .29 (Internet: greater % married); mean difference (d = .16) on one study variable.

Smith and Leigh (1997). Compared Internet respondents (n = 72) with paper-and-pencil undergraduate respondents (n = 56) in a survey study of sexual fantasies. Differences: Age: η2 = .12 (Internet: older); Gender: η2 = .29 (Internet: more males); dependent variables: d = .02 to d = .47.

Stanton (1998a). Conducted two-group confirmatory factor analysis of survey results on an Internet sample (n = 50) and a postal mail sample (n = 181). Differences: no differences in factor structure; Age: d = .24 (Internet: younger); Gender: η2 = .06 (Internet: more males); Hours/week: d = .52 (Internet: more); Job tenure: d = .14 (Internet: less); Unionized: η2 = .10 (Internet: fewer); Hourly: η2 = .15 (Internet: fewer); dependent variables: d = .47, d = .37.

Yost and Homer (1998). Compared different conditions (paper only and paper with an option for Web response), resulting in 1,090 paper-and-pencil versus 405 Web respondents to an attitude survey. Differences: after controlling for respondents' job type (because of differential access to the Web), mean differences on 2 of 13 items; Web responses were returned significantly faster than paper-and-pencil responses; Web respondents provided significantly longer open-ended comments.

Note. Effect sizes calculated and presented following recommendations of Rosenthal (1991, pp. 15-25).
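
For readers wishing to compute comparable indices in their own mode-comparison studies, the following sketch shows one way Cohen's d and an eta-squared analog might be calculated; the input values are illustrative and do not reproduce any entries in Table 1.

```python
# Minimal sketch of the two effect size indices reported in Table 1, computed
# from illustrative (simulated) data rather than the cited studies' actual data.
import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

def eta_squared_from_chi2(chi2, n):
    """Proportion-of-variance analog (phi squared) for a 1-df chi-square test."""
    return chi2 / n

rng = np.random.default_rng(0)
internet = rng.normal(3.8, 1.0, 200)   # illustrative attitude scores
paper = rng.normal(3.6, 1.0, 150)
print(f"d = {cohens_d(internet, paper):.2f}")
print(f"eta-squared = {eta_squared_from_chi2(chi2=6.5, n=350):.3f}")
```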

Responses to some instruments may vary based on respondents' beliefs about anonymity or expec-
tations about the purpose of the test (e.g., Allred, 1986; Kantor, 1991; Lautenschlager
& Flaherty, 1990). There may also be deeper implications of anonymity rooted in par-
ticipants' beliefs about the trustworthiness and reliability of computers and software.
For example, Kiesler and Sproull (1986) ascertained that networked participants
responded to a greater proportion of items, made longer open-ended responses, and
responded in a less socially desirable manner than those completing a pencil-and-
paper survey. Earlier work showed that people tended to be more self-absorbed and
less inhibited when communicating using a computer (Kiesler, Siegel, & McGuire,
1984; Sproull, 1985). Note that the novelty of a computer survey in the early 1980s
may have influenced individuals' response styles. In contrast, the popularization of the
Internet has raised public awareness of privacy issues, and this could adversely affect
respondents' openness when responding with a browser (GVU, 1998).
In contrast to concerns raised by the older research, recent research comparing
browser-based results with other modalities paints an optimistic picture. Table 1 sum-
marizes studies in which a direct or indirect comparison was accomplished between
samples of data collected through traditional means and networked samples. Most
studies showed minimal substantive differences in interpretation of the results
obtained from the different samples (cf. the many demographic differences). Where
mean differences did occur on substantive variables, they were generally insufficient
to influence the overall interpretation of results. Furthermore, of major concern for
correlational research, the factor structures of attitudinal measures appear to be invari-
ant across administration types (e.g., Buchanan & Smith, 1999; Stanton, 1998a). Of
course, invariant factor structure does not guarantee that two versions of an instrument
are equated with one another. Thus, we amplify the admonition offered above: When
research involves between-subjects comparisons of means, particularly with a sample
obtained at an earlier point in time or with a different mode of administration, we rec-
ommend that researchers obtain cross-validation evidence of their browser-based
results.
Uncontrolled response environments. Uncontrolled response environments can
negatively affect data generalizability in that each participant may respond to the Web
materials in a different context. Buchanan and Smith (1999) contrasted Internet
research strategies to research using other computerized platforms. In particular, they
highlighted the fact that researchers have no control over circumstances in which par-
ticipants complete research tasks. These circumstances include equipment and soft-
ware used by the participant and the environment in which that equipment exists.
Researchers cannot ascertain whether others are present when research materials are
completed, whether environmental distractions affected the research participant, or
whether software and hardware incompatibilities changed the appearance or behavior
of the stimuli. This latter point is particularly salient for tests that require control over
the timing of stimulus presentation or participant responses. Because connections
between servers and clients are asynchronous, such studies require specialized cli-
ent software to control timing of stimuli and to measure and/or control timing of partic-
ipant responses.
Importantly, the circumstances of research administration also include the state of
the respondent himself or herself. Although researchers working in the laboratory may
notice if participants are sleepy, intoxicated, or distressed, researchers conducting net-
worked research have no knowledge or control of the mental and physical state of
respondents (differing mental and physical status of respondents may introduce error
variance or confounding bias into the data). For research conducted within organiza-
tions, this offers no worse a challenge than standard mail-return survey practices.

Perceived generalizability. With online study results in hand, one of the most frus-
trating tasks for researchers may be convincing an audience that the data are represen-
tative of a relevant population and that the method has not otherwise skewed the
results. This lack of credibility may manifest for applied researchers in client or audi-
ence rejection of their results and for scientific researchers in difficulty publishing
their data. This credibility problem has two aspects: One aspect is real threats to
generalizability as discussed above. The other is simply the newness of the research
medium. Managers, editors, reviewers, and others who evaluate research may not have
had ample time or experience to develop reliable heuristics for judging the value of
networked organizational research. A set of studies conducted by Walsh (1998;
Ognianova, 1997; Walsh, McQuivey & Wakeman, 1999) confirmed that information
from the Internet is perceived as considerably less credible than traditional sources
(e.g., printed media). Walsh also documented that the perceived credibility of online
sources varies systematically with demographic variables of the audience. Age, gen-
der, frequency of Web usage, and other demographics accounted for about 8% of the
variance in perceived credibility of sources in a student sample and 21% of the vari-
ance in an adult sample. In particular, older individuals and those with less Web experi-
ence were more inclined to disbelieve information obtained there.
One implication of these findings is that researchers collecting data over the
Internet (and even over intranets) may have to work extra hard, at least for now, to dem-
onstrate the rigor of their research methods. The obvious, though costly, way of
accomplishing this is to collect data from a cross-validation sample that is obtained
through more traditional research methods. Most of the studies described in Table 1
undertook this strategy with the specific intention of comparing Internet and tradi-
tional samples.

Ethical Implications of Online Research


Rosenthal (1994) suggested that validity and research ethics were inextricably
entwined because the quality and proper conduct of a study affect both issues. Thus,
we would be remiss if our discussion of browser-based research did not include a dis-
cussion of ethics. To protect the welfare of research participants, all researchers must
adhere to the ethical principles of their respective specializations (e.g., American Psy-
chological Association; see Lowman, 1998). Although this imperative applies regard-
less of the modality in which research is conducted, browser-based research intro-
duces new ethical challenges for researchers.

Ethics Overview

The principal, overarching ethical challenge facing networked researchers con-


cerns participant privacy and anonymity. Unfortunately, in networked research,
regardless of the technological sophistication, it is very difficult to guarantee the ano-
nymity of respondents. Cho and LaRose (1999) gave a comprehensive discussion of
privacy concerns associated with soliciting participation in research studies over the
Internet, as well as concerns for anonymity and confidentiality. Data from Bartel-
Sheehan and Grubbs-Hoy (1999) strongly suggested that privacy concerns could sub-
stantially and negatively influence response rates to online research. Cho and LaRose
(1999) and Stanton (1998b) provided reminders that the collection of passwords, IP
addresses, and other unique identifiers may affect the degree to which respondents
trust the anonymity of the research. These techniques clearly place an extra burden on
the researcher to maintain the secrecy of the identification data provided.
Schmidt (2000) provided technical information on threats to data security for stud-
ies conducted over the Internet. In brief, unless the researcher's Web server and the
participant's Web client use Secure Sockets Layer (SSL) or another interface to pro-
vide secure client-server connections, there is no way to ensure that the participant's
responses cannot be accessed by a third party. Therefore, in seeking informed consent
from participants, it is incumbent on the researcher to indicate that although every
effort will be made to secure responses, the Internet/intranet is too uncontrolled to be
able to make perfect assurances of anonymity or confidentiality. Even with client-
server security, depending on how the participant's computer is configured, it still may
be the case that participant responses could be inadvertently cached on the client com-
puter. Consequently, participant responses could be purposely or accidentally viewed
by others (this problem is exacerbated when computers are shared). B. Thomas et al.
(2000) and others reviewed the importance of security protection for researchers' Web
sites to avoid unintentional release of data and misuse of computer resources. We
obtained a relevant case study through personal communications, in which we ascer-
tained that a researcher had distributed survey materials to a large number of partici-
pants using a list processor or listproc. Unfortunately, the technology was incor-
rectly configured such that when participants responded, their completed surveys were
distributed to the entire list. This simple case illustrates that the amplifying power of
the technology and a researcher's lack of mastery of that technology can enable seri-
ous breaches of anonymity and confidentiality.
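
To illustrate the client-server security point, the sketch below wraps a standard-library Web server in SSL/TLS so that responses travel encrypted. The certificate and key file names are placeholders; in practice, a certificate trusted by participants' browsers would be needed to avoid security warnings.

```python
# Minimal sketch of serving a research site over HTTPS with the standard library.
import ssl
from http.server import HTTPServer, SimpleHTTPRequestHandler

# SimpleHTTPRequestHandler stands in for the survey handler sketched earlier.
httpd = HTTPServer(("", 8443), SimpleHTTPRequestHandler)

# "server.crt" and "server.key" are placeholder credential files.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")

# Wrap the listening socket so all client-server traffic is encrypted in transit.
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()
```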
Finally, it is worth mentioning that if the organizational research project involves
monitoring and analysis of interactions occurring in a listproc, discussion group, or
chat room, privacy and anonymity issues take on a complicated new dimension. The
principle of informed consent requires that in any research on identifiable individuals,
those individuals must give at least tacit approval to participation in the research
(Sieber, 1992). J. Thomas (1996a) reviewed a highly controversial case in which an
undergraduate researcher from Carnegie Mellon University published research con-
taining serious ethical violations involving unannounced researcher observation of
such interactions. Due to space limitations, we cannot provide full discussion of this
topic, but an entire issue of the journal, Information Society, was dedicated to analysis
of ethical issues pertaining to research in cyberspace (see J. Thomas, 1996b).

Informed Consent and Debriefing

Typically, organizational research projects avoid deception and are innocuous in
content. As a result, informed consent and debriefing are not always salient issues.
Actual participation in the research is often taken as evidence of passive consent
(assuming the participant is not a minor). Occasionally, however, organiza-
tional research is sensitive in nature (e.g., sexual harassment) or may involve deception
(e.g., false feedback). Thus, it is important to understand options for consent and
debriefing in networked research.

Consent forms. Consent can serve as a gateway to the experimental research materi-
als such that the first element the prospective participant sees when accessing the
research materials is a consent form (Schmidt, 1997; Smith & Leigh, 1997). The par-
ticipant indicates consent by clicking a button and only then gains access to the study.
The challenge of informed consent in networked research stems from the word
"informed." In networked research, it is unknown whether participants have read or truly
comprehended the information on the consent form prior to indicating agreement.
Moreover, if a prospective participant does not understand a part of the consent form,
there is no immediate way of addressing questions. Some mechanisms exist to allay
these concerns. First, to promote comprehension and avoid superficial reading of the
information contained on the consent form, a multimedia explanation (e.g., streaming
video) of the consent material can be provided. A second mechanism involves design-
ing quiz items that tap whether the prospective participant understood the informa-
tion presented on the consent form. Access to the research materials can be made con-
tingent on correct responses. This has obvious implications for response rates and
illustrates one of the ways in which ethical, technology, and validity issues are tightly
interconnected.
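
The following sketch illustrates the quiz-gating idea: access to the instrument is granted only when every comprehension item is answered correctly. The items, answer key, and passing rule are illustrative assumptions, not prescribed content.

```python
# Minimal sketch of consent-comprehension gating; items and key are illustrative.
QUIZ_KEY = {
    "q_voluntary": "yes",      # "Is your participation voluntary?"
    "q_withdraw": "yes",       # "May you withdraw at any time without penalty?"
    "q_data_use": "research",  # "How will your responses be used?"
}

def consent_quiz_passed(submitted_answers: dict) -> bool:
    """Return True only if every comprehension item was answered correctly."""
    return all(
        submitted_answers.get(item, "").strip().lower() == correct
        for item, correct in QUIZ_KEY.items()
    )

# In the server's POST handler, the survey itself would be served only when
# consent_quiz_passed(...) is True; otherwise the consent form is redisplayed.
print(consent_quiz_passed(
    {"q_voluntary": "yes", "q_withdraw": "yes", "q_data_use": "research"}))
```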
Debriefing. Organizational research occasionally involves mild forms of decep-
tion. Ethically, deception is deemed acceptable to the extent that it does not cause
harm, that it serves important research goals, and that alternative methods are unavail-
able. A condition of using deception, however, is that the researcher will conduct some
type of debriefing with participants after the study is completed. Debriefing in the vir-
tual world is not straightforward. For instance, the participant may easily leave a
research Web site prematurely without reading the debriefing. Alternatively, partici-
pants may not understand the debriefing, and yet the researcher is not present to clarify
any confusion. Relatedly and most important, the researcher is not present to deal with
adverse participant reactions. To address some of these concerns, we recommend the
following: First, as above, consider using multimedia or, at the very least, have the debriefing
statement open in a separate window. Second, contact information should be pro-
vided so participants can ask questions and share concerns. Finally, research that may
elicit strong emotional reactions should simply not be administered using networked
methods.

Reporting Results

One final unique ethical challenge to researchers who use networked data collec-
tion lies in the reporting of research results. We have previously discussed problems
with ascertaining response rates, assessing nonresponse biases, discarding responses
from nonsampled individuals, and discarding multiple responses from single individ-
uals. Each of these issues can affect the quality of the data reported and the validity of
the conclusions drawn from the results. It is the researchers responsibility to make a
complete report of anomalies encountered (and how they were resolved) throughout
data collection and analysis processes to facilitate judgments about data quality and
validity by reviewers and readers (Aguinis & Henle, in press).

Conclusion
Predictions always entail risk, but we feel confident in suggesting that use of net-
worked research methods will continue to burgeon for the foreseeable future. We have
reviewed many challenges that researchers currently face and provided some sugges-
tions for dealing with these. Before closing, we would like to broach some future needs
that we see as important for ensuring the continuing success of networked research
efforts.
Participant's bill of rights. Some of the ethical issues we reviewed suggest that our
ability to use technology in the service of research has grown faster than our ability to
apply ethics to the resulting new scenarios. General principles for the treatment of
human subjects could be mapped specifically onto some of these new scenarios to cre-
ate a networked research participant's bill of rights. This bill of rights would promote
norms concerning what online research participants can expect and demand from
researchers who seek their participation. These norms would pertain to the whole net-
worked research experience, from receiving a recruiting solicitation all the way to
obtaining reports of the research results.
SPAM-free recruitment protocol. On a more detailed level than a participant's bill
of rights, researchers need specific, operational guidelines on how to use e-mail and
other electronic contact methods to reach potential research participants without
breaching privacy or violating the standard rules of netiquette (Davidson, 1999). For
example, one wise but relatively unknown rule is to always contact the owner of a
listproc individually and privately to ask permission before posting a recruiting mes-
sage to that listproc. Ideas such as these could be codified into a publicly available pro-
tocol for contacting participants electronically.
Study certification and registry. We have some concern that overuse of networked
research strategies may eventually result in substantial reductions in response rates,
more attempts to sabotage research projects, and lower quality responses. Cho and
LaRose (1999) underscored the idea that as the popularity of Internet data collection
increases, individuals may become more resistant to responding. Rogelberg and
Luong (1998) documented a similar phenomenon (for paper-and-pencil surveys)
within organizations and termed it oversurveying. Schwarz et al. (1998) indicated that
response rates across all types of studies have been declining over time. Together, these
pieces of evidence suggest that researchers must treat Internet research participants as
a finite and depletable resource. A constant barrage of e-mail solicitations to partici-
pate in Internet research seems an assured way of angering potential research partici-
pants and discouraging their participation. In addition, the ease with which different
stakeholders within an organization can create and field an intranet survey raises a sim-
ilar concern for intranet-based research.
At present, several clearinghouse sites exist where researchers can register and pub-
licize their projects (see Azar, 2000). Thinking more broadly, the potential for overuse
of participant resources argues for some type of networked research consortium. Such
a consortium could provide a similar project registry function, but this could be paired
with guidelines and assistance to help researchers avoid alienating potential research
participants.

Professional respondent panels. In market research and other areas of social sci-
ence, researchers often assemble a working panel of respondents who typically
respond to research stimuli in exchange for modest stipends or perquisites. When
thoughtfully realized, these panels can be composed of highly representative individu-
als whose responses collectively provide similar or superior generalizability to
nonprobability samples of volunteers. Such professional panels also have the potential
to mitigate the oversurveying problems mentioned above.

More empirical research on networked methodologies. A great deal of method-
ological research is needed to truly understand how to design and interpret data col-
lected from the Internet and intranets. First, more research is needed on differences in
responses and respondents obtained by different sampling/recruiting techniques, par-
ticularly as access to the Internet and intranets continues to grow. Second, the base rate
of malicious multiple responding is unknown. An experiment could be devised to
explore the effects of the recruiting method and the content of the research instrument
on the likelihood of multiple response. Third, statistical explorations are needed to
ascertain the influence of multiple responding on data quality. In addition, techniques
such as classification analysis could be applied to networked data in the service of
detecting response patterns that signify multiple response. Fourth, numerous ques-
tions remain in the area of nonresponse. Do passwords for access control decrease par-
ticipants' sense of anonymity? Do response rates vary with the type of access control
used? The use of complex access control or authentication strategies may inhibit
responding because of the extra time and effort respondents must spend gaining access
to research materials. Finally, an additional avenue of inquiry concerns the perceived
validity (among managers, editors, reviewers, and others) of organizational data collected via
the Internet and intranet.
A plethora of additional opportunities exists for methodological explorations of
networked research. With structures in place to support ethical networked research
practices and additional methodological research to understand and address validity
concerns, networked research may well become the method of choice for organiza-
tional researchers in the near future.

References
Abbate, J. (1999). Inventing the Internet. Cambridge, MA: MIT Press.
Aguinis, H., & Henle, C. A. (in press). Ethics in research. In S. G. Rogelberg (Ed.), Handbook of
research methods in industrial and organizational psychology. Malden, MA: Blackwell.
Allred, L. J. (1986). Sources of non-equivalence between computerized and conventional psy-
chological tests. Unpublished doctoral dissertation, Johns Hopkins University.
Anderson, S. E., & Gansneder, B. M. (1995). Using electronic mail surveys and computer-
monitored data for studying computer-mediated communication systems. Social Science
Computer Review, 13, 33-46.
Atchison, C. (1998). Men who buy sex: A preliminary description based on the results from a
survey of the Internet-using population. Unpublished doctoral dissertation, Simon Fraser
University, Burnaby, Canada.
Azar, B. (2000). A Web experiment sampler. Monitor on Psychology, 31(4), 46-47.
Bartel-Sheehan, K., & Grubbs-Hoy, M. (1999). Flaming, complaining, abstaining: How online
users respond to privacy concerns. Journal of Advertising, 28(3), 37-51.
Bertot, J. C., & McClure, C. R. (1996). Electronic surveys: Methodological implications for us-
ing the World Wide Web to collect survey data. Proceedings of the ASIS Annual Meeting,
33, 173-185.
Birnbaum, M. H. (1999). Testing critical properties of decision making on the Internet. Psycho-
logical Science, 10, 399-407.
Bosnjak, M. (2000, May). Participation in non-restricted Web surveys: A typology and explana-
tory model for non-response. Paper presented at the 55th annual conference of the Ameri-
can Association for Public Opinion Research, Portland, OR.
Buchanan, T., & Smith, J. L. (1999). Using the Internet for psychological research: Personality
testing on the World Wide Web. British Journal of Psychology, 90, 125-144.
Cho, H., & LaRose, R. (1999). Privacy issues in Internet surveys. Social Science Computer Re-
view, 17, 421-434.
Coomber, R. (1997). Using the Internet for survey research. Sociological Research Online, 2(2)
[Online]. Available: http://www.socresonline.org.uk/2/2/2.html
Davidson, S. (1999). From spam to stern: Advertising law and the Internet. In D. W. Schumann
& E. Thorson (Eds.), Advertising and the World Wide Web (pp. 233-263). Mahwah, NJ:
Lawrence Erlbaum.
Gilmore, C., Kormann, D., & Rubin, A. D. (1999). Secure remote access to an internal Web
server. IEEE Network, 13(6), 31-37.
Graphics and Visualization Unit, Georgia Institute of Technology. (1998). 10th WWW User Sur-
vey [Online]. Available: http://www.cc.gatech.edu/gvu/user_surveys/survey-1998-10/
Jones, S. G. (Ed.). (1999). Doing Internet research: Critical issues and methods for examining
the Net. Thousand Oaks, CA: Sage.
Kantor, J. (1991). The effects of computer administration and identification on the Job Descrip-
tive Index (JDI). Journal of Business & Psychology, 5, 309-323.
Kaye, B. K., & Johnson, T. J. (1999). Taming the cyber frontier: Techniques for improving on-
line surveys. Social Science Computer Review, 17, 323-337.
Kiesler, S., Siegel, J., & McGuire, T. (1984). Social psychological effects of computer-mediated
communication. American Psychologist, 39, 1123-1134.
Kiesler, S., & Sproull, L. S. (1986). Response effects in the electronic survey. Public Opinion
Quarterly, 50, 402-413.
King, N. (2000). What are they thinking? PC Magazine, 19(4), 163-178.
Krantz, J. H., Ballard, J., & Scher, J. (1997). Comparing the results of laboratory and World-
Wide Web samples on determinants of female attractiveness. Behavior Research Methods,
Instruments, and Computers, 29, 264-269.
Krantz, J. H., & Dalal, R. (2000). Validity of Web-based psychological research. In M. H.
Birnbaum (Ed.), Psychological experimentation on the Internet (pp. 35-60). San Diego:
Academic Press.
Lautenschlager, G. J., & Flaherty, V. L. (1990). Computer administration of questions: More de-
sirable or more social desirability? Journal of Applied Psychology, 75, 310-314.
Lowman, R. L. (Ed.). (1998). The ethical practice of psychology in organizations. Washington,
DC: American Psychological Association.
Luong, A., & Rogelberg, S. G. (1998). How to increase your survey response rate. The
Industrial-Organizational Psychologist, 36(1), 61-65.
McBrien, B. D. (1984, November). The role of the personal computer in data collection, data
analysis, and data presentation: A case study. Paper presented at the ESOMAR Confer-
ence, Nice, France.
Michalak, E. E., & Szabo, A. (1998). Guidelines for Internet research: An update. European
Psychologist, 3(1), 70-75.
Morrow, R. H., & McKee, A. J. (1998). CGI scripts: A strategy for between-subjects experimen-
tal group assignment on the World-Wide Web. Behavior Research Methods, Instruments,
and Computers, 30, 306-308.
Musch, J., & Reips, U.-D. (2000). A brief history of Web experimenting. In M. H. Birnbaum
(Ed.), Psychological experimentation on the Internet (pp. 61-87). San Diego, CA: Aca-
demic Press.
Nielsen Netratings. (2000). Internet year 1999 in review [Online]. Available: http://www.
nielsen-netratings.com/press_releases/pr_000120_review.htm
Oakes, W. (1972). External validity and the use of real people as subjects. American Psycholo-
gist, 27, 959-962.
Ognianova, E. V. (1997). Audience processing of news and advertising in computer-mediated
environments: Effects of the content provider's perceived credibility and identity. Unpub-
lished doctoral dissertation, University of Missouri, Columbia.
Oliver, D. (1999). Sams teach yourself HTML 4 in 24 hours (4th ed.). Indianapolis, IN: Sams.
Pasveer, K. A., & Ellard, J. H. (1998). The making of a personality inventory: Help from the
WWW. Behavior Research Methods, Instruments, and Computers, 30, 309-313.
Pettit, F. A. (1999). Exploring the use of the World Wide Web as a psychology data collection
tool. Computers in Human Behavior, 15, 67-71.
Rogelberg, S. G., & Luong, A. (1998). Non-response to mailed surveys: A review and guide.
Current Directions in Psychological Science, 7, 60-65.
Rogelberg, S. G., Luong, A., Sederburg, M. E., & Cristol, D. S. (2000). Employee attitude sur-
veys: Exploring the attitudes of noncompliant employees. Journal of Applied Psychology,
85, 284-293.
Rosenthal, R. (1991). Meta-analytic procedures for social research. Newbury Park, CA: Sage.
Rosenthal, R. (1994). Science and ethics in conducting, analyzing, and reporting psychological
research. Psychological Science, 5, 127-134.
Schaefer, D. R., & Dillman, D. A. (1998). Development of a standard email methodology: Re-
sults of an experiment. Public Opinion Quarterly, 62, 378-397.
Schillewaert, N., Langerak, F., & Duhamel, T. (1998). Non-probability sampling for WWW sur-
veys: A comparison of methods. Journal of the Market Research Society, 40, 307-323.
Schmidt, W. C. (1997). World-Wide Web survey research: Benefits, potential problems, and so-
lutions. Behavior Research Methods, Instruments, & Computers, 29, 274-279.
Schmidt, W. C. (2000). The server side of psychology Web experiments. In M. H. Birnbaum
(Ed.), Psychological experimentation on the Internet (pp. 285-310). San Diego, CA: Aca-
demic Press.
Schmidt, W. C., Hoffman, R., & MacDonald, J. (1997). Operate your own World-Wide Web
server. Behavior Research Methods, Instruments, & Computers, 29, 189-193.
Schwarz, N., Groves, R. M., & Schuman, H. (1998). Survey methods. In D. T. Gilbert, S. T.
Fiske, & G. Lindzey (Eds.), The handbook of social psychology (Vol. 2, 4th ed., pp. 143-
179). Boston: McGraw-Hill.
Sieber, J. E. (1992). Planning ethically responsible research: A guide for students and internal
review boards. Newbury Park, CA: Sage.
Simsek, Z., & Veiga, J. F. (2000). The electronic survey technique: An integration and assess-
ment. Organizational Research Methods, 3, 92-114.
Smart, R. (1966). Subject selection bias in psychological research. Canadian Psychologist, 7,
115-121.
Smith, M. A., & Leigh, B. (1997). Virtual subjects: Using the Internet as an alternative source of
subjects and research environment. Behavior Research Methods, Instruments, & Com-
puters, 29, 496-505.
Sproull, L. S. (1985). Using electronic mail for data collection in organizational research. Acad-
emy of Management Journal, 29, 159-169.
Stanton, J. M. (1998a). An empirical assessment of data collection using the Internet. Personnel
Psychology, 51, 709-725.
Stanton, J. M. (1998b). Validity and related issues in Web-based hiring. The Industrial-
Organizational Psychologist, 36(3), 69-77.
Supovitz, J. A. (1999). Surveying through cyberspace. American Journal of Evaluation, 20, 251-263.
Swoboda, W. J., Muhlberger, N., Weitkunat, R., & Schneeweiss, S. (1997). Internet surveys by
direct mailing: An innovative way of collecting data. Social Science Computer Review, 15,
242-255.
Teo, T.S.H., Lim, V.K.G., & Lai, R.Y.C. (1997). Users and uses of the Internet: The case of Sin-
gapore. International Journal of Information Management, 17, 325-336.
Thomas, B., Stamler, L. L., Lafreniere, K., & Dumala, D. (2000). The Internet: An effective tool
for nursing research with women. Computers in Nursing, 18, 13-18.
Thomas, J. (1996a). When cyber-research goes awry: The ethics of the Rimm cyberporn
study. Information Society, 12(2), 189-197.
Thomas, J. (1996b). Introduction: A debate about the ethics of fair practices for collecting social
science data in cyberspace. Information Society, 12(2), 107-117.
Tse, A.C.B. (1998). Comparing the response rate, response speed and response quality of two
methods of sending questionnaires: E-mail vs. mail. Journal of the Market Research Soci-
ety, 40, 353-361.
Walsh, E. O. (1998, August). The value of the journalistic identity on the World Wide Web. Paper
presented at the conference of the Association for Education in Journalism and Mass Com-
munication, Baltimore.
Walsh, E. O., McQuivey, J. L., & Wakeman, M. (1999, November). Consumers barely trust Net
advertising. Cambridge, MA: Forrester Research.
Witmer, D. F., Colman, R. W., & Katzman, S. L. (1999). From paper-and-pencil to screen- and-
keyboard: Toward a methodology for survey research on the Internet. In S. G. Jones (Ed.),
Doing Internet research: Critical issues and methods for examining the Net (pp. 145-162).
Thousand Oaks, CA: Sage.
Yost, P. R., & Homer, L. E. (1998, April). Electronic versus paper surveys: Does the medium af-
fect the response? Paper presented at the annual meeting of the Society for Industrial and
Organizational Psychology, Dallas, TX.
Zickar, M. J. (2000). Modeling faking on personality tests. In D. R. Ilgen & C. L. Hulin (Eds.),
Computational modeling of behavior in organizations: The third scientific discipline
(pp. 95-113). Washington, DC: American Psychological Association.

Jeffrey M. Stanton is an assistant professor at Syracuse University's School of Information Studies, where he
conducts research on the organizational impacts of information technology. He is the director of the Syra-
cuse Information Systems Evaluation project. He holds Ph.D. and M.A. degrees in industrial-organizational
psychology and a B.A. in computer science.

Steven G. Rogelberg is an associate professor of psychology and director of the Institute for Psychological
Research and Application at Bowling Green State University of Ohio. His research focuses on improving or-
ganizational research methods and understanding and mitigating teamwork failures. He holds Ph.D. and
M.A. degrees in industrial-organizational psychology and a B.S. in psychology.
