
4772 Walnut Street, Suite 206, Boulder, Colorado 80301 educause.edu/ecar
Assessing What Faculty,
Students, and Staff Expect from
Information Technology
Organizations in Higher Education
Timothy M. Chester, Pepperdine University
ECAR Research Bulletin 18, 2010


Overview
As the fundamental importance of information technology (IT) has been recognized in
higher education, IT leaders often find themselves in new and challenging situations:

- During the institution's reaffirmation of accreditation, the visiting accreditation team notes that IT compares well to its peers in terms of services, but those services by themselves do not demonstrate the effectiveness of technology across the institution. As an IT leader, how do you respond?
- During the annual budget process, IT makes a request for significant new resources. As a precursor to supporting new IT funding, the president and provost ask for evidence that the funds granted last year were used effectively. As an IT leader, how do you respond?
- A faculty member sends an e-mail, copying the president and provost, protesting poor IT service delivery in his college in broad and sweeping terms. The faculty member concludes by noting, "Every faculty member I know feels the same way." As an IT leader, how do you respond?
Each of these challenges goes beyond the basics of delivering IT and speaks directly to
the challenge of delivering IT effectively. To respond successfully to the challenge of
accountability, IT organizations need evidence that demonstrates the value of IT
services to those outside the IT organization. Successful IT organizations are those that
are highly regarded for their use of an outcomes-based approach to assessment,
planning, and prioritization. However, the complex work of assessment is not a natural
competency for most IT organizations.
IT organizations are expected to perform multiple roles across institutions. These include
the transactional service and order-taking roles (including delivery of basic computing
and collaboration services and services on demand); the advisor and consultant roles, in
which IT staff reflect with end users on opportunities, challenges, and threats to
business and academic operations; the role of embedded or functional IT support, where
IT staff proactively consider options and implement solutions; and finally, the role of
thought leader. This last role has evolved as IT leaders became chief information
officers, and it reflects recognition by presidents, provosts, and chief financial officers
that the voice of technology advocacy should be represented at the leadership table.
While most IT organizations understand these different roles, the relationship between
differing role expectations and credible role performance is often misunderstood. In the
transactional role, successful performance is based on the belief that transactional
services are reliable, consistent, efficient, and responsive to end-user needs. When
performing the consultative and advisory roles, successful performance is based on
demonstrating business smarts and analytical capabilities and understanding business
and information architectures. In the thought leader role, successful performance is
defined in terms of building effective partnerships and demonstrating change advocacy.
Progressive role performance is foundational in the sense that credible performance in
the thought leader, advisory, and consultative roles depends entirely on successful
performance in the transactional role. For example, no IT organization can credibly
perform the role of thought leader long-term if there are basic questions about its ability
to provide consistent, reliable, and responsive transactional services.
For IT organizations, demonstrating the effective delivery of technology services is vital
to building appreciation, respect, and trustworthiness, otherwise known as credibility.
Because most of the work an IT department does remains in a "black box" to those
outside IT, the credibility of the IT organization is vital to securing acceptance, support,
autonomy, and adequate budgetary resources. Credibility allows IT organizations to
perform duties beyond the order-taking role. The question this bulletin seeks to answer
is how IT organizations can demonstrate the effective delivery of technology services in
a way that builds and sustains credibility.
Far too many IT organizations rely on credibility either derived from authority or accrued
through goodwill. This results in a weak foundation for successful role performance in
the consultative, advisory, and thought leader roles. Credibility derived from the
organization chart is not sustainable when detached from successful performance
providing transactional services. When this detachment persists, positional credibility
erodes and the IT organization experiences increasing levels of resistance, limiting its
effectiveness. To counter that resistance, IT leaders often turn to goodwill as a basis for
credibility. Because accruing goodwill often requires saying "yes" when saying "no" is
more prudent, this can result in a cycle of over-commitment and under-performance that
also limits effectiveness. At best, the delivery of IT services is inconsistent, less
responsive, more reactive, and more costly. At worst, the cycle of over-commitment and
under-performance results in a death spiral, eventually leading to radical overhauls of
both IT leadership and the IT organization. EDUCAUSE Review recently published a
synopsis of this type of situation, written from a provost's point of view.[1]

There is a third basis for credibility that correlates highly with sustainable forms of
appreciation, respect, and trustworthiness. When successful outcomes are
demonstrated through a regular, recurring cycle of assessment, planning, and
prioritization, IT can establish a credible foundation that supports successful
performance beyond the transactional, order-taking role. The most crucial inputs into this
planning cycle are valid and reliable measures indicating the effectiveness of technology
services.
Most IT organizations rely on institution-specific surveys to generate this type of
evidence. Existing sources of peer data, such as the EDUCAUSE Core Data Service, do
not speak to outcomes. Because of the diversity of IT services and of the ways in which
IT services are delivered across different types of institutions, the challenge of creating a
single approach is daunting. Despite this difficulty, there are several attempts under way
to create standardized performance measures for use by multiple institutions.[2]

One of these attempts is the Higher Education TechQual+ Project, established in 2007.
The goal for this project is to produce the following:

- Measures that conceptualize the effective delivery and use of technology in such a way that it can be practically measured, or operationalized, from the point of view of individuals outside the IT organization who depend on IT services.
- A set of easy-to-use web-based tools that allows institutions to create surveys based on the TechQual+ instrument, communicate with respondents, and analyze results.
- A peer database that allows institutions to compare their performance against that of other institutions, aggregated by Carnegie basic classification.
What distinguishes TechQual+ from other efforts at standardization is its focus on
defining effective outcomes from an end-user point of view. This end-user-centered
approach should not be confused with an attempt to gauge customer satisfaction: what IT
organizations refer to as "customer satisfaction" is typically thought of as "effectiveness"
by users outside the IT organization.
This research bulletin covers the TechQual+ approach to assessing what faculty,
students, and staff expect from IT organizations in higher education. The theoretical
approach behind TechQual+ is presented, as are the key data indicators generated through
a TechQual+ survey and future plans for the refinement of the approach. The use of
the TechQual+ instrument and tools is demonstrated with examples from two
universities.
Highlights
The Higher Education TechQual+ Project is inspired by the groundbreaking research
that resulted in LibQual+, an outcomes-based approach for assessing the quality of
library services. Supported through the Association of Research Libraries (ARL),
LibQual+ is administered annually at over 1,000 institutions and has been translated into
multiple languages for use by international institutions. Data collected through LibQual+
is designed to help libraries improve services by aligning them with the expectations of
the communities they serve. In many regards, LibQual+ served as an agent of change
as libraries evolved from static, physical repositories into dynamic places for collaboration.
LibQual+ provides a core 22-item instrument that measures end users' evaluations of their
library experiences, along with a set of easy-to-use web-based tools for creating and
conducting LibQual+ assessments. It should be noted that the significant momentum
behind LibQual+ is due in part to the fact that most professional librarians also hold
faculty appointments; rigorous assessment and planning are firmly entrenched in their
culture of practice.
Understanding TechQual+ Survey Results
Both LibQual+ and TechQual+ are based on an approach to assessing service quality
first articulated as SERVQUAL.[3] This approach to understanding service quality is based
on assessing three different measures for every dimension of service:

- Minimum Expectations: the minimum level of service that a respondent finds acceptable.
- Desired Expectations: the level of service that a respondent really wants.
- Perceived Performance: the level of service that is typically provided, relative to both minimum and desired expectations.
For example, item #3 on the 2010 TechQual+ core instrument reads, "When it comes to
having wireless Internet coverage in all the areas that are important to me as a faculty,
student, or staff." Survey respondents are asked to rate their minimum expectations,
their desired expectations, and their performance evaluation, using a 1-to-9 scale for
each rating.
When analyzing the results, evaluations of perceived performance are best understood
by comparing them to both minimum and desired expectations. The range between
minimum and desired expectations constitutes a "Zone of Tolerance" that should be
understood as the range of possible service outcomes respondents find acceptable.
Should the perceived performance rating fall below the Zone of Tolerance, this
indicates performance that is below minimum expectations. Should it lie above the
Zone of Tolerance, this indicates performance that exceeds desired expectations. The
literature on the Zone of Tolerance concept suggests that end users find performance
adequate when it lies within the general range between their minimum and desired
expectations.[4]

In addition to the Zone of Tolerance, two other concepts are crucial to TechQual+. The
Adequacy Gap Score is computed by subtracting the minimum expectation rating from
the perceived performance rating: a positive number indicates the degree to which
service performance exceeds respondents' minimum expectations, and a negative number
the degree to which it falls below them. The Superiority Gap Score is computed by
subtracting the desired expectation rating from the perceived performance rating: a
positive number indicates the degree to which service performance exceeds desired
expectations, and a negative number the degree to which it falls below them.
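Because both gap scores are simple differences of mean ratings, the arithmetic is easy to express in code. The following is a minimal sketch in Python; the function names are invented for illustration and are not part of any TechQual+ tooling.

```python
# A minimal sketch of the TechQual+ gap-score arithmetic. Function names
# are invented for illustration; the TechQual+ project does not publish
# reference code.

def gap_scores(minimum: float, desired: float, perceived: float) -> tuple[float, float]:
    """Return (adequacy_gap, superiority_gap) for one service dimension.

    All three inputs are mean ratings on the 1-to-9 TechQual+ scale:
    minimum expectations, desired expectations, and perceived performance.
    """
    adequacy_gap = perceived - minimum     # > 0: above minimum expectations
    superiority_gap = perceived - desired  # > 0: above desired expectations
    return adequacy_gap, superiority_gap

def classify(minimum: float, desired: float, perceived: float) -> str:
    """Locate perceived performance relative to the Zone of Tolerance."""
    if perceived < minimum:
        return "below the Zone of Tolerance (under minimum expectations)"
    if perceived > desired:
        return "above the Zone of Tolerance (exceeds desired expectations)"
    return "within the Zone of Tolerance (acceptable to respondents)"
```

In these terms, satisfactory performance is a positive Adequacy Gap Score paired with a negative Superiority Gap Score.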
Below is a partial results table from a TechQual+ survey.[5]

Table 1. 2010 TechQual+ Student Survey Results, Pepperdine University (Items #3 and #5 Only)

| Item # | Item | Min. Expect. | Desired Expect. | Perceived Perform. | Adequacy Gap Score | Superiority Gap Score | n |
|--------|------|--------------|-----------------|--------------------|--------------------|-----------------------|-----|
| 3 | When it comes to wireless network coverage in all the areas that are important to me as a faculty, student, or staff member | 7.04 | 8.62 | 7.45 | 0.42 | -1.17 | 406 |
| 5 | When it comes to having access to important university-provided technology services from my mobile device | 5.77 | 7.63 | 6.47 | 0.70 | -1.16 | 285 |


When analyzing these results, the following can be observed.
1. The Zone of Tolerance for wireless network coverage is between 7.04 and 8.62,
on a scale of 1 to 9.
2. The Adequacy Gap Score for wireless network coverage is positive (0.42),
indicating performance above minimum expectations. The Superiority Gap
Score for wireless network coverage is negative (-1.17), indicating performance
below desired expectations. Thus, performance for wireless network coverage is
within the Zone of Tolerance, indicating satisfactory performance in the eyes of
respondents.
3. The Zone of Tolerance for mobile device access is between 5.77 and 7.63, on a
scale of 1 to 9.
4. The Adequacy Gap Score for mobile device access is positive (0.70), indicating
performance above minimum expectations. The Superiority Gap Score for
mobile device access is negative (-1.16), indicating performance below
desired expectations. Thus, performance for mobile device access is within the
Zone of Tolerance, indicating satisfactory performance in the eyes of
respondents.
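As a quick sanity check, these observations can be reproduced directly from the table's values with a short, self-contained snippet (item labels paraphrased from Table 1). Note that recomputing item #3's Adequacy Gap Score from the rounded means gives 0.41 rather than the published 0.42, presumably because published scores are computed from unrounded means.

```python
# (minimum, desired, perceived) mean ratings, taken from Table 1.
items = {
    "wireless network coverage (item #3)": (7.04, 8.62, 7.45),
    "mobile device access (item #5)":      (5.77, 7.63, 6.47),
}

for name, (minimum, desired, perceived) in items.items():
    adequacy = perceived - minimum     # gap vs. minimum expectations
    superiority = perceived - desired  # gap vs. desired expectations
    in_zone = minimum <= perceived <= desired
    print(f"{name}: adequacy {adequacy:+.2f}, superiority {superiority:+.2f}, "
          f"{'within' if in_zone else 'outside'} the Zone of Tolerance")
```

Both items print "within the Zone of Tolerance," matching observations 2 and 4 above.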
Another distinguishing characteristic of the TechQual+ approach is that it provides
indirect evidence of respondents' priorities. By comparing the Zone of Tolerance across
items, one can observe different levels of expectations. For example, the results above
show that item #5 has a lower Zone of Tolerance than item #3, suggesting that mobile
device support is a lower priority for respondents than wireless network coverage. End
users typically have higher expectations in areas that are more important to them.
These quantitative results become even more meaningful because of two other
distinguishing features of the TechQual+ approach. First, survey administrators can
include descriptive attributes for each respondent, such as role (faculty, student, or
staff), college or school affiliation, campus, department, gender, or age. The TechQual+
website allows up to 10 descriptive attributes per respondent, and survey results can be
filtered on these attributes. Second, when respondents indicate that perceived service
performance is equal to or lower than their minimum expectation, they are prompted to
provide suggestions. These free-form comments can be analyzed further to
contextualize the raw scores and turn them into actionable insights.
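In practice, this kind of segmentation amounts to a group-by over the raw responses. The sketch below assumes a hypothetical CSV export with one row per respondent per item; the TechQual+ web tools perform this filtering themselves, and the file name and column names here are invented for illustration.

```python
import pandas as pd

# Hypothetical export: one row per respondent per item, with descriptive
# attributes carried on each row. Assumed columns:
# item_id, role, school, minimum, desired, perceived, comment
responses = pd.read_csv("techqual_responses.csv")

# Mean Adequacy Gap Score for the wireless item (item #3), broken out by school.
wireless = responses[responses["item_id"] == 3].copy()
wireless["adequacy_gap"] = wireless["perceived"] - wireless["minimum"]
print(wireless.groupby("school")["adequacy_gap"].mean().sort_values())

# Free-form suggestions are prompted only when perceived performance is at
# or below minimum expectations, so those rows explain a negative gap.
unhappy = wireless[wireless["perceived"] <= wireless["minimum"]]
print(unhappy[["school", "comment"]].to_string(index=False))
```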
For example, at Pepperdine University the 2008 annual TechQual+ assessment showed
satisfactory Adequacy Gap Scores for wireless network coverage across all students.
When the results were filtered by school, however, they suggested dissatisfaction with
wireless network coverage among undergraduate students. Analyzing the free-form
suggestions for this service dimension revealed that the lack of wireless coverage in the
dormitories was the cause of the poor Adequacy Gap Scores. Expanding wireless
network coverage to the dormitories consequently became a higher priority for the IT
organization, and this data was used to support a budget request for the initiative.

Development of the TechQual+ Core Instrument
The core TechQual+ instrument includes 18 items that are designed to capture users'
evaluations of technology services at their institutions. In specifying these core items,
TechQual+ articulates a general approach to conceptualizing the expectations of faculty,
students, and staff. This may be counterintuitive to some, given the diversity of
institutions and the myriad ways in which IT services are organized within them.
However, though TechQual+ items are couched in general terms, by filtering based on
respondent attributes and analyzing free-form suggestions, one can easily turn
TechQual+ results into institution-specific action items.
In formulating the TechQual+ core instrument, a classical social scientific approach has
been followed. Project investigators have relied on focus groups at participating
institutions to ascertain the core IT commitments expected by faculty, students, and staff.
This work follows the Naturalistic Inquiry approach to qualitative research, in which
investigators rely on unstructured observations and conversations to formulate
general themes from unique and complex subject matter.[6] To date, project investigators
have conducted over 40 hours of focus groups at three institutions: a large, state-
supported research-extensive institution (University of New Mexico); a small, regional
liberal arts college (Abilene Christian University); and a highly selective, research-
extensive private institution (Boston University).
While there is incredible diversity across these institutions in terms of types of
technology services and service delivery models, the TechQual+ investigators have
found remarkable consistency in the core commitments expected of IT
organizations. These expectations hold up whether one is discussing IT expectations
with an engineering professor at Boston University, a student at Abilene Christian
University, or a staff member at the University of New Mexico. The three core
expectations are:

- Connectivity and Access: measures the service quality of network access and the ability to access online services.
- Technology and Technology Services: measures the service quality of core technology services, particularly online technology services.
- The End User Experience: measures the service quality of training and technology support services.
Each of these core commitments is assessed through six separate items, or service
dimensions, on the TechQual+ core instrument. These service dimensions are designed
to reflect the more specific expectations end users have for the core commitment.
The results from focus groups suggest that, when evaluating the perceived performance
of any service dimension, faculty, students, and staff subjectively rely on one or more
of the following yardsticks:

- Consistency: Is the service provided to end users consistently, independent of place, time, or individual service provider?
- Communication: Is there adequate and proactive communication about the service, and is that communication intelligible to individuals outside the IT organization?
- Collaboration: Does proficient use of the technology service effectively increase collaboration opportunities with others across the institution?
In general, when evaluating a specific technology service, end users tend to make a
positive evaluation of that service when it is delivered consistently, when communication
regarding the service is proactive and intelligible, and when the service increases
collaboration opportunities with others.
The purpose of the TechQual+ focus groups to date has been the identification of the three
core commitments and the evaluative yardsticks end users apply when assessing
service quality. There is much more work to be done. Future focus groups will be
directed at the specification of more precise service dimensions (or survey items) that
align with each core commitment. Once this work is accomplished, a new TechQual+
core instrument will be released, most likely in the spring of 2011.
To assist institutions with administering TechQual+ surveys, the project also provides
web-based tools that make it easier to create TechQual+ surveys, communicate with
respondents, and analyze results. The site provides graphs and reports that are suitable
for a variety of audiences, from faculty and students to campus leaders. TechQual+
surveys can also include custom, institution-specific service dimensions and open-ended
questions. The surveys are hosted on enterprise-grade infrastructure at Pepperdine
University that will scale to the largest of institutions.
The TechQual+ approach to assessing service quality is applicable to institutions of all
shapes and sizes. Smaller institutions, where IT is often mostly centralized, can use
TechQual+ to ascertain the strengths and weaknesses of technology services and align
their organizational priorities with those of their end-user community. Larger institutions
with decentralized services can filter TechQual+ results to assess the strengths and
weaknesses of services across decentralized units; such data is often helpful in
determining best practices or planning for service consolidation.
Demonstrating Effectiveness with TechQual+
At Pepperdine University, TechQual+ assessment data has been used to raise the
credibility of the IT organization in a way that improves morale and increases institutional
support for key technology initiatives. Upon arriving in 2007, the new CIO first asked all
IT staff members to complete a TechQual+ survey assessing the strength of the services
provided by their own organization. The results were dismal, reflecting significant issues
with morale. Six months later, staff perceptions of service quality were compared with
student perceptions. Not surprisingly, students had a much more positive perception of
service quality than the IT staff did, and this comparison helped shore up morale within
the IT department. The next year, the results of the student TechQual+ survey were used
to support a million-dollar budget request to install wireless network capabilities in the
dormitories. Once that project was completed, the student TechQual+ survey for the
following year showed dramatic improvement in the perceived performance of this
service dimension. By illustrating the positive results stemming from previous
investments in IT, the IT organization has established new credibility that has been
helpful as it has sought to increase the effective delivery and use of technology across
the institution.
Furman University has administered the TechQual+ survey annually since 2008. Furman
uses the TechQual+ data to raise campus awareness of efforts to improve technology
services, and the data has provided a framework for discussing strategic priorities for
technology services and for supporting budget requests. Results from annual TechQual+
assessments are posted on the IT department's website, discussed at faculty meetings,
and presented to the president's cabinet. The results demonstrated the need for
improved wireless access for students and showed that most faculty members
were very unhappy with the quality of technology services. This data was incorporated
into the CIO's annual planning efforts to great effect, and subsequent administrations of
the TechQual+ instrument have shown the positive results of those efforts, as perceptions
among both faculty and students have improved over time. By administering the
instrument annually, the IT leadership team at Furman University is able to identify
trends and take advantage of free-form comments and suggestions to turn end-user
perceptions into an institution-specific agenda for action.
This bulletin began with descriptions of three situations commonly encountered by IT
leaders in higher education. With end-user-focused data in hand, a one-time failure in
service delivery can be understood as exactly that, rather than hardening into an urban
myth of chronic IT problems. Good data also allows IT leaders to respond to the requests
of both administrators and accreditation bodies, which increasingly ask for evidence of
successful outcomes in this era of accountability. With TechQual+, IT organizations can
compile the evidence that helps them respond to these critical challenges. Assessment,
planning, prioritization, and accountability are the processes that increase the effective
delivery and use of technology.
What It Means to Higher Education
There are three broader implications of the Higher Education TechQual+ Project for IT
organizations in higher education.

First, as higher education comes under more scrutiny, it is vital that the effectiveness of
our efforts be demonstrated through an evidence-based approach that is centered on
outcomes. TechQual+ responds to this challenge by providing IT leaders with a set of
easy-to-use tools that assess the strength of technology services from the
perspective of those outside the IT organization who depend on IT services. The
TechQual+ model provides both quantitative data indicating the strength of IT services
and a way to infer the priorities of the end-user community. Setting priorities is crucial to
an IT organization's ability to deliver quality services.
Second, because of its filtering and suggestion-gathering capabilities, TechQual+ also
provides data that can be used to create an institution-specific agenda for action that is
closely aligned with the expectations of the end-user community. When used as a basis
for a regular cycle of assessment, planning, prioritization, and accountability, TechQual+
will help build a sustainable form of credibility that allows an IT organization to increase
the effectiveness of its work.
Finally, the TechQual+ approach conceptualizes the strength of technology services
from the perspective of those outside the IT organization. This makes it unique
among efforts to standardize performance indicators for IT organizations. Because it is
grounded in a social scientific approach, the core commitments forming the TechQual+
instrument speak to the expectations of faculty, students, and staff, regardless of their
institution. It is evident that those inside and outside IT organizations see the world
differently. TechQual+ helps to close that perception gap, which is critical if we are to
increase the effective delivery and use of technology in higher education.
Key Questions to Ask
- What are the differences between inputs-based and outcomes-based approaches to assessing the quality of technology services?
- How can IT leaders develop end-user-focused performance indicators that demonstrate the strength and credibility of IT services at their institutions?
- What are the core commitments expected of IT organizations by faculty, students, and staff within higher education? How can IT leaders ascertain their priorities?
- How can IT organizations understand the differing perceptions of those who deliver technology services and those who depend on technology services?
- How can IT organizations use outcomes-based assessment, planning, and accountability frameworks to respond to increasing demands for accountability by both administrators and accreditation bodies?
Where to Learn More
- Cook, Colleen, Fred M. Heath, and Bruce Thompson. "Zones of Tolerance in Perceptions of Library Service Quality: A LibQual+ Study." Libraries and the Academy 3, no. 1 (2003): 113–121.
- Lincoln, Yvonna S., and Egon G. Guba. Naturalistic Inquiry. Newbury Park, CA: Sage Publications, 1985.
- Parasuraman, A., Valarie A. Zeithaml, and Leonard L. Berry. "A Conceptual Model of Service Quality and Its Implications for Future Research." Journal of Marketing 49 (Fall 1985): 41–50.

Endnotes
1. David H. Farrar, "Redefining IT Leadership: A Provost's Perspective," EDUCAUSE Review 45, no. 2 (March/April 2010): 62–63, http://net.educause.edu/ir/library/pdf/ERM10210.pdf.
2. Both the EDUCAUSE IT Metrics Constituent Group and the Consortium for the Establishment of Information Technology Performance Standards (CEITPS) are developing standards for outcomes-based performance indicators for IT organizations.
3. A. Parasuraman, Valarie A. Zeithaml, and Leonard L. Berry, "A Conceptual Model of Service Quality and Its Implications for Future Research," Journal of Marketing 49 (Fall 1985): 41–50, http://areas.kenan-flagler.unc.edu/Marketing/FacultyStaff/zeithaml/Selected%20Publications/A%20Conceptual%20Model%20of%20Service%20Quality%20and%20Its%20Implications%20for%20Future%20Research.pdf.
4. Colleen Cook, Fred M. Heath, and Bruce Thompson, "Zones of Tolerance in Perceptions of Library Service Quality: A LibQual+ Study," Libraries and the Academy 3, no. 1 (2003): 113–121, http://muse.jhu.edu/journals/portal_libraries_and_the_academy/v003/3.1cook.pdf.
5. The data in this table comes from the Pepperdine University Spring 2010 TechQual+ assessment. Items #3 and #5, from the TechQual+ core instrument, are used for illustration purposes.
6. Yvonna S. Lincoln and Egon G. Guba, Naturalistic Inquiry (Newbury Park, CA: Sage Publications, 1985).
About the Author(s)
Timothy M. Chester (timothy.chester@pepperdine.edu) is Vice Provost for Academic
Administration and Chief Information Officer at Pepperdine University.

Citation for This Work
Chester, Timothy M. "Assessing What Faculty, Students, and Staff Expect from Information Technology
Organizations in Higher Education" (Research Bulletin 18, 2010). Boulder, CO: EDUCAUSE Center for Applied
Research, 2010, available from http://www.educause.edu/ecar.
Copyright
Copyright © 2010 EDUCAUSE and Timothy M. Chester. All rights reserved. This ECAR research bulletin is
proprietary and intended for use only by subscribers. Reproduction or distribution of ECAR research bulletins
to those not formally affiliated with the subscribing organization is strictly prohibited unless prior permission is
granted by EDUCAUSE and the author.
