This study unit addresses the nature and modes of computer processing, the elements of the IT
function, and basic control concepts. It continues with a treatment of computer hardware and operating
systems. The concluding subunits concern various aspects of system security, including planning for
business continuity in the event of an interruption of computer processing.
Part II of the exam should have 5 to 7 IT questions. On Part II, The IIA tests IT at the awareness
level. However, the same IT topics are tested at the proficiency level on Part III. Thus, a candidate
sitting only for Part II need only be comfortable with the terms and concepts in Study Units 5 and 6 of
Part II. But a candidate who intends to take Parts II and III should understand that Study Units 5 and 6
in Part II and Study Units 8, 9, and 10 in Part III significantly overlap. In this scenario, studying Parts II
and III simultaneously is efficient.
Core Concepts
Characteristics such as the uniform processing of transactions and the loss of segregation of
functions distinguish computer-based from manual systems.
All information processing must be properly controlled. Controls over information systems can be
classified as general controls and application controls.
The earliest forms of computer processing were highly centralized. Increasingly, processing is
becoming decentralized.
With distributed processing, an organization determines which parts of an application are better
performed locally and which parts are better performed at some other, possibly centralized, site.
A computer's hardware includes a central processing unit (CPU), data entry devices such as
keyboards and scanners, output devices such as monitors and printers, and storage devices such
as hard drives.
When an organization develops a new system, the life-cycle approach allows for enhanced
management and control of the process.
System security encompasses data integrity, access controls, application controls, systems
development standards, change controls, controls over end-user computing, and Internet
security.
A computer center should have a reconstruction and recovery plan that will allow it to regenerate
important programs and database files.
Copyright 2008 Gleim Publications, Inc. and/or Gleim Internet, Inc. All rights reserved. Duplication prohibited. www.gleim.com
140 SU 5: Information Technology I
5.1 INTRODUCTION TO IT
1. Characteristics. The use of computers in business information systems has fundamental
effects on the nature of business transacted, the procedures followed, the risks incurred,
and the methods of mitigating those risks. These effects flow from the characteristics that
distinguish computer-based from manual processing.
a. Transaction trails. A complete trail useful for audit and other purposes might exist for
only a short time or only in computer-readable form. The nature of the trail is often
dependent on the transaction processing mode, for example, whether transactions
are batched prior to processing or whether they are processed immediately as they
happen.
b. Uniform processing of transactions. Computer processing uniformly subjects like
transactions to the same processing instructions and thus virtually eliminates clerical
error, but programming errors (or other similar systematic errors in either the
hardware or software) will result in all like transactions being processed incorrectly
when they are processed under the same conditions.
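The point in item b. can be sketched in a few lines of Python. The pricing rule and the deliberate bug below are purely illustrative, not taken from any real system: because one programmed rule processes every like transaction, a single coding error misstates all qualifying transactions in exactly the same way, unlike random clerical errors in a manual system.

```python
def extend_invoice(quantity, unit_price):
    """Compute an invoice line total with a volume discount.

    BUG (intentional, for illustration): the discount should apply at
    quantity >= 100, but the programmer wrote > 100, so every order of
    exactly 100 units is overbilled in exactly the same way.
    """
    total = quantity * unit_price
    if quantity > 100:          # should be: quantity >= 100
        total *= 0.90           # 10% volume discount
    return round(total, 2)

orders = [(100, 5.00), (100, 5.00), (150, 5.00)]
totals = [extend_invoice(q, p) for q, p in orders]
# Both 100-unit orders are overbilled identically: 500.0 instead of 450.0.
print(totals)  # [500.0, 500.0, 675.0]
```

A manual clerk might misprice one of the three orders; the program misprices every 100-unit order, every time, until the code itself is corrected.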
c. Segregation of functions. Many controls once performed by separate individuals
may be concentrated in computer systems. Hence, an individual who has access to
the computer may perform incompatible functions. As a result, other controls may be
necessary to achieve the control objectives ordinarily accomplished by segregation of
functions.
d. Potential for errors and fraud. The potential for individuals, including those performing control procedures, to gain unauthorized access to data, to alter data without
visible evidence, or to gain access (direct or indirect) to assets may be greater in
computer systems. Decreased human involvement in handling transactions can
reduce the potential for observing errors and fraud. Errors or fraud in the design or
changing of application programs can remain undetected for a long time.
e. Potential for increased management supervision. Computer systems offer
management many analytical tools for review and supervision of operations. These
additional controls may enhance internal control. For example, traditional
comparisons of actual and budgeted operating ratios and reconciliations of accounts
are often available for review on a more timely basis. Furthermore, some
programmed applications provide statistics regarding computer operations that may
be used to monitor actual processing.
f. Initiation or subsequent execution of transactions by computer. Certain
transactions may be automatically initiated or certain procedures required to execute
a transaction may be automatically performed by a computer system. The
authorization of these transactions or procedures may not be documented in the
same way as those in a manual system, and management's authorization may be
implicit in its acceptance of the design of the system.
g. Dependence of controls in other areas on controls over computer processing.
Computer processing may produce reports and other output that are used in
performing manual control procedures. The effectiveness of these controls can be
dependent on the effectiveness of controls over the completeness and accuracy of
computer processing. For example, the effectiveness of a manual review of a
computer-produced exception listing is dependent on the controls over the production
of the listing.
e. Operators are responsible for the day-to-day functioning of the computer center,
whether the organization runs a mainframe, servers, or anything in between.
1) Operators load data, mount storage devices, and operate the equipment.
Operators should not be assigned programming duties or responsibility for
systems design. Accordingly, they also should have no opportunity to make
changes in programs and systems as they operate the equipment.
a) Ideally, computer operators should not have programming knowledge or access to documentation not strictly necessary for their work.
f. Help desks are usually a responsibility of computer operations because of the
operational nature of their functions. Help desk personnel log reported problems,
resolve minor problems, and forward more difficult problems to the appropriate
information systems resources, such as a technical support unit or vendor
assistance.
g. Network technicians maintain the bridges, hubs, routers, switches, cabling, and other devices that interconnect the organization's computers. They are also responsible for maintaining the organization's connection to other networks such as the Internet.
h. End users need access only to application data and functions.
5. Data center operations may occur in a centralized data processing facility that is responsible for the storage, management, and dissemination of data and information. A data
center may be either an internal department or a separate organization that specializes in
providing data services to others. A data center may operate in several possible modes.
a. Batch mode. Batch processing is the accumulation and grouping of transactions for
processing on a delayed basis. The batch approach is suitable for applications that
can be processed at intervals and involve large volumes of similar items, e.g., payroll,
sales, inventory, and billing.
1) Service bureaus perform batch processing for subscribers. This off-site mode
of processing requires a user to prepare input and then transmit it to the
bureau, with an attendant increase in security risk. Employing a service
bureau is one means of outsourcing.
2) Hiring a facilities management organization is another. A facilities
management organization operates and manages an internal data processing
activity. It may manage hardware, software, system development, system
maintenance, and staffing. The facilities manager may own all of the hardware
and software and employ all the personnel.
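Batch mode as described in item a. can be sketched as follows. The record layout and function names are illustrative only: transactions accumulate during the day and are posted to master balances in one delayed run, as with payroll or billing.

```python
from collections import defaultdict

batch = []  # transactions accumulate here until the batch run

def capture(account, amount):
    """Record a transaction for later processing (no posting yet)."""
    batch.append({"account": account, "amount": amount})

def run_batch(master):
    """Post the whole batch at once, grouped by account, then clear it."""
    grouped = defaultdict(float)
    for txn in batch:
        grouped[txn["account"]] += txn["amount"]
    for account, total in grouped.items():
        master[account] = master.get(account, 0.0) + total
    processed = len(batch)
    batch.clear()
    return processed

master_file = {"A-100": 1000.0}
capture("A-100", 250.0)
capture("B-200", 75.0)
capture("A-100", -50.0)
run_batch(master_file)   # all three transactions posted in one delayed run
print(master_file)       # {'A-100': 1200.0, 'B-200': 75.0}
```

Nothing touches the master file until `run_batch` executes, which is why batch mode suits high-volume, interval-based applications but not inquiries requiring up-to-the-minute balances.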
b. Online mode. An online processing system is in direct communication with the
computer, giving it the capability to handle transactions as they are entered. An
online system permits both immediate posting (updating) and inquiry of master files
as transactions occur.
1) Real-time processing involves processing an input record and receiving the
output soon enough to affect a current decision-making process. In a real-time
system, the user interacts with the system to control an ongoing activity.
2) The term online, often used with real time, indicates that the decision maker is
in direct communication with the computer. Online, real-time systems usually
permit access to the main computer from multiple remote terminals.
c. Many applications use combined batch and online modes.
1) In such systems, users continuously enter transactions in online mode
throughout the workday, collecting them in batches. The computer can then
take advantage of the efficiencies of batch mode overnight when there are
fewer users logged on to the system.
d. A timesharing system allows many users to have access through remote terminals to
a CPU (see item 4. in Subunit 5.2) owned by a vendor of computing services. The
CPU services them alternately. Timesharing differs from multiprogramming because the CPU devotes a fixed time slice to each user's program.
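The alternating service in item d. amounts to round-robin scheduling with a fixed quantum. The sketch below is a simplified simulation (names and work units are invented, not real seconds): each user's program gets the same slice per turn until its work is done.

```python
from collections import deque

def timeshare(programs, quantum=2):
    """Round-robin: each program gets `quantum` units per turn until done.

    `programs` maps a user name to remaining work units. Returns the
    order in which slices were granted, showing alternating service.
    """
    queue = deque(programs.items())
    schedule = []
    while queue:
        user, remaining = queue.popleft()
        schedule.append(user)                # this user's fixed slice
        remaining -= quantum
        if remaining > 0:
            queue.append((user, remaining))  # unfinished: back of the line
    return schedule

print(timeshare({"ann": 4, "bob": 2, "cho": 6}))
# ['ann', 'bob', 'cho', 'ann', 'cho', 'cho']
```

Because the quantum is fixed, no single user can monopolize the CPU, which is the essence of timesharing from the vendor's perspective.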
e. Totally centralized systems. All data processing and systems development are
done at one data processing center.
1) All processing is done by one large computer.
2) Remote users are serviced via data communications channels between
themselves and the center.
3) Terminals at the remote sites are usually dumb terminals (providing
communications only, with no stand-alone processing capabilities).
4) Requests for development of new systems are submitted for the consideration of
the centralized systems development group.
5) The centralized staff is large.
6) Advantages of total centralization arise primarily from (a) possible economies of
scale and (b) the strengthening of control through segregation of duties.
f. Totally decentralized systems. Data processing functions are independently developed at each remote site. Each site has its own smaller computer and its own staff.
1) In a completely decentralized system, each computer stands alone, independent
of any centralized or other computer.
2) The primary advantages of a decentralized system are that
a) The individual unit's personnel identify more closely with the system.
b) Development projects are more easily accepted and meet local needs better.
g. Downsizing consists of moving organization-wide applications to mid-range or
networked computers. The purpose is to reduce costs by using less expensive
systems that are more versatile than larger, more expensive systems.
1) Factors to consider when downsizing include the following:
a) Downsized applications are less reliable than their mainframe predecessors because they usually are newer and thus have not been tested extensively.
b) Downsized technology is less reliable and lacks the monitoring and control
features that permit recovery from minor processing interruptions.
c) Security is better on larger mainframe systems.
d) Downsized applications increase complexity because data becomes
fragmented across multiple systems.
e) Applications with very large databases still require very large computers
(i.e., mainframes) in order to make data accessible.
f) Mainframes allow for easier and cheaper data entry.
h. Distributed Data Processing
1) In a distributed data processing system, the organizations processing needs are
examined in their totality.
2) Information is analyzed to determine which parts of the application are better
performed locally and which parts are better performed at some other, possibly
centralized, site.
a) For example, an organization may prefer to use workstations rather than a
mainframe for ad hoc queries to avoid the expense and the possible
degradation of response time on the host computer.
6. Computer auditing. One Specific Attribute Standard and one Practice Advisory address
this topic.
1220 Due Professional Care: Internal auditors should apply the care and skill expected of a reasonably prudent and competent internal auditor. Due professional care does not imply infallibility.
a. PRACTICE ADVISORY 1220-2: COMPUTER ASSISTED AUDIT TECHNIQUES
(CAATs)
NOTE: The content outline uses the term information technology (IT). However,
certain Practice Advisories use the synonym information systems (IS).
1. NEED FOR GUIDANCE
Computer Assisted Audit Techniques (CAATs) are important tools for the
auditor in performing audits. CAATs include many types of tools and
techniques, such as generalized audit software, utility software, test data,
application software tracing and mapping, and audit expert systems. CAATs
may be used in performing various audit procedures, including:
Tests of details of transactions and balances
Analytical review procedures
Compliance tests of IS general controls
Compliance tests of IS application controls
Penetration testing (efforts to evade security measures to probe for
weaknesses)
CAATs may produce a large proportion of the audit evidence developed on
audits and, as a result, the auditor should carefully plan for and exhibit due
professional care in the use of CAATs.
2. PLANNING
Decision Factors for Using CAATs
When planning the audit, the auditor should consider an appropriate
combination of manual techniques and CAATs. In determining whether to use
CAATs, the factors to be considered include:
Computer knowledge, expertise, and experience of the auditor
Availability of suitable CAATs and IS facilities
Efficiency and effectiveness of using CAATs over manual techniques
Time constraints
Integrity of the information system and IT environment
Level of audit risk
CAATs Planning Steps
The major steps to be undertaken by the auditor in preparing for the application
of the selected CAATs are
Set the audit objectives of the CAATs
Determine the accessibility and availability of the organization's IS facilities, programs/systems, and data
Define the procedures to be undertaken (e.g., statistical sampling,
recalculation, confirmation, etc.)
Define output requirements
Determine resource requirements, e.g., personnel, CAATs, processing environment (organization's IS facilities or audit IS facilities)
Obtain access to the organization's IS facilities, programs/systems, and data, including file definitions
Execution
Documentation should include:
CAATs preparation and testing procedures and controls
Details of the tests performed by the CAATs
Details of inputs (e.g., data used, file layouts), processing (e.g., CAATs
high-level flowcharts, logic) and outputs (e.g., log files, reports)
Listing of relevant parameters or source code
Audit Evidence
Documentation should include:
Output produced
Description of the audit analysis work performed on the output
Audit findings
Audit conclusions
Audit recommendations
5. REPORTING
Description of CAATs
The objectives, scope, and methodology section of the report should contain a
clear description of the CAATs used. This description should not be overly
detailed, but it should provide a good overview for the reader. The description
of the CAATs used should also be included in the body of the report, where the
specific finding relating to the use of the CAATs is discussed. If the
description of the CAATs used is applicable to several findings, or is too
detailed, it should be discussed briefly in the objectives, scope, and
methodology section of the report and the reader referred to an appendix with a
more detailed description.
Appendix - Glossary
Application Software Tracing and Mapping: Specialized tools that can be
used to analyze the flow of data through the processing logic of the application
software and document the logic, paths, control conditions, and processing
sequences. Both the command language or job control statements and
programming language can be analyzed. This technique includes program/
system mapping, tracing, snapshots, parallel simulations, and code
comparisons.
Audit Expert Systems: Expert or decision support systems that can be used
to assist auditors in the decision-making process by automating the knowledge
of experts in the field. This technique includes automated risk analysis, system
software, and control objectives software packages.
Computer Assisted Audit Techniques (CAATs): Any automated audit
techniques, such as generalized audit software, utility software, test data,
application software tracing and mapping, and audit expert systems.
Generalized Audit Software: A computer program or series of programs
designed to perform certain automated functions. These functions include
reading computer files, selecting data, manipulating data, sorting data,
summarizing data, performing calculations, selecting samples, and printing
reports or letters in a format specified by the auditor. This technique includes
software acquired or written for audit purposes and software embedded in
production systems.
Test Data: Simulated transactions that can be used to test processing logic,
computations, and controls actually programmed in computer applications.
Individual programs or an entire system can be tested. This technique includes
Integrated Test Facilities (ITFs) and Base Case System Evaluations (BCSEs).
Utility Software: Computer programs provided by a computer hardware
manufacturer or software vendor and used in running the system. This
technique can be used to examine processing activity, test programs, review
system activities and operational procedures, evaluate data file activity, and
analyze job accounting data.
PA Summary
CAATs may be used to obtain audit evidence through tests of details and controls, analytical review, and penetration testing (efforts to evade security measures).
The auditor should plan the use of CAATs and apply them with due professional
care.
When planning the audit, the auditor should consider an appropriate combination of
manual techniques and CAATs. When deciding whether to use CAATs, the
auditor considers such factors as (1) their availability, (2) the relative advantages
of manual procedures, (3) his/her own expertise, (4) availability of IS facilities,
(5) time limits, (6) the degree of audit risk, and (7) integrity of the information
system and IT environment.
Planning involves (1) setting audit objectives, (2) resolving accessibility and
availability issues, (3) defining procedures, (4) determining output and resource
requirements, and (5) documenting the CAATs to be used.
The auditor should arrange for retention of data covering the audit time frame and
access to (1) facilities, (2) programs, (3) systems, and (4) data in advance of the
needed time period to minimize the effect on the production environment. The
auditor also assesses the effects of changes in production programs and systems.
The auditor should obtain reasonable assurance of the (1) integrity, (2) reliability,
(3) usefulness, and (4) security of CAATs before relying on them. The auditor
should document procedures to provide this assurance, for example, a review of
program maintenance and change controls over embedded audit software to
determine that only authorized changes were made. When CAATs are not
controlled by the auditor, appropriate control should be in effect. When CAATs
are changed, the auditor should obtain assurance through appropriate planning,
design, testing, processing, and review of documentation.
The auditor should verify the integrity of the IS and IT environment from which
sensitive information is extracted using CAATs. Furthermore, the auditor should
safeguard the program/system information and production data obtained with
appropriate confidentiality and security. For this purpose, the auditor considers
the requirements of the auditee and relevant legislation.
The use of CAATs should be controlled by the auditor. Thus, the auditor
(1) reviews output, (2) reconciles control totals, (3) reviews characteristics of the
CAATs, and (4) reviews the relevant general IS controls.
An issue in the use of generalized audit software is data integrity. For embedded
audit software, another issue is system design.
Use of utility software raises issues regarding unplanned intervention during
processing, whether the software is from the appropriate system library, and the
integrity of systems and files.
The test data approach does not evaluate actual production data. It is also
complex and time consuming and may contaminate the live system.
An auditor using application software tracing and mapping verifies that the source
code evaluated generated the production program. But tracing and mapping does
not evaluate actual production data.
An auditor using an expert system confirms that decision paths are appropriate.
Working papers should sufficiently document the application of CAATs. Planning
documentation extends to (1) CAATs objectives, (2) CAATs used, (3) controls,
(4) staffing, and (5) timing. Documentation of execution includes (1) preparation
and testing procedures and controls; (2) details of the tests; (3) details on input,
processing, and output; and (4) listing of relevant parameters or source code.
Documentation of audit evidence includes (1) output; (2) analysis of the output;
and (3) audit findings, conclusions, and recommendations.
The objectives, scope, and methodology section of the report should clearly describe the CAATs used. If the description is not too detailed and does not apply to several findings, it is included in the body of the report with the discussion of the specific finding related to the use of CAATs.
2) The leading CAAT software packages are currently Audit Command Language
(ACL) and Interactive Data Extraction and Analysis (IDEA) software. These
software packages, designed specifically for use in auditing, perform 11 major
functions.
a) Aging. An auditor can test the aging of accounts receivable.
b) Duplicate identification. Duplicate data can be organized by data field and
subsequently identified.
c) Exportation. Data can be transferred to other software.
d) Extraction. Data can be extracted for exception analysis.
e) Gap identification. Gaps in information can be automatically noted.
f) Joining and merging. Two separate data files may be joined or merged to
combine and match information.
g) Sampling. Samples of the data can be prepared and analyzed.
h) Sorting. Information can be sorted by any data field.
i) Stratification. Large amounts of data can be organized by specific factors,
thereby facilitating analysis.
j) Summarization. Data can be organized to identify patterns.
k) Total fields. Totals for numeric fields can be quickly and accurately
calculated.
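Several of the functions listed above can be sketched in plain Python. Packages such as ACL and IDEA provide these as built-in commands; the code below is not their API, only an illustration of what gap identification, duplicate identification, stratification, and field totaling do, using invented invoice data.

```python
from collections import Counter

invoices = [
    {"number": 1001, "amount": 120.0},
    {"number": 1002, "amount": 940.0},
    {"number": 1002, "amount": 940.0},   # duplicate
    {"number": 1005, "amount": 15.0},    # 1003-1004 missing
]

# Gap identification: invoice numbers skipped in the sequence.
numbers = sorted({inv["number"] for inv in invoices})
gaps = [n for n in range(numbers[0], numbers[-1] + 1) if n not in numbers]

# Duplicate identification: numbers appearing more than once.
counts = Counter(inv["number"] for inv in invoices)
duplicates = [n for n, c in counts.items() if c > 1]

# Stratification: group amounts into bands to facilitate analysis.
strata = Counter("under 100" if inv["amount"] < 100 else "100 and over"
                 for inv in invoices)

# Total fields: a quick, accurate control total of a numeric field.
control_total = sum(inv["amount"] for inv in invoices)

print(gaps)           # [1003, 1004]
print(duplicates)     # [1002]
print(control_total)  # 2015.0
```

The auditor compares such computed totals and exception lists against the client's own reports; discrepancies become audit findings.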
c. Using the integrated test facility (ITF) method, the auditor creates a dummy record
within the client's actual system (e.g., a fictitious employee in the personnel and
payroll file). Dummy and actual transactions are processed (e.g., time records for the
dummy employee and for actual employees). The auditor can test the edit checks by
altering the dummy transactions and evaluating error listings.
1) The primary advantage of this method is that it tests the actual program in
operation.
2) The primary disadvantages are that the method requires considerable
coordination, and the dummy transactions must be purged prior to internal and
external reporting. Thus, the method is not used extensively by external
auditors.
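The ITF mechanics described above can be sketched as follows. All names, the master-file layout, and the hours edit check are hypothetical: the auditor plants a dummy record in the live master file, submits a deliberately invalid dummy transaction through the same program that processes real transactions, and inspects the error listing.

```python
MAX_WEEKLY_HOURS = 80  # edit check enforced by the (illustrative) payroll program

payroll_master = {
    "E-001": {"name": "Real Employee", "rate": 20.0},
    "E-999": {"name": "ITF DUMMY",     "rate": 20.0},  # auditor's dummy record
}

def run_payroll(time_records):
    """Process time records; reject any failing the hours edit check."""
    posted, errors = [], []
    for emp, hours in time_records:
        if hours > MAX_WEEKLY_HOURS:
            errors.append((emp, hours))          # goes to the error listing
        else:
            posted.append((emp, hours * payroll_master[emp]["rate"]))
    return posted, errors

# The auditor alters the dummy transaction to violate the edit check.
posted, errors = run_payroll([("E-001", 40), ("E-999", 99)])
print(errors)   # [('E-999', 99)] -- the live program caught the dummy
# Before any internal or external reporting, dummy activity must be
# purged so it does not contaminate the real payroll totals.
```

This shows both the advantage (the actual production program is exercised) and the disadvantage (the dummy data must later be purged) noted in items 1) and 2).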
4. All computers have at least one central processing unit (CPU) that works in conjunction
with peripheral devices.
a. The CPU is the main element of a computer system. The major function of the CPU is
to fetch stored instructions and data, decode the instructions, and carry out the
instructions in the arithmetic-logic unit (ALU). The principal components of the
CPU are the ALU (one or more) and the control unit.
1) The control unit directs and coordinates the entire computer system.
2) Primary storage is closely connected to the CPU in the central processor. It
consists of electronic components that store letters, numbers, and special
characters used in processing. The purposes of primary storage are to hold the
operating system, part or all of a program being executed, and data used by the
program.
a) Internal primary storage includes register memory, used for ultra high
speed and very brief storage of small amounts of data and instructions
immediately before use; cache memory, used for high-speed storage of
frequently used data and programs; and random-access memory
(RAM), used to store large quantities of data. Data may be read from or
written on RAM. A power interruption causes erasure of RAM.
b) Read-only memory (ROM) is permanent storage used to hold the basic
low-level programs and data. ROM can be read from but not written to;
ROM chips are obtained from the manufacturer with programs already
stored in them. A power interruption does not erase data written on ROM
or on magnetic secondary storage devices. However, a power
interruption may corrupt the data.
i) Programmable ROM (PROM) can be programmed once.
ii) Erasable programmable ROM (EPROM) can be erased using a
special process and then reprogrammed.
b. Computer systems are typically classified by computing power, dictated by CPU
speed, memory capacity, and hardware architecture (and thus cost).
1) Personal computers (microcomputers) range in price and performance from
low-end personal computers to powerful desktop models. Workstations are
also desktop machines, but they are often classified separately from personal
computers because of their enhanced mathematical and graphical abilities. In
addition, workstations have the capacity to execute more complicated tasks
simultaneously than a personal computer. Thus, they tend to be used by
scientists and engineers, for example, for simulations and computer-aided
design, and in the financial services industry.
a) Because of the large number of personal computers in use and aggressive
pricing strategies, current personal computer prices have become very
attractive. Moreover, personal computers have crossed into the minicomputer arena, providing power and multi-user capabilities that were unavailable until recent technological improvements.
b) By adding a modem and communications software, the personal computer
can also serve as an interface with other computers. Accordingly, many
of the same control and security concerns that apply to larger computers
also apply to a personal computer environment.
c) Notebook, laptop, and palmtop computers are the smallest forms of
personal computers.
b. Application software includes the programs that perform the tasks required by end
users, e.g., standard accounting operations.
1) Applications may be developed internally or purchased from vendors.
a) Vendor-produced software is in either source code (not machine
language) or object code (machine language), but vendors prefer to sell
the latter.
b) Application software production is a vital aspect of system development,
and control over its maintenance (changes to meet new user needs) after
implementation is likewise crucial.
2) A spreadsheet is one type of application software that is especially helpful to
accountants, auditors, and business people. It displays an on-screen financial
model in which the data are presented in a grid of columns and rows. An
example is a financial statement spreadsheet.
a) An electronic spreadsheet permits the creation of a template containing the
model of the mathematical relationships among the variables as defined
by the user. It specifies the inputs, the computational algorithms, and the
format of the output. The effects of changes in assumptions can be seen
instantly because a change in one value results in an immediate
recomputation of related values.
b) Thus, in designing a spreadsheet model, the first step is to define the
problem. This step is followed by an identification of relevant inputs and
outputs, and the development of assumptions and decision criteria.
Finally, formulas must be documented.
c) Excel and Lotus 1-2-3 are common spreadsheet programs.
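The template idea in a) and b) above can be sketched in a few lines of Python. This is a minimal illustration, not a real spreadsheet engine; the income-statement model and its numbers are purely hypothetical:

```python
# Minimal sketch of the spreadsheet template idea: inputs (assumptions) are
# kept separate from the formulas, so changing one input immediately
# recomputes all related values. The model below is illustrative only.

def recompute(inputs):
    """Recompute a tiny income-statement model from its input assumptions."""
    sales = inputs["units_sold"] * inputs["unit_price"]
    cogs = sales * inputs["cogs_pct"]
    gross_profit = sales - cogs
    return {"sales": sales, "cogs": cogs, "gross_profit": gross_profit}

assumptions = {"units_sold": 1000, "unit_price": 25.0, "cogs_pct": 0.60}
base = recompute(assumptions)

# Change one assumption; the related values are recomputed at once.
assumptions["unit_price"] = 30.0
revised = recompute(assumptions)
```

This mirrors the what-if capability described above: the formulas (computational algorithms) stay fixed while the assumptions vary.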
3) Software is copyrightable, but a substantial amount is in the public domain.
Networks of computer users may share such software.
a) Shareware is software made available for a fee (usually with an initial free
trial period) by the owners to users through a distributor (or websites or
electronic bulletin board services).
b) Software piracy is a problem for vendors. The best way to detect an
illegal copy of application software is to compare the serial number on the
screen with the vendor's serial number.
i) Use of unlicensed software increases the risk of introducing
computer viruses into the organization. Such software is less likely
to have been carefully tested.
ii) To avoid legal liability, controls also should be implemented to
prevent use of unlicensed software that is not in the public domain.
A software licensing agreement permits a user to employ either a
specified or an unlimited number of copies of a software product at
given locations, at particular machines, or throughout the
organization. The agreement may restrict reproduction or resale,
and it may provide subsequent customer support and product
improvements.
iii) Software piracy can expose an organization's people to both civil (up
to $150,000 for each program copied) and criminal (up to
$250,000, five years in jail, or both) penalties. The Business
Software Alliance (BSA) is a worldwide trade group that
coordinates software vendors' efforts to prosecute the illegal
duplication of software.
2) File attributes can be assigned to control access to and the use of files.
Examples are read/write, read only, archive, and hidden.
3) A device authorization table restricts file access to those physical devices that
should logically need access. For example, because it is illogical for anyone to
access the accounts receivable file from a manufacturing terminal, the device
authorization table will deny access even when a valid password is used.
a) Such tests are often called compatibility tests because they ascertain
whether a code number is compatible with the use to be made of the
information. Thus, a user may be authorized to enter only certain kinds of
data, have access only to certain information, have access but not
updating authority, or use the system only at certain times. The lists or
tables of authorized users or devices are sometimes called access
control matrices.
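The device authorization table and compatibility tests described above can be sketched as a small access control matrix. Terminal names, file names, and access modes here are hypothetical:

```python
# Hypothetical sketch of a device authorization table: even a valid
# password is rejected when the request comes from a physical device that
# has no logical need for the file.

DEVICE_AUTHORIZATION = {
    "AR_TERMINAL_01": {"accounts_receivable": "read/write"},
    "MFG_TERMINAL_07": {"production_schedule": "read"},
}

def access_allowed(device, file_name, mode):
    """Compatibility test: is this device/file/mode combination authorized?"""
    granted = DEVICE_AUTHORIZATION.get(device, {}).get(file_name)
    if granted is None:
        return False                    # device has no authorization at all
    return mode == "read" or granted == "read/write"

# A manufacturing terminal is denied the receivables file regardless of
# whether the operator supplied a valid password.
```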
4) A system access log records all attempts to use the system. The date and
time, codes used, mode of access, data involved, and operator interventions
are recorded.
5) Encryption involves using a fixed algorithm to manipulate plaintext. The
information is sent in its manipulated form and the receiver translates the
information back into plaintext. Although data may be accessed by tapping into
the transmission line, the encryption key is necessary to understand the data
being sent.
a) For example, a web server (a computer that delivers web pages to the
Internet) should be secure. It should support a security protocol that
encrypts messages to protect transactions from third-party detection or
tampering.
b) See item 6. in this subunit below.
6) A callback feature requires the remote user to call the computer, give
identification, hang up, and wait for the computer to call the user's authorized
number. This control ensures acceptance of data transmissions only from
authorized modems. However, call forwarding may thwart this control.
7) Controlled disposal of documents. One method of enforcing access
restrictions is to destroy data when they are no longer in use. Thus, paper
documents may be shredded and magnetic media may be erased.
8) Biometric technologies. These are automated methods of establishing an
individual's identity using physiological or behavioral traits. These
characteristics include fingerprints, retina patterns, hand geometry, signature
dynamics, speech, and keystroke dynamics.
9) Automatic log-off (disconnection) of inactive data terminals may prevent the
viewing of sensitive data on an unattended data terminal.
10) Utility software restrictions. Utility software may have privileged access and
therefore be able to bypass normal security measures. Performance monitors,
tape and disk management systems, job schedulers, online editors, and report
management systems are examples of utility software. Management can limit
the use of privileged software to security personnel and establish audit trails to
document its use. The purpose is to gain assurance that its uses are
necessary and authorized.
11) Security personnel. An organization may need to hire security specialists. For
example, developing an information security policy for the organization,
commenting on security controls in new applications, and monitoring and
investigating unsuccessful access attempts are appropriate duties of the
information security officer.
3. Application Controls
a. Crucial to the development of programs for particular applications is the inclusion of
application controls (input, processing, and output controls). Input controls provide
reasonable assurance that data received for processing have been properly
authorized, converted into machine-sensible form, and identified and that data
(including data transmitted over communication lines) have not been lost,
suppressed, added, duplicated, or otherwise improperly changed. Input controls also
relate to rejection, correction, and resubmission of data that were initially incorrect.
1) Edit checks, such as those discussed below and on the next page, are
programmed into the software.
a) Error listing. Editing (validation) of data should produce a cumulative
automated error listing that includes not only errors found in the current
processing run but also uncorrected errors from earlier runs. Each error
should be identified and described, and the date and time of detection
should be given. Sometimes, the erroneous transactions may need to be
recorded in a suspense file. This process is the basis for developing
appropriate reports.
b) Field checks are tests of the characters in a field to verify that they are of
an appropriate type for that field. For example, the field for a Social
Security number should not contain alphabetic characters.
c) Financial totals summarize dollar amounts in an information field in a
group of records.
d) A hash total is a control total without a defined meaning, such as the total
of employee numbers or invoice numbers, that is used to verify the
completeness of data. Thus, the hash total for the employee listing by the
personnel department could be compared with the total generated during
the payroll run.
e) Limit and range checks are based on known limits for given information.
For example, hours worked per week will not equal 200.
f) Preformatting. To avoid data entry errors in online systems, a screen
prompting approach may be used that is the equivalent of the preprinted
forms routinely employed as source documents. The dialogue approach,
for example, presents a series of questions to the operator. The
preformatted screen approach involves the display of a set of boxes for
entry of specified data items. The format may even be in the form of a
copy of a transaction document.
g) Reasonableness (relationship) tests check the logical correctness of
relationships among the values of data items on an input record and the
corresponding master file record. For example, it may be known that
employee John Smith works only in departments A, C, or D; thus, a
reasonableness test could be performed to determine that the payroll
record contains one of the likely department numbers. In some texts, the
term reasonableness test is defined to encompass limit checks.
h) Record count is a control total of the number of records processed during
the operation of a program.
i) Self-checking digits may be used to detect incorrect identification
numbers. The digit is generated by applying an algorithm to the ID
number. During the input process, the check digit is recomputed by
applying the same algorithm to the code actually entered.
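Several of the edit checks above can be sketched briefly. The Luhn algorithm stands in for whatever check-digit scheme a given system uses, and the payroll data are illustrative only:

```python
# Sketch of edit checks from the list above: a field check, a limit check,
# batch control totals, and a self-checking digit (Luhn algorithm).

def field_check(ssn: str) -> bool:
    """Field check: an SSN field should hold only digits (hyphens aside)."""
    return ssn.replace("-", "").isdigit()

def limit_check(hours_worked: float) -> bool:
    """Limit check: hours worked per week cannot plausibly reach 200."""
    return 0 <= hours_worked < 100

def luhn_valid(number: str) -> bool:
    """Self-checking digit: recompute the check digit over the entered code."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Batch control totals over a payroll run: the financial total carries
# meaning (dollars), the hash total of employee numbers does not, and the
# record count tracks how many records were processed.
payroll_batch = [
    {"employee_no": 1043, "gross_pay": 2500.00},
    {"employee_no": 2177, "gross_pay": 3100.00},
    {"employee_no": 3590, "gross_pay": 1875.00},
]
financial_total = sum(r["gross_pay"] for r in payroll_batch)
hash_total = sum(r["employee_no"] for r in payroll_batch)
record_count = len(payroll_batch)
```

Comparing these precomputed totals with totals generated during the processing run reveals lost, added, or altered records.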
c. A firewall separates an internal network from an external network (e.g., the Internet)
and prevents passage of specific types of traffic. It identifies names, Internet Protocol
(IP) addresses, applications, etc., and compares them with programmed access
rules.
1) A firewall may have any of the following features:
a) A packet filtering system examines each incoming network packet and
drops (does not pass on) unauthorized packets.
b) A proxy server maintains copies of web pages to be accessed by
specified users. Outsiders are directed there, and more important
information is not available from this access point.
c) An application gateway limits traffic to specific applications.
d) A circuit-level gateway connects an internal device, e.g., a network
printer, with an outside TCP/IP port. It can identify a valid TCP session.
e) Stateful inspection stores information about the state of a transmission
and uses it as background for evaluating messages from similar sources.
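The packet-filtering feature in a) above can be sketched as a rule match. Rule fields, addresses, and the default-deny policy here are illustrative assumptions, not a real firewall's configuration:

```python
# Minimal sketch of packet filtering: each incoming packet is compared
# with programmed access rules, and unauthorized packets are dropped.

import ipaddress

RULES = [
    {"src": "10.0.0.0/8", "port": 443, "action": "allow"},   # internal HTTPS
    {"src": "0.0.0.0/0",  "port": 23,  "action": "deny"},    # block telnet
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the action of the first matching rule; default-deny otherwise."""
    for rule in RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule["src"])
                and dst_port == rule["port"]):
            return rule["action"]
    return "deny"   # anything not explicitly allowed is dropped
```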
2) Firewall systems ordinarily produce reports on organization-wide Internet use,
unusual usage patterns, and system penetration attempts. These reports are
very helpful to the internal auditor as a method of continuous monitoring, or
logging, of the system.
a) Firewalls do not provide adequate protection against computer viruses.
Thus, an organization should include one or more of the antivirus
measures discussed in Study Unit 6 in its network security policy.
d. Data traveling across the network can be encoded so that it is indecipherable to
anyone except the intended recipient.
e. Other Controls
1) Authentication measures verify the identity of the user, thus ensuring that only
the intended and authorized users gain access to the system.
a) Most firewall systems provide authentication procedures.
b) Access controls are the most common authentication procedures.
2) Checksums help ensure the integrity of data by checking whether the file has
been changed. The system computes a value for a file and then checks
whether this value equals the last known value for this file. If the
numbers are the same, the file has likely remained unchanged.
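The checksum comparison can be sketched as follows. SHA-256 stands in here for whatever checksum function a given system uses; the file contents are illustrative:

```python
# Sketch of a checksum integrity test: compute a value over the data and
# compare it with the last known (baseline) value; equality suggests the
# file has not been changed.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

baseline = checksum(b"ledger records v1")     # stored when the file is trusted

# Later, recompute and compare with the stored baseline.
unchanged = checksum(b"ledger records v1") == baseline
tampered = checksum(b"ledger records v2") == baseline
```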
5. Data Storage
a. Storing all related data on one storage device creates security problems.
1) If hardware or software malfunctions occur, or unauthorized access is achieved,
the results could be disastrous.
2) Greater emphasis on security is required to provide backup and restrict access
to the database.
a) For example, the system may employ dual logging, that is, use of two
transaction logs written simultaneously on separate storage media. It
may also use a snapshot technique to capture data values before and
after transaction processing. The files that store these values can be
used to reconstruct the database in the event of data loss or corruption.
3) The responsibility for creating, maintaining, securing, and restricting access to
the database belongs to the Database Administrator (DBA).
4) A database management system (DBMS) includes security features. Thus, a
specified user's access may be limited to certain data fields or logical views
depending on the individual's assigned duties.
6. Encryption
a. Encryption technology converts data into a code. A program codes data prior to
transmission. Another program decodes it after transmission. Unauthorized users
may still be able to access the data, but, without the encryption key, they will be
unable to decode the information.
b. Encryption software uses a fixed algorithm to manipulate plaintext and an encryption
key to introduce variation. The information is sent in its manipulated form
(cyphertext), and the receiver translates the information back into plaintext.
Although data may be accessed by tapping into the transmission line, the encryption
key is necessary to understand the data being sent. The machine instructions
necessary to code and decode data can constitute a 20-to-30% increase in system
overhead.
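The mechanism in item b. can be sketched with a toy cipher. XOR with a repeating key is used here purely to illustrate "fixed algorithm plus key"; real systems use ciphers such as AES, and the message below is invented:

```python
# Toy sketch of symmetric encryption: a fixed algorithm (XOR each byte
# with a repeating key) plus an encryption key that introduces variation.
# Illustration only -- do not use XOR as real encryption.

from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """The same routine encrypts and decrypts: XOR is its own inverse."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"WIRE 5,000 TO ACCT 4471"
key = b"secret-key"

cyphertext = xor_cipher(plaintext, key)    # sent over the transmission line
recovered = xor_cipher(cyphertext, key)    # receiver applies the same key

# A wiretapper without the key sees only the manipulated form.
```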
c. Encryption technology may be either hardware- or software-based. Two major types
of encryption software exist.
1) Public-key, or asymmetric, encryption requires two keys: The public key for
coding messages is widely known, but the private key for decoding messages
is kept secret by the recipient. Accordingly, the parties who wish to transmit
coded messages must use algorithmically related pairs of public and private
keys. The sender searches a directory for the recipient's public key, uses it to
encode the message, and transmits the message to the recipient. The recipient
then uses the related private (secret) key to decode the message.
a) One advantage of public-key encryption is that the message is encoded
using one key and decoded using another. In contrast, private-key
encryption requires both parties to know and use the secret key.
b) A second advantage is that neither party knows the other's private key.
The related public-key and private-key pair is issued by a certificate
authority (a third-party fiduciary, e.g., VeriSign or Thawte). However, the
private key is issued only to one party.
i) Thus, key management in a public-key system is more secure than
in a secret-key system because the parties do not have to agree on,
transmit, and handle the one secret key.
c) RSA, named for its developers (Rivest, Shamir, and Adleman), is the most
commonly used public-key method.
d) A public-key system is used to create digital signatures (fingerprints).
i) A digital signature is a means of authenticating an electronic
document, for example, the validity of a purchase order,
acceptance of a contract, or financial information.
ii) The sender uses its private key to encode all or part of the
message, and the recipient uses the sender's public key to
decode it. Hence, if that key decodes the message, the
sender must have written it.
iii) One variation is to send the message in both plaintext and
cyphertext. If the decoded version matches the plaintext
version, no alteration has occurred.
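The key relationship behind a digital signature can be shown with toy RSA numbers. The primes and exponents below are deliberately tiny and purely illustrative; real RSA uses very large primes plus padding schemes:

```python
# Toy RSA signature: the sender encodes with its PRIVATE key, and anyone
# can decode with the PUBLIC key, which authenticates the sender.
# Tiny textbook numbers -- never use values like these in practice.

p, q = 61, 53
n = p * q                            # modulus, part of both keys
e = 17                               # public exponent (widely known)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (kept secret)

def sign(message: int) -> int:
    return pow(message, d, n)        # sender applies the private key

def verify(message: int, signature: int) -> bool:
    return pow(signature, e, n) == message   # anyone applies the public key

sig = sign(1234)
# If the public key recovers the message, the holder of the private key
# must have produced the signature.
```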
g. Hot-site and cold-site backup facilities. A hot site is a fully
operational processing facility, often maintained by a service bureau, that is
immediately available. A cold site is a shell facility where the user can quickly
install computer equipment.
1) A hot site with updated software and data that can begin operations in minutes is
a flying-start site.