
Ohio Department of Mental Retardation and
Developmental Disabilities

CART Test Plan
Version 0.2

Revision History

Date        Version   Description      Author
4/15/2005   V0.1      Draft creation   PVK
4/21/2005   V0.2      Draft update     PVK

Table of Contents

1.  Introduction
    1.1  Purpose
    1.2  Background
    1.3  Scope
    1.4  Project Identification
2.  Requirements for Test
3.  Test Strategy
    3.1  Testing Types
         3.1.1  Function Testing
         3.1.2  User Interface Testing
         3.1.3  Security and Access Control Testing
         3.1.4  Recovery Testing
    3.2  Tools
4.  Resources
    4.1  Roles
    4.2  System
5.  Project Milestones
6.  Deliverables
    6.1  Test Logs
    6.2  Defect Reports
Appendix A:  Project Tasks


Test Plan
1. Introduction

1.1 Purpose
This Test Plan document for the CART Project supports the following objectives:

- Perform system testing in a stable test environment to ensure that new or existing functionality meets requirements and desired goals or outcomes, on an iterative basis (periodically and continuously).

- The functional business units validate that the application continues to meet stated requirements, and defects are logged and assigned for resolution using the issue-tracking system provided for this purpose.

- A testing matrix checklist will be followed for each iteration to ensure that all test cases defined or required for that iteration are completed or, where a failure exists, that documentation is created detailing the conditions that produced the failure. (A minimal sketch of such a checklist appears at the end of this section.)

The resources required for testing the application are estimated as follows, by functional business area:

- Accreditation: 1 to 2 hours per iteration (depending on feature set)
- Licensure: 1 hour per iteration
- SLQA / Provider Compliance: 1 hour per iteration

Deliverable elements of the test project are:

- Completed test plan, indicating a pass or fail for the respective business areas
- Documentation supporting failure incidents and the steps to recreate them
- Sign-off of functionality for that iteration
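The testing matrix checklist referenced above could take a very simple form. The sketch below is a minimal, hypothetical illustration only; the business areas come from the estimates above, but the test-case names and results are placeholders, not items from the project's actual checklist.

```python
# Minimal sketch of an iteration testing-matrix checklist.
# Test-case names and results are illustrative placeholders.
checklist = {
    "Accreditation": {"create survey group": "pass", "publish survey": "fail"},
    "Licensure": {"conduct survey on laptop": "pass"},
    "SLQA / Provider Compliance": {"upload survey data": "not run"},
}

def iteration_summary(checklist):
    """Report, per business area, any test case that did not pass."""
    open_items = {}
    for area, cases in checklist.items():
        failures = [name for name, result in cases.items() if result != "pass"]
        if failures:
            open_items[area] = failures
    return open_items

if __name__ == "__main__":
    for area, failures in iteration_summary(checklist).items():
        print(f"{area}: follow-up needed for {', '.join(failures)}")
```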

1.2 Background

This iterative testing approach has proven instrumental in ensuring that an application is continuously monitored and validated for functionality and defects early in the project life. By testing iteratively, problems surface long before a production implementation, where the resolution of defects is more difficult, more costly, and more detrimental to a project's success.

The amount of time spent on iterative testing is generally no more than if testing were done traditionally, at the end of the project timeline. Additionally, identifying missing requirements or defects early usually results in less time spent on functional validation and defect prevention.

1.3 Scope

The stages of testing to be performed during the initial development of the CART application are as follows:

- Unit testing, to be done by development staff
- System testing, to be done by business area staff
- Integration testing may not necessarily be done, due to scope, but may occur; if so, it will be done by development staff

The list of features that will be tested is as follows:

- Application usability: ease of use, look and feel, and general navigation are covered here
- Functional aspects: requirements will be used to cross-reference with functional areas
- Disconnected environment: the laptop / desktop environment will be used to validate that the deployable application functions when separated from the network
- Web environment: web interface functionality is crucial to the success of this application and thus must be validated
- Database recovery: laptop database usage and recovery will be tested to provide a stable environment for remote use, including database backup and recovery
- Exception handling: network failure, connection issues, and general application failure points will be tested to ensure these conditions are handled properly and communicated to the application user

If testing by the business functional area owners is not performed during each iteration, there is a risk that missing requirements and/or defects may be part of the finished application. It is imperative that the business groups engage in this process continuously and interactively with the technology group to ensure that issues and successes are communicated in a timely manner.

1.4 Project Identification

The table below identifies the documentation used in developing the test plan and its availability:

Document                               Created or   Received or   Author or   Notes
                                       Available    Reviewed      Resource
Requirements Specification             Yes / No     Yes / No
Functional Specification               Yes / No     Yes / No
Use-Case Reports                       Yes / No     Yes / No
Project Plan                           Yes / No     Yes / No      AW
Design Specifications                  Yes / No     Yes / No      PVK
Prototype                              Yes / No     Yes / No      PVK
User Manuals                           Yes / No     Yes / No
Business Model or Flow                 Yes / No     Yes / No      AW
Data Model or Flow                     Yes / No     Yes / No      PVK
Business Functions and Rules           Yes / No     Yes / No
Project or Business Risk Assessment    Yes / No     Yes / No      PVK

2. Requirements for Test

The listing below identifies the items (functional and non-functional requirements) that have been identified as targets for testing. This list represents what will be tested:

- Survey Creation
  o Create Survey Groups
  o Create Survey Definitions
  o Create Survey Sections
  o Create Survey Layouts
  o Create Questions
  o Create Answers
- Survey Maintenance
  o Ability to change or delete items created during Survey Creation
- Survey Administration
  o Role / User administration
  o Survey Status changes: published, unavailable, deleted
  o User / Manager email accounts and notification chain
- Survey Conduction
  o Laptop / desktop environment
  o Web environment
- Data transfer
  o Uploading of survey data to server
  o Downloading of survey data to laptop/desktop
  o Downloading of completed / incomplete surveys to laptop/desktop
- Desktop application update notification
  o Initial download of application to laptop/desktop
  o Update laptop/desktop application with updates

3. Test Strategy

The list above detailed what is to be tested. This section describes how that will be accomplished. Each bulleted item will initially be tested individually as it becomes available for testing. Eventually, all items will be tested in a final system/integration test that validates that the application is complete and meets the defined requirements.

- Survey Creation: follow the steps for creating a survey.
  o Create Survey Groups: create a group containing one or more surveys. All surveys belong to at least one group. A group is defined as containing all the surveys that need to be performed for that survey step to be completed. In some cases, a starting survey needs to be performed before the others, so there is a dependency in survey order and completion.
  o Create Survey Definitions: the definition contains the details about the survey, such as its name, its status, whether it can be versioned, etc. Without the definition, a survey cannot exist.
  o Create Survey Sections: all surveys have at least one section and may have as many as required. Sections are required because this is where questions are assigned.
  o Create Survey Layouts: the layout simply defines the heading/sub-heading shown on the interfaces, indicating which survey is being conducted. Additional properties such as a cover page, logos (images), and the like are configured in this interface.
  o Create Questions: without questions, a survey has no value. Through this process, questions can be validated against duplication (within the practical limits of finding duplicates) and can be assigned to rules.
  o Create Answers: answers need to be provided for questions. The answer interface is somewhat complicated, as it provides a wide variety of choices for the answer type, the source of data for the answer list, and how the answer is to appear (where applicable).
- Survey Maintenance
  o Ability to change or delete items created during Survey Creation: standard maintenance functionality.
- Survey Administration
  o Role / User administration: the assignment of users to roles (user, administrator, manager), and of roles to surveys, is a necessary function for safety and security.
  o Survey Status changes (published, unavailable, deleted): a change to a survey's status has repercussions, especially if surveys are being conducted and the survey is no longer to be used. Changes also need to be communicated to conductors and management; thus the communication channel must be tested.
  o User / Manager email accounts and notification chain: this step is needed to ensure that the communications detailed under Survey Status changes are successful.
- Survey Conduction
  o Laptop / desktop environment: clearly, the survey, once defined, is to be executed. The laptop is the primary environment to be tested, as it is the disconnected environment whose priority and complexity need to be validated.
  o Web environment: surveys can be completed through the web, and the usage of web-based surveys will certainly grow over time. This environment needs to be validated as well as the laptop environment.
- Data transfer
  o Uploading of survey data to server: once the laptop environment contains survey data, it needs to be uploaded to the server for further processing. The upload process introduces a few layers of complication and needs to be tested thoroughly to ensure that data is uploaded successfully and not lost. (An illustrative verification sketch follows this list.)
  o Downloading of survey data to laptop/desktop: in order to conduct a survey on the laptop, it needs to be downloaded. This is the stage where that test takes place.
  o Downloading of completed / incomplete surveys to laptop/desktop: if the data from a completed or partially completed survey needs to be downloaded to the laptop (to continue or update the survey), then that transfer process needs to be validated.
- Desktop application update notification
  o Initial download of application to laptop/desktop: in order to conduct a survey on the laptop, the application needs to be loaded. Initially, the application downloaded to the laptop is the current available version. The download and install/execution need to be validated.
  o Update laptop/desktop application with updates: should changes occur (and they will) to the laptop/desktop application, those changes need to be redistributed to the laptop/desktop.
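As a rough illustration of the kind of check the data-transfer tests could perform, the sketch below compares row counts and a checksum between a "laptop" and a "server" copy of a survey table after an upload. It uses SQLite in memory purely as a stand-in for the real laptop (MySQL) and server (SQL Server) databases, and the table and column names are hypothetical, not taken from the CART schema.

```python
import hashlib
import sqlite3

# Stand-in databases: SQLite in memory replaces the real laptop (MySQL)
# and server (SQL Server) databases. Table/column names are hypothetical.
def make_db(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE survey_answers (survey_id INTEGER, question_id INTEGER, answer TEXT)")
    conn.executemany("INSERT INTO survey_answers VALUES (?, ?, ?)", rows)
    return conn

def table_fingerprint(conn):
    """Row count plus a checksum over all rows, read in a stable order."""
    rows = conn.execute(
        "SELECT survey_id, question_id, answer FROM survey_answers "
        "ORDER BY survey_id, question_id").fetchall()
    digest = hashlib.sha256(repr(rows).encode("utf-8")).hexdigest()
    return len(rows), digest

laptop = make_db([(1, 1, "Yes"), (1, 2, "No"), (1, 3, "N/A")])
server = make_db([(1, 1, "Yes"), (1, 2, "No"), (1, 3, "N/A")])  # state after the upload

assert table_fingerprint(laptop) == table_fingerprint(server), \
    "Upload verification failed: server data does not match laptop data"
print("Upload verification passed: row counts and checksums match")
```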

3.1 Testing Types

3.1.1 Function Testing
Function testing of the various areas should focus on any requirements for test that can be traced directly to business functions and business rules. The goals of these tests are to verify proper data acceptance, processing, and retrieval, and the appropriate implementation of the business rules. This type of testing is based upon black-box techniques; that is, verifying the application and its internal processes by interacting with the application via the Web or GUI interfaces and analyzing the output or results. Outlined below is the testing recommended for each application area:

Test Objective:          Ensure proper testing of functionality, including navigation, data entry, processing, and retrieval.

Technique:               Execute each use case, use-case flow, or function, using valid and invalid data, to verify the following:
                         - The expected results occur when valid data is used.
                         - The appropriate error or warning messages are displayed when invalid data is used.
                         - Each business rule is properly applied (where defined).

Completion Criteria:     - All planned tests have been executed.
                         - All identified defects have been addressed.

Special Considerations:  Identify or describe those items or issues (internal or external) that impact the implementation and execution of function testing.
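The sketch below illustrates the valid/invalid-data technique using Python's built-in unittest module. The validate_answer function is a hypothetical stand-in for a CART business rule (for example, that a required question must receive a non-empty answer); it is not the application's actual code.

```python
import unittest

def validate_answer(answer, required=True):
    """Hypothetical stand-in for a CART business rule: a required
    question must receive a non-empty answer."""
    if required and (answer is None or answer.strip() == ""):
        raise ValueError("An answer is required for this question.")
    return True

class FunctionTestExample(unittest.TestCase):
    def test_valid_data_is_accepted(self):
        # The expected result occurs when valid data is used.
        self.assertTrue(validate_answer("Yes"))

    def test_invalid_data_raises_error_message(self):
        # An appropriate error is raised when invalid data is used.
        with self.assertRaises(ValueError):
            validate_answer("   ")

    def test_business_rule_optional_question(self):
        # The business rule is applied where defined: optional questions may be blank.
        self.assertTrue(validate_answer("", required=False))

if __name__ == "__main__":
    unittest.main()
```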

3.1.2 User Interface Testing

User Interface (UI) testing verifies a user's interaction with the software. The goal of UI testing is to ensure that the user interface provides the user with the appropriate access and navigation through the functions of the functional areas. In addition, UI testing ensures that the objects within the UI function as expected and conform to corporate or industry standards.

Test Objective:          Verify the following:
                         - Navigation through functional areas properly reflects business functions and requirements, including window-to-window, field-to-field, and use of access methods (tab keys, mouse movements, accelerator keys).
                         - Window objects and characteristics, such as menus, size, position, state, and focus, conform to standards.

Technique:               Create or modify tests for each window to verify proper navigation and content for each application window or web page.

Completion Criteria:     Each window/page is successfully verified to remain consistent with the benchmark version or within the acceptable standard.

Special Considerations:
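One lightweight way to express the window-to-window navigation checks is as a navigation map that tests can verify for consistency. The sketch below is purely illustrative; the window names and allowed transitions are hypothetical and are not taken from the CART design.

```python
# Hypothetical navigation map: each window lists the windows it can reach.
navigation_map = {
    "Login":             ["Survey List"],
    "Survey List":       ["Survey Definition", "Conduct Survey", "Login"],
    "Survey Definition": ["Survey Sections", "Survey List"],
    "Survey Sections":   ["Survey Definition"],
    "Conduct Survey":    ["Survey List"],
}

def check_navigation(nav_map):
    """Verify every navigation target is a defined window, and that every
    window is reachable from the Login window."""
    problems = []
    for window, targets in nav_map.items():
        for target in targets:
            if target not in nav_map:
                problems.append(f"{window} links to undefined window {target}")
    reachable, frontier = {"Login"}, ["Login"]
    while frontier:
        for target in nav_map.get(frontier.pop(), []):
            if target not in reachable:
                reachable.add(target)
                frontier.append(target)
    for window in nav_map:
        if window not in reachable:
            problems.append(f"{window} is not reachable from Login")
    return problems

assert check_navigation(navigation_map) == [], check_navigation(navigation_map)
print("Navigation map is consistent")
```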

3.1.3 Security and Access Control Testing

Security and Access Control Testing focuses on two key areas of security:

- Application-level security, including access to the data or business functions
- System-level security, including logging into or remote access to the system

Application-level security ensures that, based upon the desired security, actors are restricted to specific functions or use cases, or are limited in the data that is available to them. For example, everyone may be permitted to enter data and create new accounts, but only managers can delete them. If there is security at the data level, testing ensures that "user type one" can see all customer information, including financial data, whereas "user type two" sees only the demographic data for the same client.

System-level security ensures that only those users granted access to the system are capable of accessing the applications, and only through the appropriate gateways.

Test Objective:          Verify the following:
                         - Application-level security: a user can access only those functions or data for which their user type has been granted permissions.
                         - System-level security: only those users with access to the system and applications are permitted to access them.

Technique:               - Application-level security: identify and list each user type and the functions or data each type has permissions for.
                         - Create tests for each user type and verify each permission by performing actions specific to each user type.
                         - Modify the user type and re-run the tests for the same users. In each case, verify that the additional functions or data are correctly available or denied.

Completion Criteria:     For each known user type, the appropriate functions or data are available, and all actions function as expected.

Special Considerations:
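The application-level checks can be driven from a simple permission matrix, as in the sketch below. The roles and function names here are hypothetical placeholders; the real matrix would come from the "identify and list each user type" step above, and is_allowed stands in for the application's own security check.

```python
# Hypothetical permission matrix: which functions each user type may use.
permissions = {
    "user":          {"conduct survey", "view survey"},
    "manager":       {"conduct survey", "view survey", "change survey status"},
    "administrator": {"conduct survey", "view survey", "change survey status",
                      "role administration"},
}

def is_allowed(user_type, function_name):
    """Stand-in for the application's own application-level security check."""
    return function_name in permissions.get(user_type, set())

def run_security_checks():
    # Exercise every user type against every known function and verify that
    # access is granted or denied exactly as the permission matrix specifies.
    all_functions = set().union(*permissions.values())
    for user_type, allowed in permissions.items():
        for function_name in all_functions:
            expected = function_name in allowed
            actual = is_allowed(user_type, function_name)
            assert actual == expected, (
                f"{user_type} / {function_name}: expected "
                f"{'allowed' if expected else 'denied'}")
    print("All user types have exactly the expected access")

run_security_checks()
```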

3.1.4 Recovery Testing

Recovery testing ensures that the target-of-test can successfully recover from a variety of hardware, software, or network malfunctions without undue loss of data or data integrity.

Recovery testing is an antagonistic test process in which the application or system is exposed to extreme conditions, or simulated conditions, to cause a failure, such as device Input/Output (I/O) failures or invalid database pointers and keys. Recovery processes are invoked, and the application or system is monitored and inspected to verify that proper application, system, and data recovery has been achieved.

Test Objective:          Verify that recovery processes (manual or automated) properly restore the database, applications, and system to a desired, known state. The following types of conditions are to be included in the testing:
                         - Power interruption to the client
                         - Communication interruption via network servers

Technique:               Tests created for Function and Business Cycle testing should be used to create a series of transactions. Once the desired starting test point is reached, the following actions should be performed, or simulated, individually:
                         - Power interruption to the client: power the PC down / close the application (hard terminate).
                         - Communication interruption via network servers: simulate or initiate communication loss with the network (physically disconnect communication wires, or power down network servers or routers).

Completion Criteria:     In all cases above, the application, database, and system should, upon completion of recovery procedures, return to a known, desirable state.

Special Considerations:  Recovery testing is highly intrusive. Procedures to disconnect cabling (simulating power or communication loss) may not be desirable or feasible.
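A very reduced illustration of the completion criterion (return to a known, desirable state) is sketched below: a simulated failure interrupts a batch of survey updates, the recovery procedure rolls the transaction back, and the test asserts that the database matches the known starting state. SQLite stands in for the real CART database, and the table is hypothetical.

```python
import sqlite3

# SQLite stands in for the real CART database; the table is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE survey_status (survey_id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO survey_status VALUES (1, 'published')")
conn.commit()

known_state = conn.execute("SELECT * FROM survey_status ORDER BY survey_id").fetchall()

try:
    # A batch of updates interrupted partway through, simulating a power or
    # communication failure before the transaction can be committed.
    conn.execute("UPDATE survey_status SET status = 'unavailable' WHERE survey_id = 1")
    raise ConnectionError("simulated network interruption")
except ConnectionError:
    # Recovery procedure: roll back the incomplete transaction.
    conn.rollback()

recovered_state = conn.execute("SELECT * FROM survey_status ORDER BY survey_id").fetchall()
assert recovered_state == known_state, "Recovery did not restore the known state"
print("Database returned to the known, desirable state after the simulated failure")
```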

3.2 Tools

The following tools will be employed for this project:

Purpose              Tool                 Vendor / In-house              Version
Test Management      Excel                Microsoft                      Any
Defect Tracking      Gemini               http://www.countersoft.com/    Latest
Project Management   Unknown, or N/A
DBMS tools           SQL Server / MySQL   Microsoft / MySQL              2000 / 4.x

4. Resources

4.1 Roles

This table shows the staffing assumptions for the project.

Human Resources

Worker: Test Manager, Test Project Manager
Minimum resources recommended (full-time roles allocated): 1-3
Comments: Provides management oversight.
Responsibilities:
- provide technical direction
- acquire appropriate resources
- provide management reporting

Worker: Test Designer
Minimum resources recommended (full-time roles allocated): 1-2
Comments: Identifies, prioritizes, and implements test cases.
Responsibilities:
- generate test plan
- generate test model
- evaluate effectiveness of test effort

Worker: Tester
Minimum resources recommended (full-time roles allocated): 5-x
Comments: Executes the tests.
Responsibilities:
- execute tests
- log results
- recover from errors
- document change requests

Worker: Test System Administrator
Minimum resources recommended (full-time roles allocated): 1
Comments: Ensures the test environment and assets are managed and maintained.
Responsibilities:
- administer test management system
- install and manage access to test systems

Worker: Database Administrator, Database Manager
Minimum resources recommended (full-time roles allocated): 1-2
Comments: Ensures the test data (database) environment and assets are managed and maintained.
Responsibilities:
- administer test data (database)

4.2 System

The following table sets forth the system resources for the testing project. The specific elements of the test system are not fully known at this time. It is recommended that the system simulate the production environment, scaling down the accesses and database sizes if and where appropriate.

System Resources

Resource              Name / Type
Database Server
  Network or Subnet   156.63.178.215
  Server Name         MRDDTESTSQL
  Database Name       CART
Client Test PCs       Include special configuration requirements:
                      MySQL database installation (version 4.x)
Test Environment
  Network or Subnet   156.63.178.216
  Server Name         MRDDTESTWEB
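Before each test iteration, it may be worth confirming that the test servers listed above are reachable from the tester's machine. The sketch below uses only the Python standard library; the host addresses come from the System Resources table, but the port numbers (1433 for SQL Server, 3306 for MySQL, 80 for the web server) are assumed defaults, not values stated in this plan.

```python
import socket

# Hosts come from the System Resources table; ports are assumed defaults
# (SQL Server 1433, MySQL 3306, HTTP 80) and should be confirmed.
test_endpoints = {
    "MRDDTESTSQL (SQL Server)": ("156.63.178.215", 1433),
    "MRDDTESTSQL (MySQL)":      ("156.63.178.215", 3306),
    "MRDDTESTWEB (HTTP)":       ("156.63.178.216", 80),
}

def smoke_test(endpoints, timeout=3.0):
    """Report which test-environment endpoints accept a TCP connection."""
    for name, (host, port) in endpoints.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                print(f"OK          {name} ({host}:{port})")
        except OSError as exc:
            print(f"UNREACHABLE {name} ({host}:{port}): {exc}")

if __name__ == "__main__":
    smoke_test(test_endpoints)
```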

5. Project Milestones

Testing of CART should incorporate test activities for each of the test efforts identified in the previous sections. Separate project milestones should be identified to communicate project status and accomplishments. The table below represents an example plan for the first phase of testing the CART application.

Milestone Task    Effort      Start Date   End Date
Plan Test         2 hours     5/2/2005     5/2/2005
Design Test       1 hour      5/2/2005     5/2/2005
Implement Test    1 hour      5/3/2005     5/3/2005
Execute Test      2-3 hours   5/9/2005     5/9/2005
Evaluate Test     1 hour      5/10/2005    5/10/2005

6. Deliverables

This section lists the documents, tools, and reports that will be created and delivered, along with who is responsible for producing each deliverable.

6.1 Test Logs

- Test Schedule: schedule representing testing date(s) and the resources identified to perform the tests
  o development staff
  o business staff
- Test Plan: document indicating the functionality being tested in this iteration
  o development staff
- Test Cases: scripted test actions and processes for detailed requirements
  o development staff
  o business staff
- Testing Results Report: summary report of tests performed and results
  o development staff
- Defect Report: report identifying new defects found, or old defects that may still exist
  o development staff
- Test Phase Status: test iteration status, identifying whether that iteration passed or failed
  o development staff

6.2 Defect Reports

The incident-tracking tool that will be provided will serve as a central repository for categories such as project testing state, incident tracking, and reporting. All those involved with testing and evaluating the CART application will be provided with a user ID to access the tracking tool to accomplish their tasks regarding testing and incident reporting.

Appendix A: Project Tasks

Below are the test-related tasks:

Plan Test
- identify requirements for test
- assess risk
- develop test strategy
- identify test resources
- create schedule
- generate Test Plan

Design Test
- identify and describe test cases
- identify and structure test procedures

Implement Test
- identify test-specific functionality in the Design and Implementation Model
- establish external data sets or data to support tests
- cleanse databases of previous test data, as needed (possible refreshes from snapshots)

Execute Test
- execute test procedures
- evaluate execution of tests
- recover from halted tests
- verify the results
- investigate unexpected results
- log defects

Evaluate Test
- evaluate test-case coverage
- analyze defects
- determine if Test Completion Criteria and Success Criteria have been achieved
