1. INTRODUCTION
8. STATUS REPORTING
    8.1. Status Reporting
9.1. Issues/Risks
9.2. Assumptions
10. Signoff
11. APPENDICES
    11.1. Purpose of Error Review Team
    11.2. Error Review Team Meeting Agenda
    11.3. Classification of Bugs
    11.4. Procedure for Maintenance of Error Logging System
    11.5. Overnight Processing - Checking Bookkeeping & CIS
11.7. SOFTWARE QUALITY ASSURANCE MEASURES
    (i) DATES
    (ii) EFFORT
    (iii) VOLUME
    (iv) QUALITY
    (v) TURNAROUND
1. INTRODUCTION
The aim of this phase of the project is to implement a new X System platform that will
enable:
This programme will result in significant changes to the current departmental and inter-
office processes. The functionality will be delivered on a phased basis.
This document is to serve as the Draft Test Approach for the Business Systems
Development Project.
The Test Approach sets the scope of system testing, the overall strategy to be
adopted, the activities to be completed, the general resources required and the
methods and processes to be used to test the release.
Test Planning details the activities, dependencies and effort required to conduct the
System Test.
There will be several formal review points before and during system test. This is a vital
element in achieving a quality product.
1. Design Documentation
2. Testing Approach
3. Unit Test Plans
4. Unit Test Conditions & Results
5. System Test Conditions
6. System Test Progress
7. Post System Test Review
The software delivered interfaces correctly with existing systems, including Windows
98.
The above V Model shows the optimum testing process, in which test preparation commences
as soon as the Requirements Catalogue is produced. System Test planning commenced at
an early stage, and for this reason the System Test will benefit from Quality initiatives
throughout the project lifecycle.
The responsibility for testing between the Project & Software Quality Assurance (S.Q.A.) is
as follows:
Phase 1 Deliverables
2.1.2. EXCLUSIONS
When the scope of each Phase has been agreed and signed off, no further items will be
considered for inclusion in this release, except:
(1) Where there is the express permission and agreement of the Business Analyst and the
System Test Controller;
(2) Where the changes/inclusions will not require significant effort on behalf of the test
team (i.e. requiring extra preparation - new test conditions etc.) and will not adversely
affect the test schedule.
The diagram above outlines the Test Process approach that will be followed.
a. Organise Project involves creating a System Test Plan, Schedule & Test Approach, and
requesting/assigning resources.
b. Design/Build System Test involves identifying Test Cycles, Test Cases, Entrance & Exit Criteria,
Expected Results, etc. In general, test conditions/expected results will be identified by the Test
Team in conjunction with the Project Business Analyst or Business Expert. The Test Team will then
identify Test Cases and the Data required. The Test conditions are derived from the Business
Design and the Transaction Requirements documents.
d. Build Test Environment includes requesting/building hardware, software and data set-ups.
e. Execute Project Integration Test - See Section 3 - Test Phases & Cycles
f. Execute Operations Acceptance Test - See Section 3 - Test Phases & Cycles
g. Signoff - Signoff happens when all pre-defined exit criteria have been achieved. See Section 2.4.
2.2.1. Exclusions
SQA will not deal directly with the business design regarding any design / functional issues /
queries.
The development team is the supplier to SQA - if design / functional issues arise they should
be resolved by the development team and its suppliers.
Requirements Catalogue
Business Design Specification
Year 2000 Development Standards
Other functional documents produced during the course of the project, e.g.
resolutions to issues, change requests and feedback.
This stage will also include Validation Testing, which is intensive testing of the new front-
end fields and screens: Windows GUI standards; valid, invalid and limit data input; screen &
field look and appearance; and overall consistency with the rest of the application.
The third stage includes Specific Functional testing - these are low-level tests which aim
to test the individual processes and data flows.
Regression testing will be performed using the automated testing tool.
2.4.1. Entrance Criteria
The Entrance Criteria specified by the System Test Controller should be fulfilled before
System Test can commence. In the event that any criterion has not been achieved, the
System Test may commence only if the Business Team and Test Controller are in full
agreement that the risk is manageable.
All developed code must be unit tested. Unit and Link Testing must be completed
and signed off by development team.
System Test plans must be signed off by Business Analyst and Test Controller.
All human resources must be assigned and in place.
All test hardware and environments must be in place, and free for System test use.
The Acceptance Tests must be completed, with a pass rate of not less than 80%.
Acceptance Tests:
25 test cases will be performed for the acceptance tests. To meet the acceptance criteria,
20 of the 25 cases must be completed successfully, i.e. a pass rate of 80% must be
achieved before the software will be accepted for System Test proper to start. This means
that any errors found during acceptance testing must not prevent the completion of 80%
of the acceptance test cases.
Note: These tests are not intended to perform in depth testing of the software.
[For details of the acceptance tests to be performed see
X:\Testing\Phase_1\Testcond\Criteria.doc]
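The 80% acceptance gate above can be expressed as a simple check. The sketch below is illustrative only; the names (acceptance_gate, PASS_THRESHOLD) are assumptions and not part of the project's actual tooling.

```python
# Illustrative sketch of the acceptance gate described above:
# 20 of the 25 acceptance test cases (80%) must pass before
# System Test proper may start. All names are hypothetical.

PASS_THRESHOLD = 0.80
TOTAL_CASES = 25

def acceptance_gate(cases_passed: int, total_cases: int = TOTAL_CASES) -> bool:
    """Return True if the release may enter System Test proper."""
    if total_cases == 0:
        return False  # nothing executed, nothing proven
    return cases_passed / total_cases >= PASS_THRESHOLD

# e.g. acceptance_gate(20) -> True (80%), acceptance_gate(19) -> False (76%)
```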
Resumption Criteria
In the event that system testing is suspended, resumption criteria will be specified and
testing will not re-commence until the software meets these criteria.
2.4.2. Exit Criteria
The Exit Criteria detailed below must be achieved before the Phase 1 software can be
recommended for promotion to Operations Acceptance status. Furthermore, I recommend
a minimum of 2 days' Final Integration testing AFTER the final fix/change has been
retested. [See section 9.3]
All High Priority errors from System Test must be fixed and tested
If any medium or low-priority errors are outstanding - the implementation risk must
be signed off as acceptable by Business Analyst and Business Expert
Project Integration Test must be signed off by Test Controller and Business Analyst.
Business Acceptance Test must be signed off by Business Expert.
System Testing
Operations Acceptance Testing
The main thrust of the approach is to intensively test the front end in the first two releases,
thus raising approximately 80% of errors in this period. With the majority of these errors
fixed, standard and/or frequently used actions will be tested to prove individual elements
and total system processing in Release v0.3. Regression testing of outstanding errors will be
performed on an ongoing basis.
When all errors (which potentially impact overall processing) are fixed, an additional set of
test cases are processed in Release v0.4 to ensure the system works in an integrated
manner. It is intended that Release v0.4 be the final proving of the system as a single
application. There should be no A or B class errors outstanding prior to the start of Release
v0.4 testing.
Testing by Phase

Release v0.1:  Acceptance 1; Functional 1; User Acceptance
Release v0.2:  Acceptance 2; Functional 2; Regression 1
Release v0.3:  Acceptance 3; Functional 3; Performance 1; Bash & Multi-User Testing;
               Regression 1; Regression 2
Release v0.4:  Integration 1; Technical 1; Regression 1; Regression 2; Regression 3;
               Installation Test
Contingency:   Per Bug Fix Test Only
Release Schedule:
Notes:
It is intended that 80% of the functionality will have been tested in full prior to the Phase 3
Release.
All the functionality must be present in the Phase 3 Release.
No previously undelivered functionality will be accepted for testing after Phase 3.
The diagram above outlines the Test Approach. Boxes 1 - 6 show the major review stages
prior to Test execution. Boxes 7 - 10 show the phases planned for & after test execution.
While the above diagram concentrates on the testing aspect of SQA's role, there is an
ongoing role also, in ensuring the quality of the major deliverables throughout the lifecycle
of the project. SQA's role will be to ensure that all Quality Inspections occur for all the
agreed deliverables and that follow up actions and initiatives are pursued.
5.2. Hardware
One separate, controlled system will be required for the initial phase
of testing, set up as per one standard, complete office environment.
In order to maintain the integrity of the test environment, this network
will not be accessible to anybody outside this project. The printers are
also exclusively for use by the test network.
1 Network Controller
6 Networked PC's (See below)
1 DAP Workstation
1 Motorola 6520
1 Alpha AXP Server
1 Batch Waste Printer
1 HP LaserJet 4v Printer
PC Specifications
The 6 PC's required for the test environment will include the following:
1 x P100, 1Gb HD, 16Mb RAM [Current Minimum Specification]
3 x P166, 1.5Gb HD, 32Mb RAM [Current Standard Specification]
1 x P333, 2.5Gb HD, 64Mb RAM [Current Maximum Specification]
These specifications are the various specifications currently in use in different branches.
1 x Pentium running Windows NT is also required as the Test center for controlling and
executing the automated testing.
5.3. Software
Test IMS environments
Test IMS region X will be required for System Testing. Additional or amended
data will be populated where required.
[See Chapter x ]
Testers
Identify Test Data
Execute Test Conditions and Markoff results
Raise Software Error Reports
Administer Error Measurement System
IMS Support
Provide System Test Support
Support IMS Regions
Resolve Spooling Issues (if necessary)
Bookkeeping Integration & Compliance (if necessary)
Resolve queries arising from remote backup
Bookkeeping Support
Provide Bookkeeping Technical support, if required.
Resolve queries, if required.
Technical Support
Provide support for hardware environment
Provide support for Test software
Promote Software to system test environment
Access Support
Provide and support Test Databases
Errors, which are agreed as valid, will be categorised as follows by the Error Review Team :-
Category A - Serious errors that prevent System Test of a particular function continuing,
or serious data-type errors.
Category B - Serious or missing-data-related errors that will not prevent implementation.
Category C - Minor errors that do not prevent or hinder functionality.
Category A errors should be turned around by Bug Fix Team in 48 hours (this is turn around
from time raised at Error Review Team meeting to time fix is released to System Test
environment). In the event of an A error that prevents System Test continuing, the turnaround
should be within 4 hours.
Category B errors should be turned around in 1 day; while
Category C errors should be turned around in 3 days.
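The turnaround targets above can be summarised in a small lookup. This is a hedged sketch only; the function name and table are illustrative assumptions, not part of the error-logging system itself.

```python
# Hypothetical encoding of the turnaround targets above, in hours.
# Turnaround is measured from the Error Review Team meeting to the
# time the fix is released to the System Test environment.

TURNAROUND_HOURS = {
    "A": 48,  # serious errors
    "B": 24,  # 1 day
    "C": 72,  # 3 days
}
BLOCKING_A_HOURS = 4  # an A error that prevents System Test continuing

def sla_hours(category: str, blocks_system_test: bool = False) -> int:
    """Return the target turnaround for an agreed, categorised error."""
    if category == "A" and blocks_system_test:
        return BLOCKING_A_HOURS
    return TURNAROUND_HOURS[category]
```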
However, the release of newer versions of the software will be co-ordinated with the Test
Controller - new versions should only be released when agreed, and where there is a definite
benefit (i.e. the version contains fixes for X or more bugs).
8. STATUS REPORTING
8.1. Status Reporting
Test preparation and Testing progress will be formally reported during a weekly Status Meeting.
The attendees at this meeting are :-
A status report will be prepared by the Test Controller to facilitate this meeting. This report will
contain the following information :-
2. The design of the software must be final, and design documentation must be complete,
informative and signed off by all parties before System Test proper commences.
3. A weakness in the 'phased delivery' approach is that the high degree of interdependency in
the code means that the smallest changes can have serious effects on areas of the application
which apparently have not been changed. The assumption of the test team is that previously
delivered and tested functionality will only require regression testing to verify that it 'still' works,
i.e. testing will not be intended to discover new errors. Because of this I recommend that there be
a minimum of 2 days' regression testing AFTER the final fix/change has been retested. This,
however, imposes a fixed time constraint on the completion of system testing which requires the
agreement of the Project Leader.
4. Automated Testing
The majority of the Regression testing will be performed using the automated test tool. However,
due to the workload required to implement (and debug) the test tool fully, it is likely that the
return will only be maximised after the third run of the regression test suite for each release.
The other major uses of the test tool are for (1) Load Testing, (2) Multi-User Testing, and (3)
Repetitive data entry.
9.2. Assumptions
Formal Signoff
This document must be formally approved before System Test can commence. The following
people will be required to sign off :-
Group Signatures:
Project Manager Byron Ruthlenn
SQA Colm Jones
Testing Team Dion Hais
Development Team Erwin Smith
11. APPENDICES
11.1. Purpose of Error Review Team.
Ensure maximum efficiency of the development and system testing teams for the release of the
new office software through close co-operation of all involved parties.
AOB
1. An "A" bug is either a showstopper or of such importance as to radically affect the
functionality of the system, i.e.:
1. The Test Controller will refer any major error/anomaly to either the Development Team
Leader or a designated representative on the development team before raising a formal error
record. This has several advantages :-
- it prevents the testers trying to proceed beyond 'showstoppers'
- it puts the developer on immediate notice of the problem
- it allows the developer to put on any traces that might be necessary to track down the error.
2. All bugs raised will be on the correct Error form, and contain all relevant data.
3. These errors will be logged on the day they occur with a status of 'RAISED'
4. There will be a daily 'System Test Support Group' meeting to discuss, prioritise and agree all
logged errors.
During this meeting some errors may be dropped, identified as duplicates, passed to
programmer, etc.
5. The Error Log will be updated with the status of all errors after this meeting. e.g. with pgmr,
dropped, duplicate.
6. Once errors have been fixed and 'rebundled' for a release, the paper forms must be passed to
the Test Controller, who will change their status to 'Fixed to be retested'.
7. Once the error has been retested and proven to be corrected, the status will be changed to
'Closed'.
8. Regular status reports will be produced from the Error system, for use in the Error Review
Team meetings.
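The status flow in steps 1-8 above can be summarised as a small state machine. The status names are taken from the procedure; the exact transition rules sketched here are an illustrative assumption (for example, a failed retest returning the error to the programmer).

```python
# Sketch of the error lifecycle from the procedure above. Statuses
# come from the document; the allowed-transition table is an
# assumption made for illustration.

ALLOWED_TRANSITIONS = {
    "RAISED": {"WITH PGMR", "DROPPED", "DUPLICATE"},
    "WITH PGMR": {"FIXED TO BE RETESTED"},
    "FIXED TO BE RETESTED": {"CLOSED", "WITH PGMR"},  # retest may fail
}

def advance(current: str, target: str) -> str:
    """Move an error to a new status, rejecting illegal jumps."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```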
(ii) EFFORT.
(iii) VOLUME.
(iv) QUALITY.
- No. of Tests Passed First Time
- Percentage of Tests Passed First Time
- No. of Errors Raised During Regression Testing
- No. of Errors Generated as a Result of Incorrect Fixes
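The first-time pass measures listed above reduce to a simple calculation; the sketch below is illustrative only, and the function name is an assumption.

```python
# Sketch of the 'Percentage of Tests Passed First Time' quality
# measure listed above. Names here are hypothetical.

def first_time_pass_rate(passed_first_time: int, tests_executed: int) -> float:
    """Percentage of executed tests that passed on their first run."""
    if tests_executed == 0:
        return 0.0  # avoid division by zero before testing starts
    return 100.0 * passed_first_time / tests_executed

# e.g. 40 of 50 tests passing first time gives 80.0
```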
(v) TURNAROUND.