
Systems Development Life Cycle

Table of Contents
CHAPTER 1: INTRODUCTION 1.0 Background 1.1 Purpose, Scope, and Applicability 1.1.1 Purpose 1.1.2 Scope 1.1.3 Applicability 1.2 Introduction to System Development Life Cycle (SDLC) 1.2.1 Initiation Phase 1.2.2 System Concept Development Phase 1.2.3 Planning Phase 1.2.4 Requirements Analysis Phase 1.2.5 Design Phase 1.2.6 Development Phase 1.2.7 Integration and Test Phase 1.2.8 Implementation Phase 1.2.9 Operations and Maintenance Phase 1.2.10 Disposition Phase 1.3 Controls/Assumptions 1.4 Documentation

CHAPTER 2: STRATEGIC PLANNING FOR INFORMATION SYSTEMS 2.0 Strategic Planning 2.1 Information Technology Investment Management (ITIM) 2.2 Enterprise Architecture 2.3 Performance Measurement 2.4 Business Process Reengineering 2.5 Systems Security

CHAPTER 3: INITIATION PHASE 3.0 Objective 3.1 Tasks and Activities 3.1.1 Identify the Opportunity to Improve Business Functions 3.1.2 Identify a Project Sponsor 3.1.3 Form a Project Organization 3.1.4 Document the Phase Efforts 3.1.5 Review and Approval to Proceed 3.2 Roles and Responsibilities 3.3 Deliverables 3.3.1 Concept Proposal 3.4 Issues for Consideration 3.5 Phase Review Activity

CHAPTER 4: SYSTEM CONCEPT DEVELOPMENT 4.0 Objective 4.1 Tasks and Activities 4.1.1 Study and Analyze the Business Need 4.1.2 Plan the Project 4.1.3 Form the Project Acquisition Strategy 4.1.4 Study and Analyze the Risks 4.1.5 Obtain Project Funding, Staff and Resources 4.1.6 Document the Phase Efforts 4.1.7 Review and Approval to Proceed 4.2 Roles and Responsibilities 4.3 Deliverables 4.3.1 System Boundary Document 4.3.2 Cost Benefit Analysis 4.3.3 Feasibility Study 4.3.4 Risk Management Plan 4.4 Issues for Consideration 4.4.1 ADP Position Sensitivity Analysis 4.4.2 Identification of Sensitive Systems 4.4.3 Project Continuation Decisions 4.5 Phase Review Activity

CHAPTER 5: PLANNING PHASE 5.0 Objective 5.1 Tasks and Activities 5.1.1 Refine Acquisition Strategy in System Boundary Document 5.1.2 Analyze Project Schedule 5.1.3 Create Internal Processes 5.1.4 Staff Project Office 5.1.5 Establish Agreements with Stakeholders 5.1.6 Develop the Project Management Plan 5.1.7 Develop the Systems Engineering Management Plan 5.1.8 Review Feasibility of System Alternatives 5.1.9 Study and Analyze Security Implications 5.1.10 Plan the Solicitation, Selection and Award 5.1.11 Develop the CONOPS 5.1.12 Revise Previous Documentation 5.2 Roles and Responsibilities 5.3 Deliverables 5.3.1 Acquisition Plan 5.3.2 Configuration Management Plan 5.3.3 Quality Assurance Plan 5.3.4 Concept of Operations 5.3.5 System Security Plan 5.3.6 Project Management Plan 5.3.7 Validation and Verification Plan 5.3.8 Systems Engineering Management Plan 5.4 Issues for Consideration 5.4.1 Audit Trails 5.4.2 Access Based on Need to Know 5.5 Phase Review Activity

CHAPTER 6: REQUIREMENTS ANALYSIS PHASE 6.0 Objective 6.1 Tasks and Activities 6.1.1 Analyze and Document Requirements 6.1.2 Develop the Test Criteria and Plans 6.1.3 Develop an Interface Control Document 6.1.4 Review and Assess FOIA/PA Requirements 6.1.5 Conduct Functional Review 6.1.6 Revise Previous Documentation 6.2 Roles and Responsibilities 6.3 Deliverables 6.3.1 Functional Requirements Document 6.3.2 Test and Evaluation Master Plan 6.3.3 Interface Control Document 6.3.4 Privacy Act Notice/Privacy Impact Assessment 6.4 Issues for Consideration 6.5 Phase Review Activity

CHAPTER 7: DESIGN PHASE 7.0 Objective 7.1 Tasks and Activities 7.1.1 Establish the Application Environment 7.1.2 Design the Application 7.1.3 Develop Maintenance Manual 7.1.4 Develop Operations Manual 7.1.5 Conduct Preliminary Design Review 7.1.6 Design Human Performance Support (Training) 7.1.7 Design Conversion/Migration/Transition Strategies 7.1.8 Conduct Security Risk Assessment 7.1.9 Conduct Critical Design Review 7.1.10 Revise Previous Documentation 7.2 Roles and Responsibilities 7.3 Deliverables 7.3.1 Security Risk Assessment 7.3.2 Conversion Plan 7.3.3 System Design Document 7.3.4 Implementation Plan 7.3.5 Maintenance Manual 7.3.6 Operations Manual or System Administration Manual 7.3.7 Training Plan 7.3.8 User Manual 7.4 Issues for Consideration 7.4.1 Project Decision Issues 7.4.2 Security Issues 7.5 Phase Review Activity

CHAPTER 8: DEVELOPMENT PHASE 8.0 Objective 8.1 Tasks and Activities 8.1.1 Code and Test Software 8.1.2 Integrate Software 8.1.3 Conduct Software Qualification Testing 8.1.4 Integrate System 8.1.5 Conduct System Qualification Testing 8.1.6 Install Software 8.1.7 Document Software Acceptance 8.1.8 Revise Previous Documentation 8.2 Roles and Responsibilities 8.3 Deliverables 8.3.1 Contingency Plan 8.3.2 Software Development Document 8.3.3 System (Application) Software 8.3.4 Test Files/Data 8.3.5 Integration Document 8.4 Issues for Consideration 8.5 Phase Review Activity

CHAPTER 9: INTEGRATION AND TEST PHASE 9.0 Objective 9.1 Tasks and Activities 9.1.1 Establish the Test Environment 9.1.2 Conduct Integration Tests 9.1.3 Conduct Subsystem/System Testing 9.1.4 Conduct Security Testing 9.1.5 Conduct Acceptance Testing 9.1.6 Revise Previous Documentation 9.2 Roles and Responsibilities 9.3 Deliverables 9.3.1 Test Analysis Report 9.3.2 Test Analysis Approval Determination 9.3.3 Test Problem Report 9.3.4 IT Systems Security Certification & Accreditation 9.4 Issues for Consideration 9.5 Phase Review Activity

CHAPTER 10: IMPLEMENTATION PHASE 10.0 Objective 10.1 Tasks and Activities 10.1.1 Notify Users of New Implementation 10.1.2 Execute Training Plan 10.1.3 Perform Data Entry or Conversion 10.1.4 Install System 10.1.5 Conduct Post-Implementation Review 10.2 Roles and Responsibilities 10.3 Deliverables 10.3.1 Delivered System 10.3.2 Change Implementation Notice 10.3.3 Version Description Document 10.3.4 Post-Implementation Review 10.4 Issues for Consideration 10.5 Phase Review Activity

CHAPTER 11: OPERATIONS AND MAINTENANCE PHASE 11.0 Objectives 11.1 Tasks and Activities 11.1.1 Identify Systems Operations 11.1.2 Maintain Data/Software Administration 11.1.3 Identify Problem and Modification Process 11.1.4 Maintain System/Software Maintenance 11.1.5 Revise Previous Documentation 11.2 Roles and Responsibilities 11.3 Deliverables 11.3.1 In-Process Review Report 11.3.2 User Satisfaction Review 11.4 Issues for Consideration 11.5 Phase Review Activity

CHAPTER 12: DISPOSITION PHASE 12.0 Objective 12.1 Tasks and Activities 12.1.1 Prepare Disposition Plan 12.1.2 Archive or Transfer Data 12.1.3 Archive or Transfer Software Components 12.1.4 Archive Life Cycle Deliverables 12.1.5 End the System in an Orderly Manner 12.1.6 Dispose of Equipment 12.1.7 Prepare Post-Termination Review Report 12.2 Roles and Responsibilities 12.3 Deliverables 12.3.1 Disposition Plan 12.3.2 Post-Termination Review Report 12.3.3 Archived System 12.4 Issues for Consideration 12.5 Phase Review Activity

CHAPTER 13: ALTERNATIVE SDLC WORK PATTERNS 13.0 Objective 13.1 Standard SDLC Methodology (Full Sequential Work Pattern) 13.2 Alternative Work Patterns 13.3 Work Pattern Descriptions and Exhibits 13.3.1 Reduced Effort (Small Application Development) Work Pattern 13.3.2 Rapid Application Development Work Pattern 13.3.3 Pilot Development Work Pattern 13.3.4 Managed Evolutionary Development Work Pattern 13.3.5 O&M Small-Scale Enhancement Work Pattern 13.3.6 O&M Project Work Pattern 13.3.7 Procurement of Commercial-Off-the-Shelf (COTS) Product

Appendix A - Glossary
Appendix B - Acronyms
Appendix C - Document Content Guidelines/Templates: Appendix C-1 Concept Proposal Appendix C-2 System Boundary Document Appendix C-3 Cost-Benefit Analysis Appendix C-4 Feasibility Study Appendix C-5 Risk Management Plan Appendix C-6 Acquisition Plan Appendix C-7 Configuration Management Plan Appendix C-8 Quality Assurance Plan Appendix C-9 Concept of Operations Appendix C-10 System Security Plan Appendix C-11 Project Management Plan Appendix C-12 Verification and Validation Plan Appendix C-13 Systems Engineering Management Plan Appendix C-14 Functional Requirements Document Appendix C-15 Test and Evaluation Master Plan Appendix C-16 Interface Control Document Appendix C-17 Security Risk Assessment Appendix C-18 Conversion Plan Appendix C-19 System Design Document Appendix C-20 Implementation Plan Appendix C-21 Maintenance Manual Appendix C-22 Operations Manual Appendix C-23 Systems Administration Manual Appendix C-24 Training Plan Appendix C-25 User Manual Appendix C-26 Contingency Plan Appendix C-27 Software Development Document Appendix C-28 Integration Document Appendix C-29 Test Analysis Report Appendix C-30 Test Analysis Approval Determination Appendix C-31 Test Problem Report Appendix C-32 Change Implementation Notice Appendix C-33 Version Description Document Appendix C-34 Post-Implementation Review Appendix C-35 In-Process Review Report Appendix C-36 User Satisfaction Review Report Appendix C-37 Disposition Plan Appendix C-38 Post-Termination Review Report

SDLC Objectives

This guide was developed to disseminate proven practices to system developers, project managers, program/account analysts and system owners/users. The specific objectives include the following:

To reduce the risk of project failure
To consider system and data requirements throughout the entire life of the system
To identify technical and management issues early
To disclose all life cycle costs to guide business decisions
To foster realistic expectations of what the systems will and will not provide
To provide information to better balance programmatic, technical, management, and cost aspects of proposed system development or modification
To encourage periodic evaluations to identify systems that are no longer effective
To measure progress and status for effective corrective action
To support effective resource management and budget planning
To consider meeting current and future business requirements

Key Principles

This guidance document refines traditional information system life cycle management approaches to reflect the principles outlined in the following subsections. These are the foundations for life cycle management.

Life Cycle Management Should be used to Ensure a Structured Approach to Information Systems Development, Maintenance, and Operation

This SDLC describes an overall structured approach to information management. Primary emphasis is placed on the information and systems decisions to be made and the proper timing of decisions. The manual provides a flexible framework for approaching a variety of systems projects. The framework enables system developers, project managers, program/account analysts, and system owners/users to combine activities, processes, and products, as appropriate, and to select the tools and methodologies best suited to the unique needs of each project.

Support the use of an Integrated Product Team

The establishment of an Integrated Product Team (IPT) can aid in the success of a project. An IPT is a multidisciplinary group of people who support the Project Manager in the planning, execution, delivery and implementation of life cycle decisions for the project. The IPT is composed of qualified, empowered individuals from all appropriate functional disciplines that have a stake in the success of the project. Working together in a proactive, open-communication, team-oriented environment can aid in building a successful project and providing decision makers with the necessary information to make the right decisions at the right time.

Each System Project must have a Program Sponsor

To help ensure effective planning, management, and commitment to information systems, each project must have a clearly identified program sponsor. The program sponsor serves in a leadership role, providing guidance to the project team and securing, from senior management, the required reviews and approvals at specific points in the life cycle. An approval from senior management is required after the completion of the first seven of the SDLC phases, annually during the Operations and Maintenance Phase, and six months after the Disposition Phase. Senior management approval authority may be varied based on dollar value, visibility level, congressional interests or a combination of these. The program sponsor is responsible for identifying who will be responsible for formally accepting the delivered system at the end of the Implementation Phase.

A Single Project Manager must be Selected for Each System Project

The Project Manager has responsibility for the success of the project and works through a project team and other supporting organization structures, such as working groups or user groups, to accomplish the objectives of the project. Regardless of organizational affiliation, the Project Manager is accountable and responsible for ensuring that project activities and decisions consider the needs of all organizations that will be affected by the system. The Project Manager develops a project charter to define and clearly identify the lines of authority between and within the agency's executive management, program sponsor (user/customer), and developer for purposes of management and oversight.

A Comprehensive Project Management Plan is Required for Each System Project

The project management plan is a pivotal element in the successful solution of an information management requirement. The project management plan must describe how each life cycle phase will be accomplished to suit the specific characteristics of the project. The project management plan is a vehicle for documenting the project scope, tasks, schedule, allocated resources, and interrelationships with other projects. The plan is used to provide direction to the many activities of the life cycle and must be refined and expanded throughout the life cycle.

Specific Individuals Must be Assigned to Perform Key Roles Throughout the Life Cycle

Certain roles are considered vital to a successful system project, and at least one individual must be designated as responsible for each key role. Assignments may be made on a full- or part-time basis as appropriate. Key roles include program/functional management, quality assurance, security, telecommunications management, data administration, database administration, logistics, financial, systems engineering, test and evaluation, contracts management, and configuration management. For most projects, more than one individual should represent the actual or potential users of the system (that is, program staff) and should be designated by the Program Manager of the program and organization.

Obtaining the Participation of Skilled Individuals is Vital to the Success of the System Project

The skills of the individuals participating in a system project are the single most significant factor for ensuring the success of the project. The SDLC manual is not intended as a substitute for information management skills or experience. While many of the skills required for a system project are discussed in later sections, the required skill combination will vary according to the project. All individuals participating in a system development project are encouraged to obtain assistance from experienced information management professionals.

Documentation of Activity Results and Decisions for Each Phase of the Life Cycle is Essential

Effective communication and coordination of activities throughout the life cycle depend on the complete and accurate documentation of decisions and the events leading up to them. Undocumented or poorly documented events and decisions can cause significant confusion or wasted efforts and can intensify the effect of turnover of project management staff. Activities should not be considered complete, nor decisions made, until there is tangible documentation of the activity or decision. For some large projects, advancement to the next phase cannot commence until the required reviews are completed and approved by senior management.

Data Management is Required Throughout the Life Cycle

Accurate data is critical to support organizational missions. The large volumes of data handled by systems, as well as the increasing trend toward interfacing and sharing data across systems and programs, underscore the importance of data quality. Systems life cycle activities stress the need for clear definition of data and for the design and implementation of automated and manual processes that ensure effective data management.

Each System Project Must Undergo Formal Acceptance

The program sponsor identifies the representative who will be responsible for formally accepting the delivered system at the end of the Implementation Phase. The system is formally accepted by the program sponsor by signing an Implementation Phase Review and Approval Certification along with the developer.

Consultation With Oversight Organizations Aids in the Success of a System Project

A number of oversight bodies, as well as external organizations, have responsibility for ensuring that information systems activities are performed in accordance with standards and that available resources are used effectively. Each project team should work with these organizations, as appropriate, and encourage their participation and support as early as possible in the life cycle to identify and resolve potential issues or sensitivities and thereby avoid major disruptions to the project. Assume all documentation is subject to review by oversight activities.

A System Project may not Proceed Until Resource Availability is Assured

Beginning with the approval of the project, the continuation of a system is contingent on a clear commitment from the program sponsor. This commitment is embodied in the assurance that the necessary resources will be available, not only for the next activity, but as required for the remainder of the life cycle.

CHAPTER 1: INTRODUCTION

1.0 BACKGROUND
1.1 PURPOSE, SCOPE, AND APPLICABILITY
1.1.1 Purpose
1.1.2 Scope
1.1.3 Applicability
1.2 INTRODUCTION TO SYSTEM DEVELOPMENT LIFE CYCLE (SDLC)
1.2.1 Initiation Phase
1.2.2 System Concept Development Phase
1.2.3 Planning Phase
1.2.4 Requirements Analysis Phase
1.2.5 Design Phase
1.2.6 Development Phase
1.2.7 Integration and Test Phase
1.2.8 Implementation Phase
1.2.9 Operations and Maintenance Phase
1.2.10 Disposition Phase
1.3 CONTROLS/ASSUMPTIONS
1.4 DOCUMENTATION

1.0 BACKGROUND

IT companies spend millions of dollars each year on the acquisition, design, development, implementation, and maintenance of information systems vital to mission programs and administrative functions. The need for safe, secure, and reliable system solutions is heightened by the increasing dependence on computer systems and technology to provide services and develop products, administer daily activities, and perform short- and long-term management functions. There is also a need to ensure privacy and security when developing information systems, to establish uniform privacy and protection practices, and to develop acceptable implementation strategies for these practices. These companies need a systematic and uniform methodology for information systems development. Using this SDLC will ensure that systems developed by the Department meet IT mission objectives; are compliant with the current and planned Information Technology Architecture (ITA); and are easy to maintain and cost-effective to enhance. Sound life cycle management practices include planning and evaluation in each phase of the information system life cycle. The appropriate level of planning and evaluation is commensurate with the cost of the system, the stability and maturity of the technology under consideration, how well defined the user requirements are, the level of stability of program and user requirements, and security considerations.

1.1 PURPOSE, SCOPE, AND APPLICABILITY

1.1.1 Purpose

This SDLC methodology establishes procedures, practices, and guidelines governing the initiation, concept development, planning, requirements analysis, design, development, integration and test, implementation, operations and maintenance, and disposition of information systems (IS). It should be used in conjunction with existing policy and guidelines for acquisition and procurement, as these areas are not discussed in the SDLC.

1.1.2 Scope

This methodology should be used for all information systems and applications. It is applicable across all information technology (IT) environments (e.g., mainframe, client/server) and applies to contractually developed as well as in-house developed applications. The specific participants in the life cycle process, and the necessary reviews and approvals, vary from project to project. The guidance provided in this document should be tailored to the individual project based on cost, complexity, and criticality to the agency's mission. See Chapter 13 for Alternative SDLC Work Patterns if a formal SDLC is not feasible. Similarly, the documents called for in the guidance and shown in Appendix C should be tailored based on the scope of the effort and the needs of the decision authorities.

1.1.3 Applicability

This methodology can be applied to all Offices, Boards, Divisions and Bureaus (OBDB) that are responsible for information systems development. All Project Managers and development teams involved in system development projects represent the primary audience for the DJ SDLC, version 2.0.

1.2 INTRODUCTION TO SDLC

The SDLC includes ten phases during which defined IT work products are created or modified. The tenth phase occurs when the system is disposed of and the tasks it performed are either eliminated or transferred to other systems. The tasks and work products for each phase are described in subsequent chapters. Not every project will require that the phases be sequentially executed; however, the phases are interdependent. Depending upon the size and complexity of the project, phases may be combined or may overlap. See Figure 1-1.

Figure 1-1
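For illustration only, the sketch below (Python) represents the ten phases as an ordered work pattern and shows how a project might combine adjacent phases; the combination rule is an invented example, not part of the methodology.

```python
# Illustrative sketch only: the ten SDLC phases as an ordered list, with a
# hypothetical helper that merges adjacent phases (the SDLC allows phases
# to be combined or to overlap; this merging rule is invented).

SDLC_PHASES = [
    "Initiation",
    "System Concept Development",
    "Planning",
    "Requirements Analysis",
    "Design",
    "Development",
    "Integration and Test",
    "Implementation",
    "Operations and Maintenance",
    "Disposition",
]

def tailor(phases, combine=()):
    """Return a work pattern in which the named adjacent phases are merged."""
    result = []
    for phase in phases:
        if result and (result[-1].split(" + ")[-1], phase) in combine:
            result[-1] = result[-1] + " + " + phase  # merge with previous step
        else:
            result.append(phase)
    return result

# A hypothetical small-application pattern that merges two pairs of phases.
pattern = tailor(SDLC_PHASES, combine={
    ("Initiation", "System Concept Development"),
    ("Development", "Integration and Test"),
})
for step in pattern:
    print(step)
```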

The SDLC encompasses ten phases.

1.2.1 Initiation Phase

The initiation of a system (or project) begins when a business need or opportunity is identified. A Project Manager should be appointed to manage the project. This business need is documented in a Concept Proposal. After the Concept Proposal is approved, the System Concept Development Phase begins.

1.2.2 System Concept Development Phase

Once a business need is approved, the approaches for accomplishing the concept are reviewed for feasibility and appropriateness. The Systems Boundary Document identifies the scope of the system and requires Senior Official approval and funding before beginning the Planning Phase.

1.2.3 Planning Phase

The concept is further developed to describe how the business will operate once the approved system is implemented, and to assess how the system will impact employee and customer privacy. To ensure the products and/or services provide the required capability on-time and within budget, project resources, activities, schedules, tools, and reviews are defined. Additionally, security certification and accreditation activities begin with the identification of system security requirements and the completion of a high-level vulnerability assessment.

1.2.4 Requirements Analysis Phase

Functional user requirements are formally defined and delineated in terms of data, system performance, security, and maintainability requirements for the system. All requirements are defined to a level of detail sufficient for systems design to proceed. All requirements need to be measurable and testable and relate to the business need or opportunity identified in the Initiation Phase.

1.2.5 Design Phase

The physical characteristics of the system are designed during this phase. The operating environment is established, major subsystems and their inputs and outputs are defined, and processes are allocated to resources. Everything requiring user input or approval must be documented and reviewed by the user. The physical characteristics of the system are specified and a detailed design is prepared. Subsystems identified during design are used to create a detailed structure of the system. Each subsystem is partitioned into one or more design units or modules. Detailed logic specifications are prepared for each software module.

1.2.6 Development Phase

The detailed specifications produced during the design phase are translated into hardware, communications, and executable software. Software shall be unit tested, integrated, and retested in a systematic manner. Hardware is assembled and tested.

1.2.7 Integration and Test Phase

The various components of the system are integrated and systematically tested. The user tests the system to ensure that the functional requirements, as defined in the functional requirements document, are satisfied by the developed or modified system. Prior to installing and operating the system in a production environment, the system must undergo certification and accreditation activities.

1.2.8 Implementation Phase

The system or system modifications are installed and made operational in a production environment. The phase is initiated after the system has been tested and accepted by the user. This phase continues until the system is operating in production in accordance with the defined user requirements.

1.2.9 Operations and Maintenance Phase

The system operation is ongoing. The system is monitored for continued performance in accordance with user requirements, and needed system modifications are incorporated. The operational system is periodically assessed through In-Process Reviews to determine how the system can be made more efficient and effective. Operations continue as long as the system can be effectively adapted to respond to an organization's needs. When modifications or changes are identified as necessary, the system may reenter the planning phase.

1.2.10 Disposition Phase

The disposition activities ensure the orderly termination of the system and preserve the vital information about the system so that some or all of the information may be reactivated in the future if necessary. Particular emphasis is given to proper preservation of the data processed by the system, so that the data is effectively migrated to another system or archived in accordance with applicable records management regulations and policies, for potential future access.

1.3 CONTROLS/ASSUMPTIONS

This SDLC calls for a series of comprehensive management controls. These include:

Life Cycle Management should be used to ensure a structured approach to information systems development and operation.
Each system project must have an accountable sponsor.
A single project manager must be appointed for each system project.
A comprehensive project management plan is required for each system project.
Data Management and security must be emphasized throughout the Life Cycle.
A system project may not proceed until resource availability is assured.

1.4 DOCUMENTATION

This life cycle methodology specifies which documentation shall be generated during each phase. Some documentation remains unchanged throughout the system's life cycle while other documents evolve continuously during the life cycle. Other documents are revised to reflect the results of analyses performed in later phases. Each document produced is collected and stored in a project file. Care should be taken, however, when processes are automated. Specifically, components are encouraged to

incorporate a long-term retention and access policy for electronic processes. Be aware of legal concerns that implicate effectiveness of or impose restrictions on electronic data or records. Contact your Records Management Office for specific retention requirements and procedures. Recommended documents and their project phase are shown in Table 1.

Table 1

Concept Proposal
System Boundary Document
Cost-Benefit Analysis
Feasibility Study
Risk Management Plan
Acquisition Plan
Configuration Management Plan
Quality Assurance Plan
Concept of Operations
System Security Plan
Project Management Plan
Verification and Validation Plan
Systems Engineering Management Plan
Functional Requirements Document
Test and Evaluation Master Plan
Interface Control Document
Privacy Act Notice/Privacy Impact Assessment
Security Risk Assessment
Conversion Plan
System Design Document
Implementation Plan
Maintenance Manual
Operations Manual (System Administration Manual)
Training Plan
User Manual
Contingency Plan
Software Development Document
System Software
Test Files/Data
Integration Document
Test Analysis Report
Test Analysis Approval Determination
Test Problem Report
IT Systems Security Certification & Accreditation
Delivered System
Change Implementation Notice
Version Description Document
Post-Implementation Review
In-Process Review Report
User Satisfaction Report
Disposition Plan
Post-Termination Review Report

KEY: C = Created, R = Revised, F = Finalized, * = Updated if needed
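As an illustration of how the Table 1 relationships can be represented, the following sketch maps the first three phases to the documents this guide says are created (status code C) in each; the mapping is drawn from the deliverables sections of Chapters 3 through 5 and covers only the "Created" entries.

```python
# Partial, illustrative rendering of Table 1: phase -> documents created
# (C) in that phase, per the deliverables sections of Chapters 3-5.
# Revision/finalization codes (R, F, *) for later phases are omitted.

DOCUMENTS_CREATED = {
    "Initiation": ["Concept Proposal"],
    "System Concept Development": [
        "System Boundary Document",
        "Cost-Benefit Analysis",
        "Feasibility Study",
        "Risk Management Plan",
    ],
    "Planning": [
        "Acquisition Plan",
        "Configuration Management Plan",
        "Quality Assurance Plan",
        "Concept of Operations",
        "System Security Plan",
        "Project Management Plan",
        "Verification and Validation Plan",
        "Systems Engineering Management Plan",
    ],
}

for phase, documents in DOCUMENTS_CREATED.items():
    print(f"{phase} (C): {', '.join(documents)}")
```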

CHAPTER 2: STRATEGIC PLANNING FOR INFORMATION SYSTEMS

2.0 STRATEGIC PLANNING
2.1 INFORMATION TECHNOLOGY INVESTMENT MANAGEMENT (ITIM)
2.2 ENTERPRISE ARCHITECTURE
2.3 PERFORMANCE MEASUREMENT
2.4 BUSINESS PROCESS REENGINEERING
2.5 SYSTEMS SECURITY

2.0 STRATEGIC PLANNING

Strategic planning provides a framework for analyzing where the Department is and where the Department should be in the future. The agency strategic plans required by the Government Performance and Results Act (GPRA) provide the framework for implementing all other parts of this Act, and are the key part of the effort to improve performance of government programs and operations. The U.S. Department of Justice Strategic Plan guides the annual budget and performance planning. It sets the framework for measuring progress and ensuring accountability to the public. Each Bureau's Strategic Plan is mission driven and should include a vision statement which describes the work environment to accomplish the mission. The strategic plan identifies goals, objectives and strategies in support of the bureau's mission and vision. Bureau strategic plans are linked to the overall goals and direction the Attorney General has set for the Department. Strategic planning is not part of the SDLC, but determines what information systems projects get started and/or continue to receive funding.

2.1 INFORMATION TECHNOLOGY INVESTMENT MANAGEMENT (ITIM)

The ITIM process implements the Department's information technology capital planning and investment control process. The ITIM process uses the Select-Control-Evaluate methodology recommended by OMB and GAO guidance to implement the strategic and performance directives of the Clinger-Cohen Act and other statutory provisions affecting information technology investments. The process complements the SDLC process by providing fiscal oversight of system development projects and linking IT investment decisions to strategic goals and objectives.

2.2 ENTERPRISE ARCHITECTURE

The development of information technology architectures is a requirement of the Clinger-Cohen Act. The Department is building an enterprise IT architecture which promotes the effective management and operation of IT investments and services. This enterprise architecture (EA) provides a comprehensive, integrated picture of current capabilities and relationships (i.e., the current architecture), an agreed upon blueprint for the future (i.e., the target architecture), and a strategy for transitioning from the current to the target environment. The EA describes the information needed to carry out these business functions and processes; identifies the system applications

that create or manipulate data to meet business information needs; and documents the underlying technologies (i.e., hardware, software, communications networks, and devices) that enable the generation and flow of information. The EA is an essential tool for taking a strategic approach to planning and managing IT resources and making maximum use of limited IT dollars. It ensures the alignment of IT with the Department's strategic goals so that business needs drive technology rather than the reverse; identifies redundancies, and thus potential cost savings; highlights opportunities for streamlining business processes and information flows; assists in optimizing the interdependencies and interrelationships among the programs and services of the Department's various component organizations as well as with external agencies; ensures a logical and integrated approach to adopting new technologies; promotes adherence to department-wide standards, including those for systems security; and pinpoints and resolves issues of data availability, utility, quality and access. The ITIM policy and guidance uses this architecture as a key criterion for selecting a proposed investment and managing it through the life cycle. The EA processes are specifically aligned with the Select, Control and Evaluate phases of the ITIM and considered throughout the SDLC.

2.3 PERFORMANCE MEASUREMENT

Performance measurement is an essential element in developing effective systems through a strategic management process. The mission, goals, and objectives of the Department are identified in its strategic plan. Strategies are developed to identify how the Department can achieve the goals. For each goal, the Department establishes a set of performance measures. These measures enable the Department to assess how effective each of its projects is in improving Departmental operations. For the Department to make this assessment, the current performance level for each measure (performance level baseline) for the existing systems must be determined. For each project plan, as part of the cost benefit analysis, estimate the performance levels expected to be attained as a result of the planned improvements. As the project's improvements are implemented, actual results are compared with the estimated gains to determine the success of the effort. Further analysis of the results may suggest additional improvement opportunities. Performance measurement, along with evaluation, is the principal method for determining whether identified benefits are realized in the expected time frame.
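The following worked example, with invented figures, illustrates the comparison described above: a baseline level, the level estimated in the cost benefit analysis, and the actual measured result.

```python
# Hypothetical worked example of the baseline/estimate/actual comparison
# described above. All measures and figures are invented for illustration.

measures = [
    # (measure, baseline, estimated, actual)
    ("Average case-processing time (days)", 30.0, 20.0, 22.0),
    ("Records processed per staff-hour",    40.0, 60.0, 65.0),
]

for name, baseline, estimated, actual in measures:
    planned_gain = estimated - baseline   # improvement promised in the plan
    realized_gain = actual - baseline     # improvement actually measured
    realized_pct = 100.0 * realized_gain / planned_gain
    print(f"{name}: {realized_pct:.0f}% of planned improvement realized")
```

Running this prints 80% for the first measure (the project fell short of its estimate) and 125% for the second (the project exceeded it), the kind of result that feeds further analysis of improvement opportunities.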

2.4 BUSINESS PROCESS REENGINEERING

The primary underpinning of any new system development or initiative should be business process reengineering. Business process reengineering (BPR) involves a change in the way an organization conducts its business. BPR is the redesign of the organization, culture, and business processes using technology as an enabler to achieve quantum improvements in cost, time, service, and quality. Information technology is not the driver of BPR. Rather, it is the organization's desire to improve its processes and how the use of technology can enable some of the improvements.

BPR may not necessarily involve the use of technology. There are circumstances when all BPR will entail is an elimination of steps in the process. For BPR to attain large benefits, however, the use of information technology can often be justified. Bureaus or agencies should consider BPR before requesting funding for a new project or system development effort. When BPR is applied to one or more related business processes, an organization can improve its products and services and reduce resource requirements. The results of a successful BPR program are increased productivity and quality improvements. BPR is not just about continuous, incremental and evolutionary productivity enhancements. It also utilizes an approach which suggests scrapping a dysfunctional process and starting from scratch to obtain larger benefits.

2.5 SYSTEMS SECURITY

The Federal Government has become increasingly reliant on IT systems to support day-to-day and critical operations/business transactions. Risks to system and data confidentiality, integrity, and availability can impact an organization's ability to execute its mission or its business strategy. To minimize the impact associated with these risks, federal IT security policy requires all IT systems to be accredited prior to being placed into operation and at least every three years thereafter, or prior to implementation of a significant change. The Department's goal is to define a process which ensures that Department systems are conceived, designed, developed, acquired, implemented, and maintained according to all appropriate federal guidance and are in compliance with the appropriate laws, regulations, OMB circulars, and Department orders. The IT Systems Certification and Accreditation Standard and Implementation Guidelines provides IRM managers with a single source of information for conducting certification and accreditation and provides templates for the Systems Security Plan, Security Risk Assessment, Contingency Plan, and Certification and Accreditation memorandums. The C&A process is compliant with the SDLC and the ITIM process.

CHAPTER 3: INITIATION PHASE

3.0 OBJECTIVE
3.1 TASKS AND ACTIVITIES
3.1.1 Identify the Opportunity to Improve Business Functions
3.1.2 Identify a Project Sponsor
3.1.3 Form a Project Organization
3.1.4 Document the Phase Efforts
3.1.5 Review and Approval to Proceed
3.2 ROLES AND RESPONSIBILITIES
3.3 DELIVERABLES
3.3.1 Concept Proposal
3.4 ISSUES FOR CONSIDERATION
3.5 PHASE REVIEW ACTIVITY

3.0 OBJECTIVE

The Initiation Phase begins when management determines that it is necessary to enhance a business process through the application of information technology. The purposes of the Initiation Phase are to:

Identify and validate an opportunity to improve business accomplishments of the organization or a deficiency related to a business need,
Identify significant assumptions and constraints on solutions to that need, and
Recommend the exploration of alternative concepts and methods to satisfy the need.

IT projects may be initiated as a result of business process improvement activities, changes in business functions, advances in information technology, or may arise from external sources, such as public law, the general public or state/local agencies. The Project Sponsor articulates this need within the organization to initiate the project life cycle. During this phase, a Project Manager is appointed who prepares a Statement of Need or Concept Proposal. When an opportunity to improve business/mission accomplishments or to address a deficiency is identified, the Project Manager documents these opportunities in the Concept Proposal. (See Figure 3-1.)

Figure 3-1

3.1 TASKS AND ACTIVITIES

The following activities are performed as part of the Initiation Phase. The results of these activities are captured in the Concept Proposal. For every IT project, the agency should designate a responsible organization and assign that organization sufficient resources to execute the project.

3.1.1 Identify the Opportunity to Improve Business Functions

Identify why a business process is necessary and what business benefits can be expected by implementing this improvement. A business scenario and context must be established in which a business problem is clearly expressed in purely business terms. Provide background information at a level of detail sufficient to familiarize senior managers with the history, issues and customer service opportunities that can be realized through improvements to business processes with the potential support of IT. This background information must not offer or predetermine any specific automated solution, tool, or product.

3.1.2 Identify a Project Sponsor

The Project Sponsor is the principal authority on matters regarding the expression of business needs, the interpretation of functional requirements language, and the mediation of issues regarding the priority, scope and domain of business requirements.

3.1.3 Form (or appoint) a Project Organization

This activity involves the appointment of a project manager who carries both the responsibility and accountability for project execution. For small efforts, this may only involve assigning a project to a manager within an existing organization that already has an inherent support structure. For new, major projects, a completely new organizational element may be formed, requiring the hiring and reassignment of many technical and business specialists. Each project shall have an individual designated to lead the effort. The individual selected will have appropriate skills, experience, credibility, and availability to lead the project. Clearly defined authority and responsibility must be provided to the Project Manager. The Project Manager will work with Stakeholders to identify the scope of the proposed program, participation of the key organizations, and potential individuals who can participate in the formal reviews of the project. This decision addresses both programmatic and information management-oriented participation as well as technical interests in the project that may be known at this time. In view of the nature and scope of the proposed program, the key individuals and oversight committee members who will become the approval authorities for the project will be identified.

3.1.4 Document the Phase Efforts

The results of the phase efforts are documented in the Concept Proposal.

3.1.5 Review and Approval to Proceed

The approval of the Concept Proposal identifies the end of the Initiation Phase. Approval should be annotated on the Concept Proposal by the Program Sponsor and the Chief Information Officer (CIO).

3.2 ROLES AND RESPONSIBILITIES

Sponsor. The Sponsor is the senior spokesperson for the project, and is responsible for ensuring that the needs and accomplishments within the business area are widely known and understood. The Sponsor is also responsible for ensuring that adequate resources to address their business area needs are made available in a timely manner.
Project Manager. The appointed project manager is charged with leading the efforts to ensure that all business aspects of the process improvement effort are identified in the Concept Proposal. This includes establishing detailed project plans and schedules.

3.3 DELIVERABLES

The following deliverables shall be initiated during the Initiation Phase:

3.3.1 Concept Proposal - This is the need or opportunity to improve business functions. It identifies where strategic goals are not being met or mission performance needs to be improved. Appendix C-1 provides a template for the Concept Proposal.

3.4 ISSUES FOR CONSIDERATION

In this phase, it is important to state the needs or opportunities in business terms. Avoid identifying a specific product or vendor as the solution. The Concept Proposal should not be more than 2-5 pages in length.

3.5 PHASE REVIEW ACTIVITY

At the end of this phase, the Concept Proposal is approved before proceeding to the next phase. The Concept Proposal should convey that this project is a good investment and identify any potential impact on the infrastructure/architecture.

CHAPTER 4: SYSTEM CONCEPT DEVELOPMENT PHASE

4.0 OBJECTIVE
4.1 TASKS AND ACTIVITIES
4.1.1 Study and Analyze the Business Need
4.1.2 Plan the Project
4.1.3 Form the Project Acquisition Strategy
4.1.4 Study and Analyze the Risks
4.1.5 Obtain Project Funding, Staff and Resources
4.1.6 Document the Phase Efforts
4.1.7 Review and Approval to Proceed
4.2 ROLES AND RESPONSIBILITIES
4.3 DELIVERABLES
4.3.1 System Boundary Document
4.3.2 Cost Benefit Analysis
4.3.3 Feasibility Study
4.3.4 Risk Management Plan
4.4 ISSUES FOR CONSIDERATION
4.4.1 ADP Position Sensitivity Analysis
4.4.2 Identification of Sensitive Systems
4.4.3 Project Continuation Decisions
4.5 PHASE REVIEW ACTIVITY

4.0 OBJECTIVE

System Concept Development begins when the Concept Proposal has been formally approved and requires study and analysis that may lead to system development activities. The review and approval of the Concept Proposal begins the formal studies and analysis of the need in the System Concept Development Phase and begins the life cycle of an identifiable project.

4.1 TASKS AND ACTIVITIES

The following activities are performed as part of the System Concept Development Phase. The results of these activities are captured in the four phase documents and their underlying institutional processes and procedures (See Figure 4-1).

4.1.1 Study and Analyze the Business Need

The project team, supplemented by enterprise architecture or other technical experts, if needed, should analyze all feasible technical, business process, and commercial alternatives to meeting the business need. These alternatives should then be analyzed from a life cycle cost perspective. The results of these studies should show a range of feasible alternatives based on life cycle cost, technical capability, and scheduled availability. Typically, these studies should narrow the system technical approaches to only a few potential, desirable solutions that should proceed into the subsequent life cycle phases.

4.1.2 Plan the Project

The project team should develop a high-level (baseline) schedule and cost and performance measures, which are summarized in the System Boundary Document. These high-level estimates are further refined in subsequent phases.

4.1.3 Form the Project Acquisition Strategy

The acquisition strategy should be included in the SBD. The project team should determine the strategies to be used during the remainder of the project concurrently with the development of the CBA and Feasibility Study. Will the work be accomplished with available staff or do contractors need to be hired? Discuss available and projected technologies, such as reuse or Commercial Off-the-Shelf (COTS) products, and potential contract types.

4.1.4 Study and Analyze the Risks

Identify any programmatic or technical risks. The risks associated with further development should also be studied. The results of these assessments should be summarized in the SBD and documented in the Risk Management Plan and CBA.
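One common way to organize the identified risks for the Risk Management Plan is an exposure-scored register, as in the illustrative sketch below; the risks, scores, and scoring scheme are invented examples, not the Appendix C-5 template.

```python
# Generic illustration of a risk register (not the Appendix C-5 template):
# each risk gets a probability (0-1) and an impact score (1-5); their
# product ranks the risks for mitigation planning. All entries invented.

risks = [
    # (description, probability, impact, mitigation)
    ("Key technology immature",       0.4, 5, "Prototype early; keep COTS fallback option"),
    ("Contractor staffing shortfall", 0.3, 3, "Cross-train agency staff"),
    ("Requirements growth",           0.6, 4, "Formal change control from the Planning Phase"),
]

# Highest exposure first, so the plan addresses the biggest risks up front.
for description, probability, impact, mitigation in sorted(
        risks, key=lambda r: r[1] * r[2], reverse=True):
    exposure = probability * impact
    print(f"{exposure:4.1f}  {description} -> {mitigation}")
```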

4.1.5 Obtain Project Funding, Staff and Resources

Estimate, justify, submit requests for, and obtain resources to execute the project in the format of the Capital Asset Plan and Justification, Exhibit 300. An example and instructions are included in the ITIM policy and guidance.

4.1.6 Document the Phase Efforts

The results of the phase efforts are documented in the System Boundary Document, Cost Benefit Analysis, Feasibility Study, and Risk Management Plan.

4.1.7 Review and Approval to Proceed

The results of the phase efforts are presented to project stakeholders and decision makers together with a recommendation to (1) proceed into the next life-cycle phase, (2) continue additional conceptual phase activities, or (3) terminate the project. The emphasis of the review should be on (1) the successful accomplishment of the phase objectives, (2) the plans for the next life-cycle phase, and (3) the risks associated with moving into the next life-cycle phase. The review also addresses the availability of resources to execute the subsequent life-cycle phases. The results of the review should be documented reflecting the decision on the recommended action.

4.2 ROLES AND RESPONSIBILITIES

Sponsor. The sponsor should provide direction and sufficient study resources to commence the System Concept Development Phase.
Project Manager. The appointed project manager is charged with leading the efforts to accomplish the System Concept Development Phase tasks discussed above. The Project Manager is also responsible for reviewing the deliverables for accuracy, approving deliverables and providing status reports to management.
Component Chief Information Officer (CIO) and Executive Review Board (ERB). The CIO/ERB approve the Systems Boundary Document. Approval allows the project to enter the Planning Phase.

4.3 DELIVERABLES

The following deliverables shall be initiated during the System Concept Development Phase:

4.3.1 System Boundary Document - Identifies the scope of a system (or capability). It should contain the high level requirements, benefits, business assumptions, and program costs and schedules. It records management decisions on the envisioned system early in its development and provides guidance on its achievement. Appendix C-2 provides a template for the Systems Boundary Document.

4.3.2 Cost-Benefit Analysis - Provides cost or benefit information for analyzing and evaluating alternative solutions to a problem and for making decisions about initiating, as well as continuing, the development of information technology systems. The analysis should clearly indicate the cost to conform to the architectural standards in the Technical Reference Model (TRM). Appendix C-3 provides a template for the Cost-Benefit Analysis.
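As a minimal illustration of the computation underlying a cost-benefit analysis, the sketch below compares discounted costs and benefits for one alternative. The figures and the 7% discount rate are invented, and the Appendix C-3 template governs the actual analysis content.

```python
# Minimal discounted cost-benefit sketch (illustrative only; all figures
# and the discount rate are invented, not taken from this guide).

DISCOUNT_RATE = 0.07

def present_value(amounts):
    """Discount a list of yearly amounts (year 0 first) to present value."""
    return sum(a / (1 + DISCOUNT_RATE) ** year for year, a in enumerate(amounts))

# Alternative A: higher up-front cost, larger recurring benefit (5 years).
costs_a    = [500_000, 100_000, 100_000, 100_000, 100_000]
benefits_a = [      0, 250_000, 250_000, 250_000, 250_000]

npv = present_value(benefits_a) - present_value(costs_a)
bcr = present_value(benefits_a) / present_value(costs_a)
print(f"NPV: ${npv:,.0f}   benefit-cost ratio: {bcr:.2f}")
```

Repeating the computation for each feasible alternative gives the decision makers a common basis for comparing solutions, which is the role the CBA plays in this phase.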

4.3.3 Feasibility Study - Provides an overview of a business requirement or opportunity and determines if feasible solutions exist before full life-cycle resources are committed. Appendix C-4 provides a template for the Feasibility Study.

4.3.4 Risk Management Plan - Identifies project risks and specifies the plans to reduce or mitigate the risks. Appendix C-5 provides a template for the Risk Management Plan.

4.4 ISSUES FOR CONSIDERATION

After the SBD is approved and a recommendation is accepted by the program and/or executive management, the system project planning begins. A number of project continuation and project approach decisions are made by the Project Manager.

4.4.1 ADP Position Sensitivity Analysis

All projects must ensure that all personnel are cleared to the appropriate level before performing work on sensitive systems. Automated Data Processing (ADP) position designation analysis applies to all personnel, including contract support personnel, who are nominated to fill an ADP position. ADP positions are those that require access to ADP systems or require work on management, design, development, operation, or maintenance of automated information systems. The sensitivity analysis should be conducted only to determine an individual's eligibility or continued eligibility for access to ADP systems or to unclassified sensitive information. Such an analysis is not to be construed as the sole determination of eligibility.

4.4.2 Identification of Sensitive Systems

Public Law 100-235, the Computer Security Act of 1987, requires Federal agencies to identify systems that contain sensitive information. In general, a sensitive system is a computer system that processes, stores, or transmits sensitive-but-unclassified (SBU) data. SBU data is any information the loss, misuse, or unauthorized access to, or modification of which could adversely affect the national interest, the conduct of programs, or the privacy to which individuals are entitled under the Privacy Act. Guidelines for

the identification of sensitive systems can be found with the IMSS. These procedures will help determine the sensitivity level of the data that will be processed, stored, and transmitted by the new or changed system.

4.4.3 Project Continuation Decisions

The feasibility study and CBA confirm that the defined information management concept is significant enough to warrant an IT project with life-cycle management activities. The feasibility study should confirm that the information management need or opportunity is beyond the capabilities of existing systems and that developing a new system is a promising approach. The CBA confirms that the projected benefits of the proposed approach justify the projected resources required. The funding, personnel, and other resources shall be made available to proceed with the Planning Phase.

4.5 PHASE REVIEW ACTIVITY

The System Concept Development Review shall be performed at the end of this phase. The review ensures that the goals and objectives of the system are identified and that the feasibility of the system is established. Products of the System Concept Development Phase are reviewed, including the budget, risk, and user requirements. This review is organized, planned, and led by the Program Manager and/or a representative.

CHAPTER 5: PLANNING PHASE

5.0 OBJECTIVE
5.1 TASKS AND ACTIVITIES
5.1.1 Refine Acquisition Strategy in System Boundary Document
5.1.2 Analyze Project Schedule
5.1.3 Create Internal Processes
5.1.4 Staff Project Office
5.1.5 Establish Agreements with Stakeholders
5.1.6 Develop the Project Management Plan
5.1.7 Develop the Systems Engineering Management Plan
5.1.8 Review Feasibility of System Alternatives
5.1.9 Study and Analyze Security Implications
5.1.10 Plan the Solicitation, Selection and Award
5.1.11 Develop the CONOPS
5.1.12 Revise Previous Documentation
5.2 ROLES AND RESPONSIBILITIES
5.3 DELIVERABLES
5.3.1 Acquisition Plan
5.3.2 Configuration Management Plan
5.3.3 Quality Assurance Plan
5.3.4 Concept of Operations
5.3.5 System Security Plan
5.3.6 Project Management Plan
5.3.7 Validation and Verification Plan
5.3.8 Systems Engineering Management Plan
5.4 ISSUES FOR CONSIDERATION
5.4.1 Audit Trails
5.4.2 Access Based on Need to Know
5.5 PHASE REVIEW ACTIVITY

5.0 OBJECTIVE

Many of the plans essential to the success of the entire project are created in this phase; the created plans are then reviewed and updated throughout the remaining SDLC phases. In the Planning Phase, the concept is further developed to describe how the business will operate once the approved system is implemented and to assess how the system will impact employee and customer privacy. To ensure the products and/or services provide the required capability on-time and within budget, project resources, activities, schedules, tools, and reviews are defined. Additionally, security certification and accreditation activities begin with identification of system security requirements and the completion of a high-level vulnerability assessment.

5.1 TASKS AND ACTIVITIES

The following tasks are performed as part of the Planning Phase. The results of these activities are captured in various project plans and solicitation documents.

5.1.1 Refine Acquisition Strategy in System Boundary Document

Refine the role of system development contractors during the subsequent phases. For example, one strategy option would include active participation of system contractors in the Requirements Analysis Phase. In this case, the Planning Phase must include complete planning, solicitation preparation, and source selection of the participating contractors (awarding the actual contract may be the first activity of the next phase). If contractors will be used to complete the required documents, up-front acquisition planning is essential.

5.1.2 Analyze Project Schedule

Analyze and refine the project schedule, taking into account risks and resource availability. Develop a detailed schedule for the Requirements Analysis Phase and subsequent phases.
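As an illustration of the kind of check a schedule analysis performs, the sketch below computes earliest finish times from task dependencies (a simple forward pass); the tasks and durations are invented.

```python
# Illustrative schedule sketch (invented tasks and durations): a forward
# pass over task dependencies to find earliest finish times, the kind of
# check performed when refining the project schedule.

tasks = {
    # task: (duration_weeks, prerequisites)
    "Staff project office": (4, []),
    "Develop PMP":          (6, ["Staff project office"]),
    "Develop SEMP":         (5, ["Staff project office"]),
    "Plan solicitation":    (8, ["Develop PMP"]),
    "Develop CONOPS":       (6, ["Develop PMP", "Develop SEMP"]),
}

finish = {}

def earliest_finish(task):
    """Earliest week this task can finish, given its prerequisites."""
    if task not in finish:
        duration, prereqs = tasks[task]
        finish[task] = duration + max((earliest_finish(p) for p in prereqs), default=0)
    return finish[task]

for task in tasks:
    print(f"{task}: finishes week {earliest_finish(task)}")
print("Earliest phase finish: week", max(finish.values()))
```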

5.1.3 Create Internal Processes

Create, gather, adapt, and/or adopt the internal management, engineering, business management, and contract management processes that will be used by the project office for all subsequent life-cycle phases. This could result in the establishment of teams or working groups for specific tasks (e.g., quality assurance, configuration management, change control). Plan, articulate, and gain approval for the resulting processes.

5.1.4 Staff Project Office

Further staff the project office with needed skills across the broad range of technical and business disciplines. Select Technical Review Board members and document roles and responsibilities. If needed, solicit and award support contracts to provide needed non-personal services that are not available through agency resources.

5.1.5 Establish Agreements with Stakeholders

Establish relationships and agreements with internal and external organizations that will be involved with the project. These organizations may include agency and oversight offices, agency personnel offices, agency finance offices, internal and external audit organizations, and agency resource providers (people, space, office equipment, communications, etc.).

5.1.6 Develop the Project Management Plan

Plan, articulate and gain approval of the strategy to execute the management aspects of the project (Project Management Plan). Develop a detailed project work breakdown structure.
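A work breakdown structure can be pictured as a numbered tree of project elements; the sketch below prints one, with invented elements for illustration.

```python
# Illustrative work breakdown structure sketch (elements invented): a
# nested dictionary walked depth-first to print indented WBS levels.

wbs = {
    "1 Project Management": {
        "1.1 Planning and control": {},
        "1.2 Risk management": {},
    },
    "2 System Engineering": {
        "2.1 Requirements definition": {},
        "2.2 Design": {},
    },
    "3 Acquisition": {
        "3.1 Solicitation": {},
        "3.2 Source selection": {},
    },
}

def walk(node, depth=0):
    """Print each WBS element, indented by its level in the tree."""
    for name, children in node.items():
        print("  " * depth + name)
        walk(children, depth + 1)

walk(wbs)
```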

5.1.7 Develop the Systems Engineering Management Plan

Plan, articulate, and gain approval of the strategy to execute the technical management aspects of the project (SEMP). Develop a detailed system work breakdown structure.

5.1.8 Review Feasibility of System Alternatives

Review and validate the feasibility of the system alternatives developed during the previous phase (CBA, Feasibility Study). Confirm the continued validity of the need (SBD).

5.1.9 Study and Analyze Security Implications

Study and analyze the security implications of the technical alternatives and ensure the alternatives address all aspects or constraints imposed by security requirements (System Security Plan).

5.1.10 Plan the Solicitation, Selection and Award

During this phase or subsequent phases, as required by the Federal Acquisition Regulation (FAR), plan the solicitation, selection and award of contracted efforts based on the selected strategies in the SBD. Obtain approvals to contract from appropriate authorities (Acquisition Plan). As appropriate, execute the solicitation and selection of support and system contractors for the subsequent phases.

5.1.11 Develop the CONOPS

Based on the system alternatives and with inputs from the end-user community, develop the concept of how the system will be used, operated, and maintained. This is the Concept of Operations (CONOPS).

5.1.12 Revise Previous Documentation

Review previous phase documents and update them if necessary.

5.2 ROLES AND RESPONSIBILITIES

Project Manager. The project manager is responsible and accountable for the successful execution of the Planning Phase. The project manager is responsible for leading the team that accomplishes the tasks shown above. The project manager is also responsible for reviewing deliverables for accuracy, approving deliverables, and providing status reports to management.

Project Team. The project team members (regardless of the organization of permanent assignment) are responsible for accomplishing assigned tasks as directed by the project manager.

Contracting Officer. The contracting officer is responsible and accountable for the procurement activities and signs contract awards.

Oversight Activities. Agency oversight activities, including the IRM office, provide advice and counsel to the project manager on the conduct and requirements of the planning effort. Additionally, oversight activities provide information, judgements, and recommendations to the agency decision makers during project reviews and in support of project decision milestones.

Chief Information Officer/Executive Review Board. An individual at an appropriate level within the agency should be designated as the project decision authority (this may or may not be the same individual designated as the sponsor in the previous phase). This individual should be charged with assessing: (1) the completeness of the Planning Phase activities, (2) the robustness of the plans for the next life-cycle phase, (3) the availability of resources to execute the next phase, and (4) the acceptability of the acquisition risk of entering the next phase. For applicable projects, this assessment also includes the readiness to award any major contracting efforts needed to execute the next phase. During the end-of-phase review process, the decision maker may (1) direct the project to move forward into the next life-cycle phase (including awarding contracts), (2) direct the project to remain in the Planning Phase pending completion of delayed activities or additional risk reduction efforts, or (3) terminate the project.

5.3 DELIVERABLES

5.3.1 Acquisition Plan

This document shows how all government human resources, contractor support services, hardware, software, and telecommunications capabilities are acquired during the life of the project. The plan is developed to help ensure that needed resources can be obtained and are available when needed. An outline is provided in Appendix C-6 detailing the types of information that should be included in the Acquisition Plan.

5.3.2 Configuration Management Plan

The CM Plan describes the process that will be used to identify, manage, control, and audit the project's configuration. The plan should also define the configuration management structure, roles, and responsibilities to be used in executing these processes. Appendix C-7 provides a template for the Configuration Management Plan.

5.3.3 Quality Assurance Plan

The QA Plan documents how the project will ensure that the delivered products satisfy contractual agreements, meet or exceed quality standards, and comply with the approved SDLC processes. Appendix C-8 provides a template for the Quality Assurance Plan.

5.3.4 Concept of Operations

The CONOPS is a high-level requirements document that provides a mechanism for users to describe their expectations of the system. Information that should be included in the CONOPS document is shown in Appendix C-9.

5.3.5 System Security Plan

A formal plan detailing the types of computer security required for the new system based on the type of information being processed and the degree of sensitivity. Usually, those systems that contain personal information will be more closely safeguarded than most. See also NIST Special Publication 800-18, Guide for Developing Security Plans for Information Technology Systems, November 1998, at http://csrc.nist.gov/publications/nistpubs/index.html. An outline is provided in Appendix C-10 detailing the information that is included in the System Security Plan.

5.3.6 Project Management Plan

This plan should be prepared for all projects, regardless of size or scope. It documents the project scope, tasks, schedule, allocated resources, and interrelationships with other projects. The plan provides details on the functional units involved, required job tasks, cost and schedule performance measurement, and milestone and review scheduling. Revisions to the Project Management Plan occur at the end of each phase and as information becomes available. The Project Management Plan should address the management oversight activities of the project. See Appendix C-11 for the Project Management Plan outline.

5.3.7 Validation and Verification Plan

The Validation and Verification Plan describes the testing strategies that will be used throughout the life-cycle phases. This plan should include descriptions of contractor, government, and appropriate independent assessments required by the project. Appendix C-12 provides a template for the Validation and Verification Plan.

5.3.8 Systems Engineering Management Plan

The SEMP describes the systems engineering process to be applied to the project, assigns specific organizational responsibilities for the technical effort, and references the technical processes to be applied to the effort. Information that should be included in the SEMP is shown in Appendix C-13.

5.4 ISSUES FOR CONSIDERATION

5.4.1 Audit Trails

Audit trails capable of detecting security violations, performance problems, and flaws in applications should be specified. Include the ability to track activity from the time of logon, by user ID and location of the equipment, until logoff. Identify any events that are to be maintained regarding the operating system, application, and user activity.
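For illustration only, an audit trail entry satisfying the tracking requirement above might capture at least the fields sketched below; the field names and file format are hypothetical choices, not requirements of this handbook.

    # Illustrative sketch only; field names are hypothetical, not mandated by this handbook.
    import datetime
    import json

    def audit_record(user_id, terminal_id, event, detail=""):
        """Build one audit trail entry covering who, where, what, and when."""
        return {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user_id": user_id,          # identifies the user from logon to logoff
            "terminal_id": terminal_id,  # location of the equipment used
            "event": event,              # e.g., LOGON, LOGOFF, RECORD_READ, RECORD_UPDATE
            "detail": detail,
        }

    def append_audit(path, record):
        """Append-only write; the trail itself must be protected from modification."""
        with open(path, "a") as trail:
            trail.write(json.dumps(record) + "\n")

    append_audit("audit.log", audit_record("jdoe", "TERM-12", "LOGON"))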

5.4.2 Access Based on Need to Know

Prior to an individual being granted access to the system, the program manager's office should determine each individual's need to know and should permit access to only those areas necessary to allow the individual to adequately perform his or her job.
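A minimal sketch of such an access check follows, assuming a simple deny-by-default mapping of users to granted areas; all user and area names are hypothetical.

    # Illustrative only: access is permitted solely to areas granted on a need-to-know basis.
    GRANTED_AREAS = {
        "jdoe": {"payroll_entry"},                      # hypothetical user/area names
        "asmith": {"payroll_entry", "payroll_reports"},
    }

    def may_access(user_id, area):
        """Deny by default; permit only areas the program manager's office has granted."""
        return area in GRANTED_AREAS.get(user_id, set())

    assert may_access("asmith", "payroll_reports")
    assert not may_access("jdoe", "payroll_reports")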

5.5 PHASE REVIEW ACTIVITY

Upon completion of all Planning Phase tasks and receipt of resources for the next phase, the Project Manager, together with the project team, should prepare and present a project status review for the decision maker and project stakeholders. The review should address: (1) Planning Phase activities status, (2) planning status for all subsequent life-cycle phases (with significant detail on the next phase, to include the status of pending contract actions), (3) resource availability status, and (4) acquisition risk assessments of subsequent life-cycle phases given the planned acquisition strategy.

CHAPTER 6: REQUIREMENTS ANALYSIS PHASE

6.0 OBJECTIVE
6.1 TASKS AND ACTIVITIES
6.1.1 Analyze and Document Requirements
6.1.2 Develop the Test Criteria and Plans
6.1.3 Develop an Interface Control Document
6.1.4 Review and Assess FOIA/PA Requirements
6.1.5 Conduct Functional Review
6.1.6 Revise Previous Documentation
6.2 ROLES AND RESPONSIBILITIES
6.3 DELIVERABLES
6.3.1 Functional Requirements Document
6.3.2 Test and Evaluation Master Plan
6.3.3 Interface Control Document
6.3.4 Privacy Act Notice/Privacy Impact Assessment
6.4 ISSUES FOR CONSIDERATION
6.5 PHASE REVIEW ACTIVITY

6.0 OBJECTIVE

The Requirements Analysis Phase will begin when the previous phase documentation has been approved or by management direction. Documentation related to user requirements from the Planning Phase shall be used as the basis for further user needs analysis and the development of detailed user requirements. The analysis may reveal new insights into the overall information systems requirements, and, in such instances, all deliverables should be revised to reflect this analysis. During the Requirements Analysis Phase, the system shall be defined in more detail with regard to system inputs, processes, outputs, and interfaces. This definition process occurs at the functional level. The system shall be described in terms of the functions to be performed, not in terms of computer programs, files, and data streams. The emphasis in this phase is on determining what functions must be performed rather than how to perform those functions.

6.1 TASKS AND ACTIVITIES

The following tasks are performed during the Requirements Analysis Phase. The tasks and activities actually performed depend on the nature of the project. Guidelines for selection and inclusion of tasks for the Requirements Analysis Phase may be found in Chapter 13, Alternative SDLC Work Patterns.

6.1.1 Analyze and Document Requirements

First, consolidate and affirm the business needs. Analyze the intended use of the system and specify the functional and data requirements. Connect the functional requirements to the data requirements. Define functional and system requirements that are not easily expressed in data and process models. Refine the high-level architecture and logical design to support the system and functional requirements.

A logical model is constructed that describes the fundamental processes and data needed to support the desired business functionality. This logical model will show how processes interact and how processes create and use data. These processes will be derived from the activity descriptions provided in the System Boundary Document. Functions and entity types contained in the logical model are extended and refined from those provided in the Concept Development Phase. End users and business area experts will evaluate all identified processes and data structures to ensure accuracy, logical consistency, and completeness. An analysis of business activities and data structures is performed to produce entity-relationship diagrams, process hierarchy diagrams, process dependency diagrams, and associated documentation. An interaction analysis is performed to define the interaction between the business activities and business data. This analysis produces process logic and action diagrams, definitions of the business algorithms, entity life-cycle diagrams, and entity state change matrices. A detailed analysis of the current technical architecture, application software, and data is conducted to ensure that limitations or unique requirements have not been overlooked. Include all possible requirements, including those for:

- functional and capability specifications, including performance, physical characteristics, and environmental conditions under which the software item is to perform;
- interfaces external to the software item;
- qualification requirements;
- safety specifications, including those related to methods of operation and maintenance, environmental influences, and personnel injury;
- security specifications, including those related to compromise of sensitive information;
- human-factors engineering (ergonomics), including those related to manual operations, human-equipment interactions, constraints on personnel, and areas needing concentrated human attention that are sensitive to human errors and training;
- data definition and database requirements;
- installation and acceptance requirements of the delivered software product at the operation and maintenance site(s);
- user documentation;
- user operation and execution requirements; and
- user maintenance requirements.
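One hypothetical way to keep the documented requirements analyzable and traceable is sketched below; the record fields and identifiers are illustrative only and are not mandated by the FRD template.

    # Illustrative requirement record; categories mirror the checklist above.
    from dataclasses import dataclass

    @dataclass
    class Requirement:
        req_id: str     # unique identifier, e.g. "FR-012"
        category: str   # e.g. "functional", "interface", "security", "human-factors"
        statement: str  # the requirement itself, stated so that it is testable
        source: str     # traceability to its origin (SBD activity, CONOPS paragraph, user)

    reqs = [
        Requirement("FR-001", "functional", "The system shall record each case decision.", "SBD 3.2"),
        Requirement("SR-001", "security", "The system shall audit all logons.", "CONOPS 4.1"),
    ]
    by_category = {}
    for r in reqs:
        by_category.setdefault(r.category, []).append(r.req_id)
    print(by_category)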

6.1.2 Develop the Test Criteria and Plans

Establish the test criteria and begin test planning. Include all areas where testing will take place and who is responsible for the testing. Identify the testing environment, what tests will be performed, test procedures, and traceability back to the requirements. Describe what will be tested in terms of the data or information. If individual modules are being tested separately, this needs to be stated in the Master Plan. Smaller plans may be needed for specialized testing, but they should all be referenced in the Master Plan.
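The following sketch illustrates tying planned tests back to requirements, as called for above; the identifiers and the coverage check are hypothetical, not a prescribed format for the Master Plan.

    # Illustrative: each planned test names the requirement it verifies,
    # so the Master Plan can show traceability back to the FRD.
    PLANNED_TESTS = [
        {"test_id": "T-101", "verifies": "FR-001", "environment": "integration lab",
         "procedure": "Enter a case decision; confirm it is stored and retrievable."},
        {"test_id": "T-102", "verifies": "SR-001", "environment": "integration lab",
         "procedure": "Log on as a test user; confirm an audit entry is written."},
    ]

    def untested(requirement_ids):
        """Report requirements with no planned test, a gap in the test criteria."""
        covered = {t["verifies"] for t in PLANNED_TESTS}
        return [r for r in requirement_ids if r not in covered]

    print(untested(["FR-001", "FR-002", "SR-001"]))  # -> ['FR-002']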

6.1.3 Develop an Interface Control Document

The project team responsible for the development of this system needs to identify the other systems (if any) with which this system will interface. Identify any interfaces and the exchange of data or functionality that occurs. All areas that connect need to be documented for security as well as information flow purposes.

6.1.4 Review and Assess FOIA/PA Requirements

The FOIA/PA describes the process and procedures for compliance with personal identifier information. A Records Management representative will determine whether the planned system constitutes a Privacy Act System of Records. A system of records notice must be published for each new system of records that is established or existing system of records that is revised. If needed, a Privacy Act Notice for the Federal Register will be prepared. The collection, use, maintenance, and dissemination of information on individuals by any Department component require a thorough analysis of both legal and privacy policy issues. Whether a system is automated or manual, privacy protections should be integrated into the development of the system. To ensure that the Department properly addresses the privacy concerns of individuals as systems are developed, Departmental policy mandates that components develop and utilize the Privacy Impact Assessment (PIA) process.

Federal regulations require that all records no longer needed for the conduct of the regular business of the agency be disposed of, retired, or preserved in a manner consistent with official Records Disposition Schedules. The decisions concerning the disposition criteria, including when and how records are to be disposed, and the coordination with the Records Management representatives to prepare the Records Disposition Schedule for the proposed system, shall be the responsibilities of the Project Manager.

6.1.5 Conduct Functional Review

The Functional and Data Requirements Review is conducted in the Requirements Analysis Phase by the Technical Review Board. This is where the functional requirements identified in the FRD are reviewed to see if they are sufficiently detailed and are testable. It also provides the Project Manager with the opportunity to ensure a complete understanding of the requirements and that the documented requirements can support a detailed design of the proposed system.

6.1.6 Revise Previous Documentation

Review and update previous phase documentation if necessary before moving to the next phase.

6.2 ROLES AND RESPONSIBILITIES

Project Manager. The project manager is responsible and accountable for the successful execution of the Requirements Analysis Phase. The project manager is responsible for leading the team that accomplishes the tasks shown above. The Project Manager is also responsible for reviewing deliverables for accuracy, approving deliverables, and providing status reports to management.

Technical Review Board. A formally established board that examines the functional requirements documented in the FRD for accuracy, completeness, clarity, attainability, and traceability to the high-level requirements identified in the Concept of Operations.

Project Team. The project team members (regardless of the organization of permanent assignment) are responsible for accomplishing assigned tasks as directed by the project manager.

Contracting Officer. The contracting officer is responsible and accountable for the procurement activities and signs contract awards.

CIO/ERB. Agency oversight activities, including the Executive Review Board office, provide advice and counsel to the project manager on the conduct and requirements of the Requirements Analysis Phase effort. Additionally, oversight activities provide information, judgements, and recommendations to the agency decision makers during project reviews and in support of project decision milestones.

6.3 DELIVERABLES

6.3.1 Functional Requirements Document

Serves as the foundation for system design and development; captures user requirements to be implemented in a new or enhanced system. The system's subject matter experts document these requirements in the requirements traceability matrix, which shows the mapping of each detailed functional requirement to its source. This is a complete, user-oriented set of functional and data requirements for the system, which must be defined, analyzed, and documented to ensure that user and system requirements have been collected and documented. All requirements must include considerations for capacity and growth. Where feasible, I-CASE tools should be used to assist in the analysis, definition, and documentation. The requirements document should include, but is not limited to, Records and Privacy Act requirements, electronic records management, the records disposition schedule, and component-unique requirements. Consideration must also be given to persons with disabilities as required by the Rehabilitation Act, 29 U.S.C. Sec. 794d (West Supp. 1999). Appendix C-14 provides a template for the Functional Requirements Document.

6.3.2 Test and Evaluation Master Plan

Ensures that all aspects of the system are adequately tested and can be implemented; documents the scope, content, methodology, sequence, management of, and responsibilities for test activities. Unit, integration, and independent acceptance testing activities are performed during the Development Phase. Unit and integration tests are performed under the direction of the project manager. Independent acceptance testing is performed independently from the developing team and is coordinated with the Quality Assurance (QA) office. Acceptance tests will be performed in a test environment that duplicates the production environment as much as possible. They will ensure that the requirements are defined in a manner that is verifiable. They will support the traceability of the requirements from the source documentation to the design documentation to the test documentation. They will also verify the proper implementation of the functional requirements. Appendix C-15 provides a template for the Test and Evaluation Master Plan.

The types of test activities discussed in the subsequent sections are identified more specifically in the Integration and Test Phase of the life cycle and are included in the test plan and test analysis report:

- Unit/Module Testing
- Subsystem Integration Testing
- Independent Security Testing
- Functional Qualification Testing
- User Acceptance Testing
- Beta Testing

6.3.3 Interface Control Document

The Interface Control Document (ICD) provides an outline for use in the specification of requirements imposed on one or more systems, subsystems, configuration items, or other system components to achieve one or more interfaces among these entities. Overall, an ICD can cover requirements for any number of interfaces between any number of systems. Appendix C-16 provides a template for the Interface Control Document.
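For illustration, one hypothetical way to state an ICD-level interface precisely enough to build and test against is sketched below; the systems, fields, and validation rule are invented examples, and the actual ICD content is governed by the Appendix C-16 template.

    # Illustrative interface specification for one exchange between two systems.
    # System names, fields, and formats are hypothetical.
    CASE_EXPORT_INTERFACE = {
        "source_system": "Case Tracking System",
        "target_system": "Reporting System",
        "transfer": "nightly batch file",
        "record_fields": [
            ("case_id", "string, 10 chars, required"),
            ("decision_date", "ISO 8601 date, required"),
            ("status_code", "one of OPEN, CLOSED, APPEALED"),
        ],
    }

    def validate_record(record):
        """Reject records that do not satisfy the documented interface."""
        required = {"case_id", "decision_date", "status_code"}
        missing = required - record.keys()
        if missing:
            raise ValueError(f"record violates interface: missing {sorted(missing)}")
        if record["status_code"] not in {"OPEN", "CLOSED", "APPEALED"}:
            raise ValueError("record violates interface: bad status_code")

    validate_record({"case_id": "C-00042", "decision_date": "2024-01-05", "status_code": "OPEN"})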

6.3.4 Privacy Act Notice/Privacy Impact Assessment

For any system that has been determined to be an official System of Records (in terms of the criteria established by the Privacy Act (PA)), a special System of Records Notice shall be published in the Federal Register. This Notice identifies the purpose of the system; describes its routine use and what types of information and data are contained in its records; describes where and how the records are located; and identifies who the System Manager is. While the Records Management Representatives are responsible for determining if a system is a PA System of Records, it is the Project Manager's responsibility to prepare the actual Notice for publication in the Federal Register. As with the Records Disposition Schedule, however, it is the Project Manager's responsibility to coordinate with and assist the System Proponent in preparing the PA Notice. The System of Records Notice shall be a required deliverable for the Requirements Analysis Phase of system development.

The Privacy Impact Assessment is also a deliverable in this phase. This is a written evaluation of the impact that the implementation of the proposed system would have on privacy. Guidance for preparing a privacy impact assessment can be found at http://10.173.2.12/jmd/irm/imss/itsecurity/itsecurityhome.html.

6.4 ISSUES FOR CONSIDERATION

In the Requirements Analysis Phase, it is important to get everyone involved with the project to discuss and document their requirements. A baseline is important in order to begin the next phase. The requirements from the FRD may become part of a solicitation in the Acquisition Plan.

6.5 PHASE REVIEW ACTIVITY

Upon completion of all Requirements Analysis Phase tasks and receipt of resources for the next phase, the Project Manager, together with the project team, should prepare and present a project status review for the decision maker and project stakeholders. The review should address: (1) Requirements Analysis Phase activities status, (2) planning status for all subsequent life-cycle phases (with significant detail on the next phase, to include the status of pending contract actions), (3) resource availability status, and (4) acquisition risk assessments of subsequent life-cycle phases given the planned acquisition strategy.

CHAPTER 7: DESIGN PHASE

7.0 OBJECTIVE
7.1 TASKS AND ACTIVITIES
7.1.1 Establish the Application Environment
7.1.2 Design the Application
7.1.3 Develop Maintenance Manual
7.1.4 Develop Operations Manual
7.1.5 Conduct Preliminary Design Review
7.1.6 Design Human Performance Support (Training)
7.1.7 Design Conversion/Migration/Transition Strategies
7.1.8 Conduct Security Risk Assessment
7.1.9 Conduct Critical Design Review
7.1.10 Revise Previous Documentation
7.2 ROLES AND RESPONSIBILITIES
7.3 DELIVERABLES
7.3.1 Security Risk Assessment
7.3.2 Conversion Plan
7.3.3 System Design Document
7.3.4 Implementation Plan
7.3.5 Maintenance Manual
7.3.6 Operations Manual or System Administration Manual
7.3.7 Training Plan
7.3.8 User Manual
7.4 ISSUES FOR CONSIDERATION
7.4.1 Project Decision Issues
7.4.2 Security Issues
7.5 PHASE REVIEW ACTIVITY

7.0 OBJECTIVE

The objective of the Design Phase is to transform the detailed, defined requirements into complete, detailed specifications for the system to guide the work of the Development Phase. The decisions made in this phase address, in detail, how the system will meet the defined functional, physical, interface, and data requirements. Design Phase activities may be conducted in an iterative fashion, producing first a general system design that emphasizes the functional features of the system, then a more detailed system design that expands the general design by providing all the technical detail.

7.1 TASKS AND ACTIVITIES

The following tasks are performed during the Design Phase. The tasks and activities actually performed depend on the nature of the project. Guidelines for selection and inclusion of tasks for the Design Phase may be found in Chapter 13, Alternate SDLC Work Patterns.

7.1.1 Establish the Application Environment

Identify/specify the target environment, the development environment, and the design and testing environment. Determine how and where the application will reside. Describe the architecture where this application will be developed and tested and who is responsible for this activity.

7.1.2 Design the Application

In the system design, first the general system characteristics are defined. The data storage and access for the database layer need to be designed. The user interface at the desktop layer needs to be designed. The business rules layer, or the application logic, needs to be designed.

Establish a top-level architecture of the system and document it. The architecture shall identify items of hardware, software, and manual operations. All the system requirements should be allocated among the hardware configuration items, software configuration items, and manual operations. Transform the requirements for the software item into an architecture that describes its top-level structure and identifies the software components. Ensure that all the requirements for the software item are allocated to its software components and further refined to facilitate detailed design. Develop and document a top-level design for the interfaces external to the software item and between the software components of the software item.
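A skeletal sketch of the three-layer separation described above (user interface, business rules, and data storage) follows; the classes and the case-tracking example are hypothetical.

    # Illustrative three-layer separation: each layer talks only to the one below it.

    class DataLayer:
        """Database layer: owns storage and access."""
        def __init__(self):
            self._cases = {}
        def save(self, case_id, record):
            self._cases[case_id] = record
        def load(self, case_id):
            return self._cases[case_id]

    class BusinessRules:
        """Application logic layer: enforces the business rules."""
        def __init__(self, data):
            self.data = data
        def close_case(self, case_id):
            record = self.data.load(case_id)
            if record["status"] != "OPEN":
                raise ValueError("only open cases may be closed")
            record["status"] = "CLOSED"
            self.data.save(case_id, record)

    class DesktopUI:
        """User interface layer: presents results, holds no business logic."""
        def __init__(self, rules):
            self.rules = rules
        def on_close_clicked(self, case_id):
            self.rules.close_case(case_id)
            print(f"Case {case_id} closed.")

    data = DataLayer()
    data.save("C-1", {"status": "OPEN"})
    DesktopUI(BusinessRules(data)).on_close_clicked("C-1")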

7.1.3 Develop Maintenance Manual

Develop the maintenance manual to ensure continued operation of the system once it is completed.

7.1.4 Develop Operations Manual

Develop the Operations Manual for mainframe systems/applications and the System Administration Manual for client/server systems/applications.

7.1.5 Conduct Preliminary Design Review

This is an ongoing interim review of the system design as it evolves through the Design Phase. This review determines whether the initial design concept is consistent with the overall architecture and satisfies the functional, security, and technical requirements in the Functional Requirements Document.

7.1.6 Design Human Performance Support (Training)

Identify the users and how they will be trained on the new system. Be sure to address the Americans with Disabilities Act (ADA) requirements to ensure equal access to all individuals.

7.1.7 Design Conversion/Migration/Transition Strategies

If current information needs to be converted/migrated/transitioned to the new system, plans need to be designed for those purposes, especially if converting means reengineering existing processes.

7.1.8 Conduct a Security Risk Assessment

Conduct a security risk assessment by addressing the following components: assets, threats, vulnerabilities, likelihood, consequences, and safeguards. The risk assessment evaluates compliance with baseline security requirements, identifies threats and vulnerabilities, and assesses alternatives for mitigating or accepting residual risks.
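As a hedged illustration of relating the components named above, many assessments rate each asset/threat/vulnerability entry by likelihood and consequence and rank the results; the 1-5 scales and the entries below are hypothetical, not prescribed by this handbook.

    # Illustrative risk ranking: risk exposure = likelihood x consequence.
    # Scales (1-5) and entries are hypothetical.
    risks = [
        {"asset": "case database", "threat": "unauthorized access",
         "vulnerability": "weak passwords", "likelihood": 4, "consequence": 5,
         "safeguard": "enforce strong authentication"},
        {"asset": "nightly batch file", "threat": "data loss in transfer",
         "vulnerability": "no checksum", "likelihood": 2, "consequence": 3,
         "safeguard": "add transfer verification"},
    ]
    for r in sorted(risks, key=lambda x: x["likelihood"] * x["consequence"], reverse=True):
        exposure = r["likelihood"] * r["consequence"]
        print(f"{exposure:2d}  {r['asset']}: {r['threat']} -> {r['safeguard']}")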

7.1.9 Conduct Critical Design Review

The Project Manager and System Proponent conduct the critical design review and approve or disapprove the project's advancement into the Development Phase. This review is conducted at the end of the Design Phase and verifies that the final system design adequately addresses all functional, security, and technical requirements and is consistent with the overall architecture.

7.1.10 Revise Previous Documentation

Review documents from the previous phases and assess the need to revise them during the Design Phase. The updates should be signed off by the Project Manager.

7.2 ROLES AND RESPONSIBILITIES

Project Manager. The project manager is responsible and accountable for the successful execution of the Design Phase. The project manager is responsible for leading the team that accomplishes the tasks shown above. The Project Manager is also responsible for reviewing deliverables for accuracy, approving deliverables, and providing status reports to management.

Project Team. The project team members (regardless of the organization of permanent assignment) are responsible for accomplishing assigned tasks as directed by the project manager.

Contracting Officer. The contracting officer is responsible and accountable for procurement activities and signs contract awards.

Oversight Activities. Agency oversight activities, including the IRM office, provide advice and counsel to the project manager on the conduct and requirements of the Design Phase. Additionally, oversight activities provide information, judgements, and recommendations to the agency decision makers during project reviews and in support of project decision milestones.

7.3 DELIVERABLES

The content of these deliverables may be expanded or abbreviated depending on the size, scope, and complexity of the corresponding systems development effort.

7.3.1 Security Risk Assessment

The purpose of the risk assessment is to analyze threats to and vulnerabilities of a system to determine the risks (potential for losses), and to use the analysis as a basis for identifying appropriate and cost-effective measures. Appendix C-17 provides a template for the Security Risk Assessment.

7.3.2 Conversion Plan

The Conversion Plan describes the strategies involved in converting data from an existing system to another hardware or software environment. It is appropriate to reexamine the original system's functional requirements for the condition of the system before conversion to determine if the original requirements are still valid. Appendix C-18 provides a template for the Conversion Plan.

7.3.3 System Design Document

Describes the system requirements, operating environment, system and subsystem architecture, files and database design, input formats, output layouts, human-machine interface, detailed design, processing logic, and external interfaces. It is used in conjunction with the Functional Requirements Document (FRD), which is finalized in this phase, to provide a complete system specification of all user requirements for the system, and reflects the user's perspective of the system design. Includes all information required for the review and approval of the project development. The sections and subsections of the design document may be organized, rearranged, or repeated as necessary to reflect the best organization for a particular project. Appendix C-19 provides a template for the System Design Document.

7.3.4 Implementation Plan

The Implementation Plan describes how the information system will be deployed and installed into an operational system. The plan contains an overview of the system, a brief description of the major tasks involved in the implementation, the overall resources needed to support the implementation effort (such as hardware, software, facilities, materials, and personnel), and any site-specific implementation requirements. This plan is updated during the Development Phase; the final version is provided in the Integration and Test Phase and used for guidance during the Implementation Phase. Appendix C-20 provides a template for the Implementation Plan.

7.3.5 Maintenance Manual

The Maintenance Manual provides maintenance personnel with the information necessary to maintain the system effectively. The manual provides the definition of the software support environment, the roles and responsibilities of maintenance personnel, and the regular activities essential to the support and maintenance of program modules, job streams, and database structures. In addition to the items identified for inclusion in the Maintenance Manual, additional information may be provided to facilitate the maintenance and modification of the system. Appendices to document various maintenance procedures, standards, or other essential information may be added to this document as needed. Appendix C-21 provides a template for the Maintenance Manual.

7.3.6 Operations Manual or Systems Administration Manual

For mainframe systems, the Operations Manual provides computer control personnel and computer operators with a detailed operational description of the information system and its associated environments, such as machine room operations and procedures. The Systems Administration Manual serves the purpose of an Operations Manual in distributed (client/server) applications. Appendix C-22 provides a template for the Operations Manual and Appendix C-23 provides a template for the Systems Administration Manual.

7.3.7 Training Plan

The Training Plan outlines the objectives, needs, strategy, and curriculum to be addressed when training users on the new or enhanced information system. The plan presents the activities needed to support the development of training materials, coordination of training schedules, reservation of personnel and facilities, planning for training needs, and other training-related tasks. Training activities are developed to teach user personnel the use of the system as specified in the training criteria. The list of training needs includes the target audience and the topics on which training must be conducted. The training strategy includes how the topics will be addressed, the format of the training program, the list of topics to be covered, materials, time, space requirements, and proposed schedules. Appendix C-24 provides a template for the Training Plan.

7.3.8 User Manual

The User Manual contains all essential information for the user to make full use of the information system. This manual includes a description of the system functions and capabilities, contingencies and alternate modes of operation, and step-by-step procedures for system access and use. Appendix C-25 provides a template for the User Manual.

7.4 ISSUES FOR CONSIDERATION

7.4.1 Project Decision Issues

The decisions of this phase re-examine in greater detail many of the parameters addressed in previous phases. The design prepared in this phase will be the basis for the activities of the Development Phase. The overall objective is to establish a complete design for the system. The prerequisites for this phase are the Project Plan, Functional Requirements Document, and Test Plan. A number of project approach, project execution, and project continuation decisions are made in this phase.

Project approach decisions include:

- Identifying existing or COTS components that can be used, or economically modified, to satisfy validated functional requirements.
- Using appropriate prototyping to refine requirements and enhance user and developer understanding and interpretation of requirements.
- Selecting specific methodologies and tools to be used in the later life-cycle phases, especially the Development and Implementation Phases.
- Determining how user support will be provided, how the remaining life-cycle phases will be integrated, and how newly identified risks and issues will be handled.

Project execution decisions include:

- Modifications that must be made to the initial information system need.
- Modifications that will be made to current procedures.
- Modifications that will be made to current systems/databases or to other systems/databases under development.
- How conversion of existing data will occur.

Project continuation decisions include:

- The continued need for the information system to exist.
- The continuation of development activities based on the needs addressed by the design.
- Availability of sufficient funding and other required resources for the remainder of the system's life cycle.

The system user community shall be included in the Design Phase actions as needed. It is also in the Design Phase that new or further requirements might be discovered that are necessary to accommodate individuals with disabilities. If so, these requirements shall be added to the FRD.

7.4.2 Security Issues

The developer shall obtain the requirements from the System Security Plan and the FRD and allocate them to the specific modules within the design for enforcement purposes. For example, if a requirement exists to audit a specific set of user actions, the developer may have to add a work flow module into the design to accomplish the auditing. Detailed security requirements provide users and administrators with instructions on how to operate and maintain the system securely. They should address all applicable computer and telecommunications security requirements, including: system access controls; marking, handling, and disposing of magnetic media and hard copies; computer room access; account creation, access, protection, and capabilities; operational procedures; audit trail requirements; configuration management; processing area security; employee check-out; and emergency procedures. Security operating procedures may be created as separate documents or added as sections or appendices to the User and Operations Manuals. This activity should be conducted during the Design Phase.

7.5 PHASE REVIEW ACTIVITY

Upon completion of all Design Phase tasks and receipt of resources for the next phase, the Project Manager, together with the project team, should prepare and present a project status review for the decision maker and project stakeholders. The review should address: (1) Design Phase activities status, (2) planning status for all subsequent life-cycle phases (with significant detail on the next phase, to include the status of pending contract actions), (3) resource availability status, and (4) acquisition risk assessments of subsequent life-cycle phases given the planned acquisition strategy.

CHAPTER 8: DEVELOPMENT PHASE

8.0 OBJECTIVE
8.1 TASKS AND ACTIVITIES
8.1.1 Code and Test Software
8.1.2 Integrate Software
8.1.3 Conduct Software Qualification Testing
8.1.4 Integrate System
8.1.5 Conduct System Qualification Testing
8.1.6 Install Software
8.1.7 Document Software Acceptance Support
8.1.8 Revise Previous Documentation
8.2 ROLES AND RESPONSIBILITIES
8.3 DELIVERABLES
8.3.1 Contingency Plan
8.3.2 Software Development Document
8.3.3 System (Application) Software
8.3.4 Test Files/Data
8.3.5 Integration Document
8.4 ISSUES FOR CONSIDERATION
8.5 PHASE REVIEW ACTIVITY

8.0 OBJECTIVE

The objective of the Development Phase will be to convert the deliverables of the Design Phase into a complete information system. Although much of the activity in the Development Phase addresses the computer programs that make up the system, this phase also puts in place the hardware, software, and communications environment for the system and other important elements of the overall system. The activities of this phase translate the system design produced in the Design Phase into a working information system capable of addressing the information system requirements. The Development Phase contains activities for building the system, testing the system, and conducting functional qualification testing to ensure the system functional processes satisfy the functional process requirements in the Functional Requirements Document (FRD). At the end of this phase, the system will be ready for the activities of the Integration and Test Phase.

8.1 TASKS AND ACTIVITIES

8.1.1 Code and Test Software

Code each module according to established standards.
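The handbook does not prescribe a test framework; as a hypothetical illustration, a module coded to standard might be accompanied by unit tests of the following shape.

    # Illustrative unit test for a hypothetical module function.
    import unittest

    def compute_case_age_days(opened_day, closed_day):
        """Hypothetical module under test: days a case remained open."""
        if closed_day < opened_day:
            raise ValueError("case cannot close before it opens")
        return closed_day - opened_day

    class TestComputeCaseAge(unittest.TestCase):
        def test_normal_case(self):
            self.assertEqual(compute_case_age_days(10, 15), 5)
        def test_rejects_impossible_dates(self):
            with self.assertRaises(ValueError):
                compute_case_age_days(15, 10)

    if __name__ == "__main__":
        unittest.main()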

8.1.2 Integrate Software

Integrate the software units, components, and modules, and test the integrated software in accordance with the integration plan. Ensure that each module satisfies the requirements of the software at the conclusion of the integration activity.

8.1.3 Conduct Software Qualification Testing

Conduct qualification testing in accordance with the qualification requirements for the software item. Ensure that the implementation of each software requirement is tested for compliance. Support audit(s) which could be conducted to ensure that:

- as-coded software products (such as the software item) reflect the design documentation;
- the acceptance review and testing requirements prescribed by the documentation are adequate for the acceptance of the software products;
- test data comply with the specification;
- software products were successfully tested and meet their specifications;
- test reports are correct and discrepancies between actual and expected results have been resolved; and
- user documentation complies with standards as specified.

The results of the audits shall be documented. If both hardware and software are under development or integration, the audits may be postponed until the System Qualification Testing. Upon successful completion of the audits, if conducted, update and prepare the deliverable software product for System Integration, System Qualification Testing, Software Installation, or Software Acceptance Support as applicable. Also, establish a baseline for the design and code of the software item.

8.1.4 Integrate System

Integrate the software configuration items with hardware configuration items, manual operations, and other systems as necessary, into the system. The aggregates shall be tested, as they are developed, against their requirements. The integration and the test results shall be documented. For each qualification requirement of the system, a set of tests, test cases (inputs, outputs, test criteria), and test procedures for conducting System Qualification Testing shall be developed and documented. Ensure that the integrated system is ready for System Qualification Testing.
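A sketch of the documented test set described above, with inputs, expected outputs, and criteria per qualification requirement, follows; all identifiers are hypothetical.

    # Illustrative: one documented test case per system qualification requirement.
    TEST_CASES = [
        {"requirement": "SYS-REQ-07", "test_id": "SQT-07-01",
         "inputs": {"records_submitted": 1000},
         "expected": {"records_stored": 1000, "errors": 0},
         "criterion": "all records stored with zero errors"},
    ]

    def record_result(case, actual):
        """Compare actual against expected and document the outcome."""
        passed = actual == case["expected"]
        return {"test_id": case["test_id"], "requirement": case["requirement"],
                "actual": actual, "passed": passed}

    print(record_result(TEST_CASES[0], {"records_stored": 1000, "errors": 0}))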

8.1.5 Conduct System Qualification Testing

Conduct system qualification testing in accordance with the qualification requirements specified for the system. Ensure that the implementation of each system requirement is tested for compliance and that the system is ready for delivery. The qualification testing results shall be documented.

8.1.6 Install Software

Install the software product in the target environment as designed and in accordance with the Installation Plan. The resources and information necessary to install the software product shall be determined and be available. The developer shall assist the acquirer with the set-up activities. Where the installed software product is replacing an existing system, the developer shall support any parallel running activities that are required. Ensure that the software code and databases initialize, execute, and terminate as specified in the contract. The installation events and results shall be documented.

8.1.7 Document Software Acceptance Support

Acceptance review and testing shall consider the results of the Joint Reviews, Audits, Software Qualification Testing, and System Qualification Testing (if performed). The results of the acceptance review and testing shall be documented. The developer shall complete and deliver the software product as specified. The developer shall provide initial and continuing training and support to the acquirer as specified.

8.1.8 Revise Previous Documentation

Review and update previous phase documentation, as needed.

8.2 ROLES AND RESPONSIBILITIES

Project Manager. The Project Manager is responsible and accountable for the successful execution of the Development Phase. The Project Manager is responsible for leading the team that accomplishes the tasks shown above. The Project Manager is also responsible for reviewing deliverables for accuracy, approving deliverables, and providing status reports to management.

Project Team. The project team members (regardless of the organization of permanent assignment) are responsible for accomplishing assigned tasks as directed by the project manager.

Contracting Officer. The contracting officer is responsible and accountable for the procurement activities and signs contract awards.

Oversight Activities. Agency oversight activities, including the IRM office, provide advice and counsel to the project manager on the conduct and requirements of the Development Phase. Additionally, oversight activities provide information, judgements, and recommendations to the agency decision makers during project reviews and in support of project decision milestones.

Developer. The developer is responsible for the development activities, to include coding, testing, documenting, and delivering the completed system.

8.3 DELIVERABLES

The content of these deliverables may be expanded or abbreviated depending on the size, scope, and complexity of the corresponding systems development effort. The following deliverables shall be initiated during the Development Phase:

8.3.1 Contingency Plan

The Contingency Plan contains emergency response procedures; backup arrangements, procedures, and responsibilities; and post-disaster recovery procedures and responsibilities. Contingency planning is essential to ensure that systems are able to recover from processing disruptions in the event of localized emergencies or large-scale disasters. It is an emergency response plan, developed in conjunction with application owners and maintained at the primary and backup computer installation, to ensure that a reasonable continuity of support is provided if events occur that could prevent normal operations. Contingency plans shall be routinely reviewed, updated, and tested to enable vital operations and resources to be restored as quickly as possible and to keep system downtime to an absolute minimum. A Contingency Plan is synonymous with a disaster plan and an emergency plan. If the system/subsystem is to be located within a facility with an acceptable contingency plan, system-unique contingency requirements should be added as an annex to the existing facility contingency plan. Appendix C-26 provides a template for the Contingency Plan.

8.3.2 Software Development Document

Contains documentation pertaining to the development of each unit or module, including the test cases, software, test results, approvals, and any other items that will help explain the functionality of the software. Appendix C-27 provides a template for the Software Development Document.

8.3.3 System (Application) Software

This is the actual software developed. It is used for the Integration and Test Phase and finalized before implementation of the system. Include all the disks (or other media) used to store the information.

8.3.4 Test Files/Data

All the information used for system testing should be provided at the end of this phase. Provide the actual test data and files used.

8.3.5 Integration Document

The Integration Document explains how the software components, hardware components, or both are combined and the interaction between them. Appendix C-28 provides a template for the Integration Document.

8.4 ISSUES FOR CONSIDERATION

There are three phase prerequisites that should be completed before beginning this phase:

- Project management plan and schedule, indicating the target date for completion of each module and the target date for completion of system testing.
- System design document, containing program logic flow, identifying any existing code to be used, and the subsystems with their inputs and outputs.
- Unit/module and integration test plans, containing testing requirements, schedules, and test case specifications for unit and integration testing.

8.5 PHASE REVIEW ACTIVITY

Upon completion of all Development Phase tasks and receipt of resources for the next phase, the Project Manager, together with the project team, should prepare and present a project status review for the decision maker and project stakeholders. The review should address: (1) Development Phase activities status, (2) planning status for all subsequent life-cycle phases (with significant detail on the next phase, to include the status of pending contract actions), (3) resource availability status, and (4) acquisition risk assessments of subsequent life-cycle phases given the planned acquisition strategy.

CHAPTER 9: INTEGRATION AND TEST PHASE

9.0 OBJECTIVE
9.1 TASKS AND ACTIVITIES
9.1.1 Establish the Test Environment
9.1.2 Conduct Integration Tests
9.1.3 Conduct Subsystem/System Testing
9.1.4 Conduct Security Testing
9.1.5 Conduct Acceptance Testing
9.1.6 Revise Previous Documentation
9.2 ROLES AND RESPONSIBILITIES
9.3 DELIVERABLES
9.3.1 Test Analysis Report
9.3.2 Test Analysis Approval Determination
9.3.3 Test Problem Report
9.3.4 IT Systems Security Certification & Accreditation
9.4 ISSUES FOR CONSIDERATION
9.5 PHASE REVIEW ACTIVITY

9.0 OBJECTIVE

The objective of this phase is to prove that the developed system satisfies the requirements defined in the FRD. Several types of tests will be conducted in this phase. First, subsystem integration tests shall be executed and evaluated by the development team to prove that the program components integrate properly into the subsystems and that the subsystems integrate properly into an application. Next, the testing team conducts and evaluates system tests to ensure the developed system meets all technical requirements, including performance requirements. Next, the testing team and the Security Program Manager conduct security tests to validate that the access and data security requirements are met. Finally, users participate in acceptance testing to confirm that the developed system meets all user requirements as stated in the FRD. Acceptance testing shall be done in a simulated real user environment with the users using simulated or real target platforms and infrastructures.

9.1 TASKS AND ACTIVITIES

The tasks and activities actually performed depend on the nature of the project. Guidelines for selection and inclusion of tasks for the Integration and Test Phase may be found in Chapter 13, Alternate SDLC Work Patterns. The following tasks should be completed during the Integration and Test Phase.

9.1.1 Establish the Test Environment

Establish the various test teams and ensure the test system(s) are ready.

9.1.2 Conduct Integration Tests

The test and evaluation team is responsible for creating/loading the test database(s) and executing the integration test(s). This is to ensure that program components integrate properly into the subsystems and the subsystems integrate properly into an application.
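For illustration, creating and loading a small test database before executing an integration test might look like the following sketch; SQLite and the schema are hypothetical choices, not handbook requirements.

    # Illustrative: create/load a test database, then run an integration check against it.
    import sqlite3

    conn = sqlite3.connect(":memory:")  # stand-in for the integration test database
    conn.execute("CREATE TABLE cases (case_id TEXT PRIMARY KEY, status TEXT)")
    conn.executemany("INSERT INTO cases VALUES (?, ?)",
                     [("C-1", "OPEN"), ("C-2", "CLOSED")])
    conn.commit()

    # Integration check: the reporting component reads what the entry component wrote.
    open_count = conn.execute(
        "SELECT COUNT(*) FROM cases WHERE status = 'OPEN'").fetchone()[0]
    assert open_count == 1, "subsystems disagree on stored data"
    print("integration check passed")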

9.1.3 Conduct Subsystem/System Testing

The test and evaluation team is responsible for creating/loading the test database(s) and executing the system test(s). All results should be documented on the Test Analysis Report (Appendix C-29), the Test Problem Report (Appendix C-31), and the Test Analysis Approval Determination (Appendix C-30). Any failed components should be migrated back to the Development Phase for rework, and the passed components should be migrated ahead for security testing.

9.1.4 Conduct Security Testing

The test and evaluation team will again create or load the test database(s) and execute security (penetration) test(s). All tests will be documented, similar to those above. Failed components will be migrated back to the Development Phase for rework, and passed components will be migrated ahead for acceptance testing.

9.1.5 Conduct Acceptance Testing

The test and evaluation team will create/load the test database(s) and execute the acceptance test(s). All tests will be documented, similar to those above. Failed components will be migrated back to the Development Phase for rework, and passed components will migrate ahead for implementation.

9.1.6 Revise Previous Documentation

During this phase, the Systems Technical Lead or the developers will finalize the Software Development Document from the Development Phase. They will also finalize the Operations or Systems Administration Manual, User Manual, Training Plan, Maintenance Manual, Conversion Plan, Implementation Plan, and Contingency Plan, and update the Interface Control Document from the Design Phase. The Project Manager should finalize the System Security Plan and the Security Risk Assessment from the Requirements Analysis Phase and the Project Management Plan from the Planning Phase. The Configuration Manager should finalize the Configuration Management Plan from the Planning Phase. The Quality Assurance office/person should finalize the Quality Assurance Plan from the Planning Phase. Finally, the Project Manager should finalize the Cost Benefit Analysis and the Risk Management Plan from the System Concept Development Phase.

9.2 ROLES AND RESPONSIBILITIES

Project Manager. The project manager is responsible and accountable for the successful execution of the Integration and Test Phase. The project manager is responsible for leading the team that accomplishes the tasks shown above. The Project Manager is also responsible for reviewing deliverables for accuracy, approving deliverables, and providing status reports to management.

Project Team. The project team members (regardless of the organization of permanent assignment) are responsible for accomplishing assigned tasks as directed by the project manager. This includes establishing the test environment.

Contracting Officer. The contracting officer is responsible and accountable for the procurement activities and signs contract awards.

Security Program Manager. The Security Program Manager is responsible for conducting security tests according to the Systems Security Plan.

Oversight Activities. Agency oversight activities, including the IRM office, provide advice and counsel to the project manager on the conduct and requirements of the Integration and Test Phase. Additionally, oversight activities provide information, judgements, and recommendations to the agency decision makers during project reviews and in support of project decision milestones.

User. Users participate in acceptance testing to ensure systems perform as expected.

9.3 DELIVERABLES

The following deliverables shall be initiated during the Integration and Test Phase:

9.3.1 Test Analysis Report

This report documents each test: unit/module, subsystem integration, system, user acceptance, and security. Appendix C-29 provides a template for the Test Analysis Report.

9.3.2 Test Analysis Approval Determination

Attached to the test analysis report as a final result of the test reviews and testing levels above the integration test; briefly summarizes the perceived readiness for migration of the software. Appendix C-30 provides a template for the Test Analysis Approval Determination.

9.3.3 Test Problem Report

Documents problems encountered during testing; the form is attached to the test analysis reports. Appendix C-31 provides a template for a Test Problem Report.

9.3.4 IT Systems Security Certification & Accreditation

The documents needed to obtain certification and accreditation of an information system before it becomes operational. They include: System Security Plan; Rules of Behavior; Configuration Management Plan; Risk Assessment; Security Test & Evaluation; Contingency Plan; Privacy Impact Assessments; and the certification and accreditation memorandums. The Systems Security Plan and certification/accreditation package should be approved prior to implementation and every three years thereafter. See for the required documents that lead to accreditation.

9.4 ISSUES FOR CONSIDERATION

Security controls shall be tested before system implementation to uncover all design and implementation flaws that would violate security policy. Security Test and Evaluation (ST&E) involves determining the adequacy of a system's security mechanisms for completeness and correctness, and the degree of consistency between system documentation and actual implementation. This shall be accomplished through a variety of assurance methods, such as analysis of system design documentation, inspection of test documentation, and independent execution of function testing and penetration testing. Results of the ST&E affect security activities developed earlier in the life cycle, such as the security risk assessment, sensitive system security plan, and contingency plan. Each of these activities will be updated in this phase based on the results of the ST&E. Build on the security testing recorded in the software development documents, unit testing, integration testing, and system testing.

9.5 PHASE REVIEW ACTIVITY

Upon completion of all Integration and Test Phase tasks and receipt of resources for the next phase, the Project Manager, together with the project team, should prepare and present a project status review for the decision maker and project stakeholders. The review should address: (1) Integration and Test Phase activities status, (2) planning status for all subsequent life-cycle phases (with significant detail on the next phase, to include the status of pending contract actions), (3) resource availability status, and (4) acquisition risk assessments of subsequent life-cycle phases given the planned acquisition strategy.

CHAPTER 10: IMPLEMENTATION PHASE

10.0 OBJECTIVE
10.1 TASKS AND ACTIVITIES
10.1.1 Notify users of new implementation
10.1.2 Execute training plan
10.1.3 Perform data entry or conversion
10.1.4 Install system
10.1.5 Conduct post-implementation review
10.1.6 Revise previous documentation
10.2 ROLES AND RESPONSIBILITIES
10.3 DELIVERABLES
10.3.1 Delivered System
10.3.2 Change Implementation Notice
10.3.3 Version Description Document
10.3.4 Post-Implementation Review
10.4 ISSUES FOR CONSIDERATION
10.5 PHASE REVIEW ACTIVITY

10.0 OBJECTIVE

In this phase, the system or system modifications are installed and made operational in a production environment. The phase is initiated after the system has been tested and accepted by the user and the Project Manager. Activities in this phase include notification of implementation to end users, execution of the previously defined training plan, data entry or conversion, and a post-implementation review. This phase continues until the system is operating in production in accordance with the defined user requirements. The new system can fall into one of three categories: replacement of a manual process, replacement of a legacy system, or an upgrade to an existing system. Regardless of the type of system, all aspects of the Implementation Phase should be followed. This will ensure the smoothest possible transition to the organization's desired goal.

10.1 TASKS AND ACTIVITIES

Tasks and activities in the Implementation Phase are associated with certain deliverables described in section 10.3. The tasks and activities actually performed depend on the nature of the project. Guidelines for selection and inclusion of tasks for the Implementation Phase may be found in Chapter 13, Alternative SDLC Work Patterns. A description of these tasks and activities is provided below.

10.1.1 Notify users of new implementation
The implementation notice should be sent to all users and organizations affected by the implementation. Additionally, it is good policy to make internal organizations not directly affected by the implementation aware of the schedule, so that allowances can be made for a disruption in the normal activities of that section. Some notification methods are email, internal memo to heads of departments, and voice tree messages. The notice should include:

- The schedule of the implementation;
- A brief synopsis of the benefits of the new system;
- The difference between the old and new system;
- Responsibilities of end users affected by the implementation during this phase; and
- The process to obtain system support, including contact names and phone numbers.

10.1.2 Execute training plan
It is always a good business practice to provide training before the end user uses the new system. Because a training plan, complete with the system user manual, was established earlier, execution of the plan should be relatively simple. Typically, what prevents a plan from being implemented is lack of funding; good budgeting should prevent this from happening.

10.1.3 Perform data entry or conversion
With the implementation of any system, there is typically old data to be included in the new system. This data can be in a manual or an automated form. Regardless of the format of the data, the tasks in this section are twofold: data input and data verification. When replacing a manual system, hard copy data will need to be entered into the automated system. Some form of verification that the data is being entered correctly should be conducted throughout this process. This is also the case in data transfer, where data fields in the old system may have been entered inconsistently and will therefore affect the integrity of the new database. Verification of the old data is imperative to a useful computer system. One way to verify both system operation and data integrity is through parallel operations. Parallel operations consist of running the old process or system and the new system simultaneously until the new system is certified. In this way, if the new system fails in any way, operations can proceed on the old system while the problems are worked out.
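Where parallel operations are used, certification can be supported by systematically comparing the outputs of both systems. The following is a minimal illustrative sketch only, assuming both systems can export their records as dictionaries keyed by a common record identifier; the function and field names are assumptions for illustration, not part of this guidance.

```python
# Illustrative sketch only: compares records produced by the legacy and new
# systems during parallel operations so that discrepancies can be
# investigated before the new system is certified. The record format
# (dicts keyed by record ID) is an assumption.

def compare_parallel_runs(legacy_records, new_records):
    """Return (missing, extra, mismatched) between the two runs."""
    legacy_ids, new_ids = set(legacy_records), set(new_records)
    missing = sorted(legacy_ids - new_ids)   # in legacy run, absent from new
    extra = sorted(new_ids - legacy_ids)     # in new run, absent from legacy
    mismatched = {}
    for record_id in legacy_ids & new_ids:
        old, new = legacy_records[record_id], new_records[record_id]
        diffs = {field: (old.get(field), new.get(field))
                 for field in set(old) | set(new)
                 if old.get(field) != new.get(field)}
        if diffs:
            mismatched[record_id] = diffs
    return missing, extra, mismatched

# Hypothetical example records:
legacy = {"A1": {"amount": 100, "status": "open"}}
new = {"A1": {"amount": 100, "status": "closed"}, "A2": {"amount": 5}}
print(compare_parallel_runs(legacy, new))
```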

10.1.4 Install system
To ensure that the system is fully operational, install the system in a production environment.

10.1.5 Conduct post-implementation review
After the system has been fielded, a post-implementation review is conducted to determine the success of the project through its Implementation Phase. The purpose of this review is to document implementation experiences, recommend system enhancements, and provide guidance for future projects. In addition, Change Implementation Notices will be used to document user requests for fixes to problems that may have been recognized during this phase. It is important to document any user request for a change to a system, to limit misunderstandings between the end users and the system programmers.

10.1.6 Revise previous documentation
During this phase, the Interface Control Document (ICD) from the Requirements Analysis Phase is revised. The CONOPS, System Security Plan, Security Risk Assessment, Software Development Document, System Software, and the Integration Document are also revised and finalized during the Implementation Phase.

10.2 ROLES AND RESPONSIBILITIES

Project Manager. The project manager is responsible and accountable for the successful execution of the Implementation Phase. The project manager is responsible for leading the team that accomplishes the tasks shown above. The project manager is also responsible for reviewing deliverables for accuracy, approving deliverables, and providing status reports to management.

Project Team. The project team members (regardless of the organization of permanent assignment) are responsible for accomplishing assigned tasks as directed by the project manager.

Contracting Officer. The contracting officer is responsible and accountable for the procurement activities and signs contract awards.

Oversight Activities. Agency oversight activities, including the IRM office, provide advice and counsel for the project manager on the conduct and requirements of the Implementation Phase. Additionally, oversight activities provide information, judgements, and recommendations to the agency decision makers during project reviews and in support of project decision milestones.

10.3 DELIVERABLES

The following deliverables are initiated during the Implementation Phase:

10.3.1 Delivered System
After the Implementation Phase Review and Approval Certification is signed by the Project Manager and the System Proponent representative, the system - including the production version of the data repository - is delivered to the customer for the Operations and Maintenance Phase.

10.3.2 Change Implementation Notice
A formal request and approval document for changes made during the Implementation Phase. Appendix C-32 provides a template for a Change Implementation Notice.

10.3.3 Version Description Document
The primary configuration control document used to track and control versions of software released to the operational environment. It is a summary of the features and contents of the software build, and it identifies and describes the version of the software being delivered. Appendix C-33 provides a template for a Version Description Document.
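Because the Version Description Document is the primary configuration control record for each release, some teams also maintain a machine-readable companion to it. The sketch below is illustrative only; the JSON layout and field names are assumptions, not the Appendix C-33 template.

```python
# Illustrative sketch: writes a machine-readable companion to the Version
# Description Document. Field names and the JSON format are assumptions,
# not the Appendix C-33 template itself.
import json
from datetime import date

def write_version_description(path, version, features, change_notices):
    record = {
        "version": version,                # release identifier
        "release_date": date.today().isoformat(),
        "features": features,              # summary of build contents
        "change_notices": change_notices,  # related Change Implementation Notices
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)

# Example usage (values are hypothetical):
write_version_description(
    "version_description.json",
    version="2.1.0",
    features=["report module", "revised data entry screens"],
    change_notices=["CIN-014", "CIN-017"],
)
```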

10.3.4 Post-Implementation Review
The review is conducted at the end of the Implementation Phase. A post-implementation review shall be conducted to ensure that the system functions as planned and expected, to verify that the system cost is within the estimated amount, and to verify that the intended benefits are derived as projected. Normally, this is a one-time review that occurs after a major implementation; it may also occur after a major enhancement to the system. The results of an unacceptable review are submitted to the System Proponent for review and follow-up actions. The System Proponent may decide that it is necessary to return the deficient system to the responsible system development Project Manager for correction of deficiencies. Appendix C-34 provides a template for a Post-Implementation Review.

10.4 ISSUES FOR CONSIDERATION

Once a system has been developed, tested, and deployed, it will enter the Operations and Maintenance Phase. All development resources and documentation should be transferred to a library or to the operations and maintenance staff.

10.5 PHASE REVIEW ACTIVITY

During the Implementation Phase Review, recommendations may be made to correct errors, improve user satisfaction or improve system performance. For contractor development, analysis shall be performed to determine if additional activity is within the scope of the statement of work or within the original contract. An Implementation Phase Review and Approval Certification should be signed off by the Project Manager to verify the acceptance of the delivered system by the system users/owner. The Implementation Phase-End Review shall be organized, planned, and led by the Project Quality Assurance representative.

CHAPTER 11: OPERATIONS AND MAINTENANCE PHASE

11.0 OBJECTIVE
11.1 TASKS AND ACTIVITIES
11.1.1 Identify Systems Operations
11.1.2 Maintain Data / Software Administration
11.1.3 Identify Problem and Modification Process
11.1.4 Maintain System / Software
11.1.5 Revise Previous Documentation
11.2 ROLES AND RESPONSIBILITIES
11.3 DELIVERABLES
11.3.1 In-Process Review Report
11.3.2 User Satisfaction Review Report
11.4 ISSUES FOR CONSIDERATION
11.5 PHASE REVIEW ACTIVITY

11.0 OBJECTIVE

More than half of the life cycle costs are attributed to the operations and maintenance of systems. In this phase, it is essential that all facets of operations and maintenance are performed. The system is being used and scrutinized to ensure that it meets the needs initially stated in the planning phase. Problems are detected and new needs arise. These may require modifications to existing code, new code to be developed, and/or hardware configuration changes. Providing user support is an ongoing activity: new users will require training, and existing users will require refresher training as the system changes. The emphasis of this phase is to ensure that the users' needs are met and that the system continues to perform as specified in the operational environment. Additionally, as operations and maintenance personnel monitor the current system, they may become aware of better ways to improve the system and therefore make recommendations. Changes will be required to fix problems, possibly add features, and make improvements to the system. This phase will continue as long as the system is in use.

11.1 TASKS AND ACTIVITIES

11.1.1 Identify Systems Operations
Operations support is an integral part of the day-to-day operations of a system. In small systems, all or part of each task may be done by the same person; in large systems, each function may be done by separate individuals or even separate areas. The Operations Manual is developed in previous SDLC phases; this document defines tasks, activities, and responsible parties, and will need to be updated as changes occur. Systems operations activities and tasks need to be scheduled, on a recurring basis, to ensure that the production environment is fully functional and is performing as specified. The following is a checklist of key systems operations tasks and activities:

- Ensure that systems and networks are running and available during the defined hours of operations;
- Implement non-emergency requests during scheduled outages, as prescribed in the Operations Manual;
- Ensure all processes, manual and automated, are documented in the operating procedures; these processes should comply with the system documentation;
- Acquire and store supplies (e.g., paper, toner, tapes, removable disks);
- Perform backups (day-to-day protection, contingency);
- Perform physical security functions, including ensuring adequate UPS capacity and that personnel have proper security clearances and access privileges;
- Ensure contingency planning for disaster recovery is current and tested;
- Ensure users are trained on current and new processes;
- Ensure that service level objectives are kept accurate and are monitored;
- Maintain performance measurements, statistics, and system logs (examples of performance measures include the volume and frequency of data to be processed in each mode and the order and type of operations);
- Monitor performance statistics, report the results, and escalate problems when they occur.
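Several of the items above (availability during defined hours, backups, escalation of problems) lend themselves to automated recurring checks. The following is a minimal illustrative sketch only; the host names, ports, and operating hours are assumptions, not values prescribed by this guidance.

```python
# Illustrative sketch: a recurring availability check supporting the task
# "ensure that systems and networks are running and available during the
# defined hours of operations". Hosts, ports, and hours are assumed examples.
import socket
from datetime import datetime

OPERATING_HOURS = range(6, 22)  # assumed defined hours: 06:00 - 22:00
MONITORED = [("app-server.example", 443), ("db-server.example", 5432)]

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_check():
    if datetime.now().hour not in OPERATING_HOURS:
        return  # outside the defined hours of operations
    for host, port in MONITORED:
        if not is_reachable(host, port):
            # Escalation per the Operations Manual would occur here;
            # printing stands in for the real reporting channel.
            print(f"ESCALATE: {host}:{port} unavailable at {datetime.now()}")

if __name__ == "__main__":
    run_check()
```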

11.1.2 Maintain Data / Software Administration
Data/software administration is needed to ensure that input data, output data, and databases are correct and continually checked for accuracy and completeness. This includes ensuring that any regularly scheduled jobs are submitted and completed correctly. Software and databases should be maintained at (or near) the current maintenance level. The backup and recovery processes for databases are normally different from the day-to-day DASD volume backups, and the backup and recovery of databases should be done as a data/software administration task by a data administrator. A checklist of data/software administration tasks and activities follows:

- Perform periodic verification/validation of data and correct data-related problems;
- Perform production control and quality control functions (job submission, checking, and corrections);
- Interface with other functional areas for day-to-day checking/corrections;
- Install, configure, upgrade, and maintain databases, including updating processes, data flows, and objects (usually shown in diagrams);
- Develop and perform data/database backup and recovery routines for data integrity and recoverability, and ensure they are documented properly in the Operations Manual;
- Develop and maintain a performance and tuning plan for online processes and databases;
- Perform configuration/design audits to ensure that software, system, and parameter configurations are correct.
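The periodic verification/validation of data listed above can be partially automated as a scheduled job. The sketch below is illustrative only: it uses an in-memory SQLite database as a stand-in, and the table, columns, and rules are hypothetical.

```python
# Illustrative sketch: a periodic data verification job for the
# verification/validation task above. The schema and rules are hypothetical.
import sqlite3

def verify_table(conn, table, required_columns):
    """Report the row count and any NULLs found in required columns."""
    count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    print(f"{table}: {count} rows")
    for col in required_columns:
        nulls = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE {col} IS NULL").fetchone()[0]
        if nulls:
            print(f"  DATA PROBLEM: {nulls} rows with NULL {col}")

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")  # stands in for the production database
    conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
    conn.execute("INSERT INTO orders VALUES (1, NULL)")
    verify_table(conn, "orders", ["customer"])
```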

11.1.3 Identify Problem and Modification Process
One fact of life with any system is that change is inevitable, and users need an avenue to suggest changes and identify problems. A User Satisfaction Review (Appendix C-36), which can include a Customer Satisfaction Survey, can be designed and distributed to obtain feedback on operational systems and to help determine if the systems are accurate and reliable. Systems administrators and operators need to be able to make recommendations for upgrades of hardware and architecture and for streamlining processes. For small in-house systems, modification requests can be handled by an in-house process. For large integrated systems, modification requests may be addressed in the Requirements document, may take the form of a change package or a formal Change Implementation Notice (Appendix C-32), and may require justification and cost-benefit analysis for approval by a review board. The Requirements document for the project may call for a modification cut-off and rollout of the system as a first version, with all subsequent changes addressed as a new or enhanced version of the system. A request for modifications to a system may also generate a new project and require a new project initiation plan.

11.1.4 Maintain System / Software
Daily operation of the system/software may necessitate that maintenance personnel identify potential modifications needed to ensure that the system continues to operate as intended and produces quality data. Daily maintenance activities take place to ensure that any previously undetected errors are fixed. Maintenance personnel may determine that modifications to the system and databases are needed to resolve errors or performance problems. Modifications may also be needed to provide new capabilities or to take advantage of hardware upgrades or new releases of the system software and application software used to operate the system. New capabilities may take the form of routine maintenance or may constitute enhancements to the system or database in response to user requests for new or improved capabilities. New capability needs may initiate the problem and modification process described above.

11.1.5 Revise Previous Documentation
At this phase of the SDLC, all security activities have been completed. An update must be made to the System Security Plan, and an update and test of the Contingency Plan should be completed. Continuous vigilance should be given to virus and intruder detection, and the Project Manager must ensure that security operating procedures are kept updated accordingly. Review and update documentation from the previous phases; in particular, the Operations Manual, SBD, and Contingency Plan need to be updated and finalized during the Operations and Maintenance Phase.

11.2 ROLES AND RESPONSIBILITIES

This list briefly outlines some of the roles and responsibilities for key maintenance personnel. Some roles may be combined or eliminated depending upon the size of the system to be maintained. Each system will dictate the necessity for the roles listed below.

Systems Manager. The Systems Manager develops, documents, and executes plans and procedures for conducting the activities and tasks of the maintenance process. To provide an avenue for problem reporting and customer satisfaction, the Systems Manager should create and discuss communications instructions with the system's customers.

Technical Support. Personnel who provide technical support to the program. This support may involve granting access rights to the program, setting up workstations or terminals to access the system, and maintaining the operating system for both servers and workstations. Technical support personnel may be involved with issuing user IDs or login names and passwords. In a client/server environment, technical support may perform scheduled system backups and operating system maintenance during downtime.

Operations or Operators (turn systems on/off, start tasks, perform backups, etc.). For many mainframe systems, technical support for a program is provided by an operator. The operator performs scheduled backups, performs maintenance during downtime, and is responsible for ensuring the system is online and available for users. Operators may be involved with issuing user IDs or login names and passwords for the system.

Customers. The customer needs to be able to share with the Systems Manager the need for improvements or the existence of problems. Some users live with a situation or problem because they feel they must; customers may feel that change will be slow or disruptive, and some feel the need to create work-arounds. A customer has the responsibility to report problems and make recommendations for changes to a system.

Program Analysts or Programmers. Interpret user requirements, and design and write the code for specialized programs. User changes, improvements, and enhancements may be discussed in Joint Application Design sessions. Analysts check programs for errors, debug the programs, and test the program design.

Process Improvement Review Board. A board of individuals may be convened to approve recommendations for changes and improvements to the system. This group may be chartered; the charter should outline what should be brought before the group for consideration and approval. The board may issue a Change Directive.

Users Group or Team. A group of computer users who share knowledge they have gained concerning a program or system. They usually meet to exchange information and share programs, and they can provide expert knowledge for a system under consideration for change.

Contracting Officer. The contracting officer is responsible and accountable for the procurement activities and signs contract awards.

Data Administrator. Performs tasks that ensure accurate and valid data are entered into the system. Sometimes this person creates the information system's database, maintains the database's security, and develops plans for disaster recovery. The data administrator may be called upon to create queries and reports for a variety of user requests. The data administrator's responsibilities include maintaining the database's data dictionary, which provides a description of each field in the database, the field's characteristics, and what data is maintained in the field.

Telecommunications Analyst and Network System Analyst. Plan, install, configure, upgrade, and maintain networks as needed. If the system requires it, they ensure that external communications and connectivity are available.

Computer Systems Security Officer (CSSO). The CSSO has a requirement to review system change requests, review and in some cases coordinate Change Impact Assessments, participate in the Configuration Control Board process, and conduct and report changes that may be made that affect the security posture of the system.

11.3 DELIVERABLES

11.3.1 In-Process Review Report
The In-Process Review (IPR) occurs at predetermined milestones, usually quarterly but at least once a year. The performance measures should be reviewed, along with the health of the system, and measured against the baseline measures. Ad hoc reviews should be called when deemed necessary by either party. Document the results of this review in the IPR Report. Appendix C-35 provides a template for the IPR Report.
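Measuring performance against the baseline, as the IPR requires, can be scripted when the measures are numeric. A minimal sketch follows; the metric names and the 10% tolerance are assumptions for illustration, not requirements of this guidance.

```python
# Illustrative sketch: flags performance measures that deviate from the
# baseline by more than a chosen tolerance, for discussion at an
# In-Process Review. Metric names and the tolerance are assumptions.

def ipr_deviations(baseline, current, tolerance=0.10):
    """Yield (metric, baseline_value, current_value) beyond tolerance."""
    for metric, base_value in baseline.items():
        value = current.get(metric)
        if value is None or base_value == 0:
            continue  # unmeasured metric or no meaningful baseline
        if abs(value - base_value) / base_value > tolerance:
            yield metric, base_value, value

baseline = {"avg_response_sec": 2.0, "daily_transactions": 10_000}
current = {"avg_response_sec": 3.1, "daily_transactions": 9_800}
for metric, base, now in ipr_deviations(baseline, current):
    print(f"Review flag: {metric} baseline={base} current={now}")
```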

11.3.2 User Satisfaction Review Report
User Satisfaction Reviews can be used as a tool to determine current user satisfaction with the performance capabilities of an existing application or to initiate a proposal for a new system. This review can be used as input to the IPR Report. Appendix C-36 provides a template for the User Satisfaction Review Report.

11.4 ISSUES FOR CONSIDERATION

11.4.1 Documentation
It cannot be stressed enough that proper documentation for the duties performed by each individual responsible for system maintenance and operation should be kept up to date. For smooth day-to-day operations of any system, as well as for disaster recovery, each individual's role, duties, and responsibilities should be outlined in detail. A system administrator's journal or log of changes performed to the system software or hardware is invaluable in times of emergency. Operations manuals, journals, and logs should be readily accessible to maintenance personnel.

11.4.2 Guidelines in Determining New Development from Maintenance
Changes to the system should meet the following criteria in order for the change or modification request to be categorized as maintenance; otherwise, it should be considered new development:

- Estimated costs of the modification are below maintenance costs;
- Proposed changes can be implemented within 1 system year;
- Impact to the system is minimal or necessary for the accuracy of system output.
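These criteria can be applied mechanically when triaging change requests. The sketch below is illustrative; the three tests come directly from the list above, while the parameter names are assumptions.

```python
# Illustrative sketch applying the three criteria above. A request is
# categorized as Maintenance only if it meets all of them; otherwise it is
# treated as New Development. The parameter names are assumptions.

def categorize_change(estimated_cost, maintenance_cost_ceiling,
                      duration_years, impact_minimal_or_accuracy):
    is_maintenance = (
        estimated_cost < maintenance_cost_ceiling  # below maintenance costs
        and duration_years <= 1                    # within 1 system year
        and impact_minimal_or_accuracy             # minimal impact / accuracy
    )
    return "Maintenance" if is_maintenance else "New Development"

print(categorize_change(40_000, 100_000, 0.5, True))    # Maintenance
print(categorize_change(250_000, 100_000, 2.0, False))  # New Development
```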

11.4.3 Security Re-certification
Federal IT security policy requires all IT systems to be accredited prior to being placed into operation and at least every three years thereafter, or prior to the implementation of a significant change.
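The three-year rule can be tracked with a simple date calculation. A minimal sketch, assuming accreditation dates are known and that a significant change moves the due date forward (leap-day edge cases are ignored here):

```python
# Illustrative sketch: computes when re-accreditation is due under the
# three-year rule and flags a lapsed accreditation.
from datetime import date

def reaccreditation_due(last_accredited, significant_change=None):
    """Return the next accreditation due date.

    A significant change requires re-accreditation before implementation,
    so it effectively advances the due date to the change date.
    """
    three_years = date(last_accredited.year + 3,
                       last_accredited.month, last_accredited.day)
    if significant_change and significant_change < three_years:
        return significant_change
    return three_years

due = reaccreditation_due(date(2022, 5, 1))
print("Re-accreditation due:", due,
      "(LAPSED)" if due < date.today() else "")
```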

11.5 PHASE REVIEW ACTIVITY

Review activities occur several times throughout this phase. Each time the system is reviewed, one of the following three determinations will be made:

- The system is operating as intended and meeting performance expectations;
- The system is not operating as intended and needs corrections or modifications;
- The users are or are not satisfied with the operation and performance of the system.

The In-Process Review shall be performed to evaluate system performance, user satisfaction with the system, adaptability to changing business needs, and new technologies that might improve the system. This review is diagnostic in nature and can trigger a project to re-enter a previous SDLC phase. Any major system modifications needed after the system has been implemented will follow the SDLC process from planning through implementation.

CHAPTER 12: DISPOSITION PHASE

12.0 OBJECTIVE
12.1 TASKS AND ACTIVITIES
12.1.1 Prepare Disposition Plan
12.1.2 Archive or Transfer Data
12.1.3 Archive or Transfer Software Components
12.1.4 Archive Life Cycle Deliverables
12.1.5 End the System in an Orderly Manner
12.1.6 Dispose of Equipment
12.1.7 Prepare Post-Termination Review Report
12.2 ROLES AND RESPONSIBILITIES
12.3 DELIVERABLES
12.3.1 Disposition Plan
12.3.2 Post-Termination Review Report
12.3.3 Archived System
12.4 ISSUES FOR CONSIDERATION
12.5 PHASE REVIEW ACTIVITY

12.0 OBJECTIVE

The Disposition Phase will be implemented either to eliminate a large part of a system or, as in most cases, to close down a system and end the life cycle process. The system in this phase has been declared surplus and/or obsolete and will be scheduled for shutdown. The emphasis of this phase is to ensure that data, procedures, and documentation are packaged and archived in an orderly fashion, making it possible to reinstall and bring the system back to operational status if necessary, and to retain all data records in accordance with company policies regarding retention of electronic records. The Disposition Phase represents the end of the system's life cycle. A Disposition Plan (Appendix C-37) shall be prepared to address all facets of archiving, transferring, and disposing of the system and its data. Particular emphasis shall be given to proper preservation of the data processed by the system, so that the data are effectively migrated to another system or archived in accordance with applicable records management regulations and policies for potential future access. The system disposition activities preserve information not only about the current production system but also about the evolution of the system through its life cycle.

12.1 TASKS AND ACTIVITIES

The objectives for all tasks identified in this phase are to retire the system, software, hardware, and data. The tasks and activities actually performed depend on the nature of the project; the disposition activities are performed at the end of the system's life cycle. The disposition activities ensure the orderly termination of the system and preserve vital information about the system so that some or all of it may be reactivated in the future if necessary. Particular emphasis shall be given to proper preservation of the data processed by the system, so that the data are effectively migrated to another system or disposed of in accordance with applicable records management and program area regulations and policies for potential future access. These activities may be expanded, combined, or deleted, depending on the size of the system.

12.1.1 Prepare Disposition Plan
The Disposition Plan must be developed and implemented. The Disposition Plan will identify how the termination of the system and data will be conducted, and when, as well as the system termination date, the software components to be preserved, the data to be preserved, the disposition of remaining equipment, and the archiving of life-cycle products.

12.1.2 Archive or Transfer Data
The data from the old system will have to be transferred into the new system or, if it is obsolete, archived.

12.1.3 Archive or Transfer Software Components
Similar to the data that is archived or transferred, the software components will need to be transferred to the new system or, if that is not feasible, disposed of.

12.1.4 Archive Life Cycle Deliverables
A great deal of documentation went into developing the application or system. This documentation needs to be archived where it can be referenced if needed at a later date.

12.1.5 End the System in an Orderly Manner
Follow the Disposition Plan for the orderly breakdown of the system, its components, and the data within.

12.1.6 Dispose of Equipment
If the equipment can be used elsewhere in the organization, recycle it. If it is obsolete, notify the property management office to excess all hardware components.

12.1.7 Conduct Post-Termination Review
This review will be conducted at the end of the Disposition Phase, and again within 6 months after disposition of the system, by the Project Manager.
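The archiving tasks above (12.1.2 through 12.1.4) are easier to verify later if each archived file is recorded with a checksum. A minimal sketch follows, assuming a simple JSON manifest; neither the manifest format nor the file names are prescribed by the Disposition Plan template.

```python
# Illustrative sketch: copies files into an archive directory and records
# SHA-256 checksums in a manifest, so the archive can be verified if the
# system must be reactivated. The manifest format is an assumption.
import hashlib
import json
import shutil
from pathlib import Path

def archive_files(files, archive_dir):
    archive = Path(archive_dir)
    archive.mkdir(parents=True, exist_ok=True)
    manifest = {}
    for f in map(Path, files):
        manifest[f.name] = hashlib.sha256(f.read_bytes()).hexdigest()
        shutil.copy2(f, archive / f.name)
    (archive / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return manifest

# Example usage (file and directory names are hypothetical):
# archive_files(["disposition_plan.doc", "system_data.csv"], "archive/2024")
```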

12.2 ROLES AND RESPONSIBILITIES

Project Manager. The Project Manager is responsible and accountable for the successful execution of the Disposition Phase activities.

Data Administrator. The Disposition Plan may direct that only certain system data be archived. The Data Administrator would identify that data and assist technical personnel with the actual archiving process. The Data Administrator may be involved with identifying data which, due to its sensitive nature, must be destroyed, and would also be involved with identifying and migrating data to a new or replacement system.

Security Managers. The security managers will need to make sure that all access authority has been eliminated for the users. Any users who only use this application should be removed from the system, while others who use other applications as well may still need access to the overall system, but not to the application being shut down. If another application is taking the place of this application, the security managers should coordinate with the new security managers.

12.3 DELIVERABLES

The following deliverables are initiated and finalized during the Disposition Phase:

12.3.1 Disposition Plan
The objectives of the plan are to end the operation of the system in a planned, orderly manner and to ensure that system components and data are properly archived or incorporated into other systems. This will include removing the active support provided by the operations and maintenance organizations. The users will need to play an active role in the transition, and all concerned groups will need to be kept informed of the progress and target dates. The decision to proceed with disposition will be based on recommendations and approvals from an In-Process Review, or on a date (or time period) specified in the System Boundary Document (SBD). Appendix C-37 provides a template for the Disposition Plan. This plan will include a statement of why the application is no longer supported, a description of the replacement or upgrade, a list of tasks/activities (transition plan) with estimated dates of completion, and the notification strategy. Additionally, it will include the responsibilities for future residual support issues, such as identifying media alternatives if technology changes; new software product transition plans and alternative support issues (once the application is removed); parallel operations of the retiring and the new software product; archiving of the software product, associated documentation, logs, and code; and accessibility of the archive, data protection identification, and audit applicability.

12.3.2 Post-Termination Review Report
A report at the end of the process that details the findings of the Disposition Phase review. It includes details of where to find all products and documentation that have been archived. Appendix C-38 provides a template for the Post-Termination Review Report.

12.3.3 Archived System
The packaged set of data and documentation containing the archived application.

12.4 ISSUES FOR CONSIDERATION

Updates to the security plans for archiving, and to the contingency plans to reestablish the system, should be in place. All documentation about the application, system logs, and configuration will be archived along with the data and a copy of the Disposition Plan.

12.5 PHASE REVIEW ACTIVITY

The Post-Termination Review shall be performed after the end of this final phase. This phase-end review shall be conducted within 6 months after disposition of the system. The Post-Termination Review Report documents the lessons learned from the shutdown and archiving of the terminated system.

CHAPTER 13: ALTERNATIVE SDLC WORK PATTERNS

13.0 OBJECTIVE
13.1 STANDARD SDLC METHODOLOGY (FULL SEQUENTIAL WORK PATTERN)
13.2 ALTERNATIVE WORK PATTERNS
13.2.1 Alternative Work Pattern Selection
13.3 WORK PATTERN DESCRIPTIONS AND EXHIBITS
13.3.1 Reduced Effort (Small Application Development) Work Pattern
13.3.2 Rapid Application Development Work Pattern
13.3.3 Pilot Development Work Pattern
13.3.4 Managed Evolutionary Development Work Pattern
13.3.5 O&M Small-Scale Enhancement Work Pattern
13.3.6 O&M Project Work Pattern
13.3.7 Procurement of Commercial-Off-the-Shelf (COTS) Product

13.0 OBJECTIVE

An important objective of an SDLC methodology is to provide flexibility that allows tailoring of the methodology to suit the characteristics of a particular system development effort. One methodology does not fit all sizes and types of system development efforts. For instance, it is not reasonable to expect a very small system development project to produce 35 deliverables, and a different approach might be needed for a high-risk system development project that has very uncertain functional and technical requirements at the beginning of development. Therefore, the SDLC methodology provides for a full sequential SDLC work pattern and for alternative SDLC work patterns. It also provides a work pattern to accommodate the acquisition and implementation of a commercial-off-the-shelf (COTS) product.

13.1 STANDARD SDLC METHODOLOGY (FULL SEQUENTIAL WORK PATTERN)

The standard SDLC methodology, as represented in Chapter 3, Initiation Phase, through Chapter 12, Disposition Phase, is termed a full sequential work pattern. This full sequential work pattern creates the maximum number of deliverables (see Figure 1-3, Planning Documents, in Chapter 1, Introduction). During the Planning Phase, the Project Manager, in conjunction with the System Proponent, evaluates the documentation of the system concept, as contained in supporting documentation, and determines whether the standard SDLC methodology should be used or an alternative work pattern should be selected instead. The selection of work patterns is based on the selection criteria and the judgement of the involved management. In general, the full sequential work pattern is used if the development type is new or a large modification; if the system development size is large; if the associated mission is critical; if the system development risks are normal or high; and if the complexity of the system development effort is normal or high. In the full sequential work pattern, a project is divided into phases; the phases are conducted sequentially, and the initiation of each phase depends on a decision to continue that is made during a formal review near the end of the immediately preceding phase. This work pattern reflects a very conservative approach to project management. A complete set of system development activities, tasks, and deliverables to be considered for inclusion in this full sequential work pattern is presented in Chapters 3 through 12.

13.2 ALTERNATIVE WORK PATTERNS

Alternative work patterns provide flexibility in the SDLC methodology. An alternative work pattern permits a project planner to tailor a project management plan to meet the specific needs of the project and still conform to SDLC standards. Alternative work patterns also provide the opportunity for methods specialists to predefine the permitted tailoring, ensuring that a project planner's customization does not overlook necessary activities or include unneeded ones. The alternative work patterns provided are as follows:

- Reduced effort work pattern
- Rapid application development (RAD) work pattern
- Pilot development work pattern
- Managed evolutionary development (MED) work pattern
- O&M small-scale enhancement work pattern
- O&M project work pattern
- Commercial-off-the-shelf (COTS) acquisition

The following are operational definitions for terms associated with these types of projects:

Proof of Concept - A project that defines what will be proven and determines both the criteria and methods for the proof of concept. Once the proof of concept is demonstrated, a prototype project may be initiated.

Prototype - An ensemble that is suitable for the evaluation of design, performance, and production potential and is not required to exhibit all the properties of the final system. Prototypes are installed in a laboratory setting and are not installed in the field, nor are they available for operational use. They are maintained only long enough to establish technical feasibility.

Proof of Technical Feasibility - The result of a successful prototype.

Pilot - A system installed in the field to demonstrate that the system's concept is feasible in an operational environment. The pilot system installed has a predetermined subset of functions and is used by a bounded subset of the user population. Its features may not all function smoothly. The goal of the pilot is to provide feedback that will be used to refine the final version of the product. The pilot will be fielded for a preset, limited period of time only, to permit pilot systems to be evaluated for operational feasibility.

Proof of Operational Feasibility - The result of a successful pilot.

Production - A fully documented system, built according to the SDLC, fully tested, with full functionality, accompanied by training and training materials, and with no restrictions on its distribution or duration of use.

13.2.1 Alternative Work Pattern Selection
During the Planning Phase, the Project Manager, along with the System Proponent, evaluates the system concept definition documentation and uses the criteria below to select either the full sequential work pattern or an alternative work pattern. (Note: the criteria for selection may not be mutually exclusive - for example, complexity and size - because size may be a factor of complexity.)

Determine the type of system development:
- New development
- Modification or enhancement of existing system
- Prototype system
- Procurement of a COTS system
- O&M small scale enhancement
- O&M project

Determine the cost of the system development project using the guidelines below:

- Class 1 - Very large projects with estimated development or life cycle costs of more than $20 million
- Class 2 - Large projects with estimated development or life cycle costs of between $10 million and $20 million
- Class 3 - Mid-size projects with estimated development or life cycle costs of between $2.5 million and $10 million
- Class 4 - Small projects with estimated development or life cycle costs of between $500,000 and $2.5 million
- Class 5 - O&M enhancement with estimated life cycle costs of less than $500,000

Determine the mission-criticality of the system development:
- Most critical (C1) to non-mission critical (C5)

Determine the risk of inability to achieve the project objectives from highest (D1) to lowest (D4), based on one or more of the following:

- Risk due to high uncertainty associated with the system's requirements, the technology that the system will employ, or the way that the system will affect the business process
- Risk due to high visibility resulting from public or political attention or requirements
- Risk due to highly compressed development time (low turnaround time) because of emergency, legal, business, or political requirements

Determine the complexity of the system development effort from highest (E1) to lowest (E3), based on one or more of the following:
- The project affects many organizations or functional areas within the company, thus adding a level of difficulty to the definition of requirements.
- The project results from business process reengineering, dramatically altering the use of information technology.
- The project requires new or rapidly advancing technology.
- The project requires a long time for development.

Use Exhibit 13-1, Alternative Work Pattern Selection, to select the work pattern appropriate for your project.

Exhibit 13-1: Alternative Work Pattern Selection
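The cost classes listed earlier can be computed directly, and the remaining criteria collected alongside them as inputs to the Exhibit 13-1 lookup. Because the exhibit's mapping table is not reproduced in the text, the sketch below stops at assembling the selection criteria: the dollar thresholds come from the cost class definitions above, while the record structure is an assumption for illustration.

```python
# Illustrative sketch: derives the cost class from the thresholds defined
# above and bundles the five selection criteria used with Exhibit 13-1
# (the exhibit's work pattern mapping itself is not reproduced here).
from dataclasses import dataclass

def cost_class(estimated_cost):
    """Map estimated development or life cycle cost (USD) to Class 1-5."""
    if estimated_cost > 20_000_000:
        return 1  # very large project
    if estimated_cost >= 10_000_000:
        return 2  # large project
    if estimated_cost >= 2_500_000:
        return 3  # mid-size project
    if estimated_cost >= 500_000:
        return 4  # small project
    return 5      # O&M enhancement

@dataclass
class SelectionCriteria:
    development_type: str     # e.g., "new", "modification", "COTS"
    cost_class: int           # Class 1-5, from cost_class()
    mission_criticality: str  # C1 (most critical) to C5
    risk: str                 # D1 (highest) to D4 (lowest)
    complexity: str           # E1 (highest) to E3 (lowest)

criteria = SelectionCriteria("new", cost_class(4_000_000), "C2", "D2", "E2")
print(criteria)  # inputs to the Exhibit 13-1 work pattern selection
```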

13.3 WORK PATTERN DESCRIPTIONS AND EXHIBITS

The subsequent sections provide descriptions for each alternative work pattern, including tasks and activities, required deliverables, and reviews for each relevant phase. Deliverables for alternative work patterns are often created, revised, and finalized across multiple life cycle phases, as with the full sequential work pattern.

13.3.1 Reduced Effort (Small Application Development) Work Pattern
A Reduced Effort work pattern combines some phases, eliminates some of the deliverables otherwise required, and combines some of the reviews to reduce project formality in situations where a conservative approach is not necessary. This is illustrated in Exhibit 13-2, Reduced Effort Work Pattern. Exhibit 13-3, Reduced Effort Work Pattern Summary, shows, by phase, the tasks, the deliverables required, and the types of reviews required. Chapters 3 through 12 identify the tasks cited in the exhibit.

Exhibit 13-2: Reduced Effort Work Pattern

Exhibit 13-3: Reduced Effort Work Pattern Summary

13.3.2 Rapid Application Development Work Pattern
In the RAD work pattern, the Initiation, System Concept Development, and Planning Phases are conducted according to the full sequential work pattern. The Requirements Analysis and Design Phases are conducted iteratively, using prototyping tools to rough out and incrementally improve the understanding of requirements through a series of design prototypes. The Functional Requirements Document and the System Design Document are started at the beginning of this activity but are not completed until the end of the Design Phase; these documents use, as much as possible, the outputs of the prototyping tool. In the process, an initial set of requirements is used to create an initial version of the application, giving users visibility into the look, feel, and navigation capabilities of the application. User evaluation and feedback provide revisions to the statements of requirements, and the process is repeated, always involving the user. Typically, three iterations will result in a completely understood set of requirements, but the iteration process can continue until successive differences in requirements are so small as to not be noticeable. Following the completion of design prototyping, the full sequential work pattern is again engaged to accomplish the Development, Integration and Test, and Implementation Phases. The only deviation is the possibility of using some of the generated code from the design prototype to start the development process. This is illustrated in Exhibit 13-4, RAD Work Pattern. Exhibit 13-5, RAD Work Pattern Summary, shows, by phase, the tasks, the deliverables required, and the types of reviews required.

Exhibit 13-4: RAD Work Pattern

Exhibit 13-5: RAD Work Pattern Summary

13.3.3 Pilot Development Work Pattern
In a pilot development work pattern, either a full sequential work pattern or a RAD work pattern is used to go from Initiation through the Development Phase. Because of the risk involved in the complexity, visibility, and uncertainty of the project, decisions regarding full deployment of the application are held until field trials and evaluations have proven the concept. The field trials and evaluations accomplish portions of user acceptance testing and implementation; after they are complete, possibly requiring a year or more, the remainder of implementation is completed. This means that migration to the Operations and Maintenance Phase may be deferred for more than a year. This is illustrated in Exhibit 13-6, Pilot Development Work Pattern. Exhibit 13-7, Pilot Development Work Pattern Summary, shows, by phase, the tasks, the deliverables required, and the types of reviews required.

Exhibit 13-6: Pilot Development Work Pattern

Exhibit 13-7: Pilot Development Work Pattern Summary

13.3.4 Managed Evolutionary Development Work Pattern
The MED approach is particularly suited to situations where existing business processes will be altered considerably and where the full set of detailed functional requirements cannot be reliably defined early in the development life cycle. The MED discipline supports iterative definition, development, and deployment of subsystems by defining system-level functional and data requirements and a modular system architecture, which allows for subsequent refinement, development, and deployment of subsystems that can evolve to meet future business needs. Frequently, a particular release level containing partial, but not complete, functionality is referred to as a build. During the Planning and Requirements Analysis Phases, an entire series of successive builds is planned, each of which gets designed, developed, tested, and implemented. This is illustrated in Exhibit 13-8, MED Work Pattern. Exhibit 13-9, MED Work Pattern Summary, shows, by phase, the tasks, the deliverables required, and the types of reviews required.

Exhibit 13-8: MED Work Pattern

MED Incremental and Evolutionary Strategies. The MED-based development process combines an evolutionary development strategy with incremental delivery. System development using MED proceeds by defining a bounded vision of a future system and then iteratively refining the reengineered business processes, information system requirements, and technical architecture. The incremental delivery strategy within MED is used to encapsulate part of the overall system as a subsystem to be built and deployed. Subsystems are constructed when there is sufficient confidence that they will provide a cohesive, user-validated set of business functionality. As usage experience is obtained, lessons learned are fed back into each subsystem by improving each in subsequent versions of the system.

MED Program Management. The Planning Phase is where the Project Manager, with the approval of the System Proponent, determines that the functional requirements will best be fulfilled by assigning them to distinct, but functionally related, subsystems. The Requirements Analysis Phase will be split into a Systems Requirements Analysis Phase, to define overall system requirements and architecture, and a Subsystem Requirements Analysis Phase, for detailed definition of each subsystem. At the completion of the Requirements Analysis Phase, each subsystem begins its own branch of the life cycle to define a target subsystem architecture, business process, and requirements.

Risk Management within the Context of MED. The order in which a MED-based work pattern proceeds is heavily influenced by risk. MED is designed to focus explicitly on the development decisions around risk derived from uncertainty about the target system in the areas of business process, system requirements, technical architecture, cost, and schedule. This risk management strategy addresses two fundamental time-related risks: uncertainty and dependence. Uncertainty is reduced by acting on gathered information, such as from prototypes, simulations, studies, or models; dependence is eliminated by structuring parts of the system to be independently deployed as subsystems. To accomplish this, the Project Manager must define the target system, which consists of determining the system boundaries, specifying the target characteristics, assessing system risks, and defining and executing risk mitigation activities; and develop subsystems, which is done as projects within the overall program. Projects are initiated once system-level risk is reduced to an acceptable level, as documented in the risk management plan.

Reviews and Approvals. When using the MED work pattern, there is an explicit milestone for the system-level requirements and a milestone for each subsystem's requirements.

Exhibit 13-9: MED Work Pattern Summary

Acknowledgment for the MED Methodology. The U.S. Patent and Trademark Office developed and documented the MED methodology. A full description of MED may be found in the Managed Evolutionary Development Guidebook, Second Edition, June 1993, produced by the U.S. Patent and Trademark Office. For further information, contact: The Office of the Assistant Commissioner for Information Systems, U.S. Patent and Trademark Office, 2121 Crystal Drive, Arlington, VA 22202, (Fax) 703-305-9369.

13.3.5 O&M Small-Scale Enhancement Work Pattern
This work pattern is appropriate for small-scale revisions to existing applications. Each O&M enhancement must be initiated by the Project Manager and must be tracked by a System Change Request (SCR). Typically, multiple O&M enhancements will be bundled into a planned software release identified by a version number. The intent is to limit the use of this work pattern to simple, small-scale changes that will require no more than 160 labor hours for Initiation, Concept Development, Requirements Analysis, Design, Development, Integration and Testing, and Implementation, including any needed updates to product documentation and any required user training. Exhibit 13-10, O&M Enhancement Work Pattern Summary, shows, by phase, the tasks, deliverables, and types of reviews required.

Exhibit 13-10: O&M Enhancement Work Pattern Summary

13.3.6 O&M Project Work Pattern
This work pattern is appropriate for O&M maintenance of existing applications. Each O&M project must be specifically organized and staffed for the purpose of conducting corrective, adaptive, or perfective maintenance on installed applications, including conversions needed to support upgrades and/or changes to the hardware and software operating environment. User help desk support and other small O&M enhancements may also be provided and delivered by the assigned project team. System revisions and conversions will be accomplished on an as-needed basis at a fixed level of support and within a corresponding fixed annual operating budget. The intent is to limit the use of this work pattern to ongoing support activities that typically do not fit within the definition of a systems development or enhancement project. Exhibit 13-11, O&M Project Work Pattern Summary, shows, by activity, the tasks, deliverables, and types of reviews required.

Exhibit 13-11: O&M Project Work Pattern Summary

13.3.7 Procurement of Commercial-Off-the-Shelf (COTS) Product
This effort is designed for the purchase of COTS products to be used by the company within the framework of existing or planned systems; see Exhibit 13-12. These COTS products may be used at a single site, or they may be installed to operate across the company or a significant portion of the company. The table in Exhibit 13-1, Alternative Work Pattern Selection, indicates when this pattern is to be used.

Exhibit 13-12: COTS Acquisition Work Pattern Summary

NOTE: May not be required, depending on the nature of the COTS product. This document is more likely to be required for systems made up entirely of COTS products that require significant customization and integration. The decision to develop this document should be made during this life cycle phase.

Additional Work Patterns. Project teams should endeavor to follow the full sequential work pattern or one of the alternatives described above. However, from time to time, new project environments or system requirements will evolve into which these work patterns do not fit. In those cases, the Project Manager, working with the QA Manager, shall develop and document proposed new work patterns, submit them as updates to this guidance document, and use them as the basis of the project management plans.

APPENDIX A GLOSSARY
-A-

Acceptance Test - Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system. See User Acceptance Test.

Accreditation - Formal declaration by an accrediting authority that a computer system is approved to operate in a particular security mode using a prescribed set of safeguards.

Acquisition Plan - A formal document showing how all hardware, software, and telecommunications capabilities, along with resources, are to be obtained during the life of the project.

Activity - A unit of work to be completed in order to achieve the objectives of a work breakdown structure. See Work Breakdown Structure. In process modeling, an activity requires inputs and produces outputs. See Input/Output.

Adaptability - The ease with which software satisfies differing system constraints and user needs.

Adaptive Maintenance - Maintenance performed to change a system in order to keep it usable in a changed environment.

Alias - A name of a data entity or data attribute that is different from its official name.

Allocated Baseline - The approved documentation that describes the design of the functional and interface characteristics that are allocated from a higher-level configuration item. See Baseline.

Alternative Work Patterns - Work patterns that permit tailoring a project plan to meet the specific needs of the project and still conform to SDLC standards.

Application - A system providing a set of services to solve some specific user problem.

Application Model - A model used to graphically and textually represent the required data and processes within the scope of the application development project.

Application Software - Software specifically developed to perform a specific type of work; for example, a word processor. Compare to System Software.

Architecture - The structure of a computer system, either a part or the entire system; can be hardware, software, or both.

Audit - A formal review of a project (or project activity) for the purpose of assessing compliance with contractual obligations.

Availability - The degree to which a system (or system component) is operational and accessible when required for use.

-B-

Backup - v. To copy software files onto a different medium that can be stored separately from the original files and used to restore the original files, if needed; the act of creating these files. n. The set of copied files.

Baseline - A work product (such as software or documentation) that has been formally reviewed, approved, and delivered and can only be changed through formal change control procedures. See Allocated Baseline, Functional Baseline, Operational Baseline, Product Baseline.

Benchmark - A standard against which measurements or comparisons can be made.

Bottom-up - The process of designing a system by designing the low-level components first and then integrating them into large subsystems until the complete system is designed; bottom-up testing tests these low-level components first, using software drivers to simulate the higher-level components. See Top-down.

Build - An operational version of a software product incorporating a specified subset of the complete system functionality. See Version.

Business Process Reengineering - The redesign of an organization, culture, and business processes to achieve significant improvements in costs, time, service, and quality.

-C-

Capability - A measure of the expected use of a system.

Capacity - A measure of the amount of input a system can process and/or the amount of work a system can perform; for example, the number of users or the number of reports to be generated.

Certification - Comprehensive analysis of the technical and non-technical security features and other safeguards of a system to establish the extent to which a particular system meets a set of specified security requirements.

Change - In Configuration Management, a formally recognized revision to a specified and documented requirement. See Change Control, Change Directive, Change Impact Assessment, Change Implementation Notice.

Change Control - In Configuration Management, the process by which a change is proposed, evaluated, approved (or disapproved), scheduled, and tracked. See Change, Change Directive, Change Impact Assessment, Change Implementation Notice.

Change Control Documents - Formal documents used in the configuration management process to track, control, and manage the change of configuration items over the system's development or maintenance life cycle. See System Change Request, Change Impact Assessment, Change Directive, and Change Implementation Notice.

Change Directive - The formal Change Control Document used to implement an approved change. See Change Control Documents.

Change Impact Assessment - The formal Change Control Document used to determine the effect of a proposed change before a decision is made to implement it. See Change Control Documents.

Change Implementation Notice - The formal Change Control Document used to report the actual implementation of a change in a system. See Change Control Documents.

Client/Server - A network application in which the end-user interaction with the system (server) is through a workstation (client) that executes some portion of the application.

Code - v. To transform the system logic and data from design specifications into a programming language. n. The computer program itself; pseudo-code is code written in an English-like logical representation, source code is code written in a programming language, and object code is code written in machine language.

Compatibility - A measure of the ability of two or more systems (or system components) to exchange information and use the information that has been exchanged. Same as Interoperability.

Component - General term for a part of a software system (hardware or software). See Product.

Computer-aided Software Engineering - An electronic tool that is used to assist in the design, development, and coding of software. See Tools.

Computer System Security Officer - The person who ensures that all Computer and Telecommunications Security (C&TS) activities are undertaken at the user site. These include security activities for planning; awareness training; risk management; configuration management; certification and accreditation; compliance assurance; incident reporting; and guidance and procedures.

Concept of Operations - A formal document that describes the user's environment and process relative to a new or modified system, and defines the users if not already known. Called a CONOPS.

Configuration - The functional and/or physical collection of hardware and software components as set forth in formal documentation. Also, the requirements, design, and implementation that define a particular version of a system (or system component). See Configuration Control, Configuration Item, Configuration Management, Configuration Management Plan, Configuration Status Accounting.

Configuration Audit - Formal review of a project for the purpose of assessing compliance with the Configuration Management Plan.
Configuration Control - The process of evaluating, approving (or disapproving), and coordinating changes to hardware/software configuration items.
Configuration Control Board - The formal entity charged with the responsibility of evaluating, approving (or disapproving), and coordinating changes to hardware/software configuration items.
Configuration Item - An aggregation of hardware and/or software that satisfies an end-use function and is designated by the customer for configuration management; treated as a single entity in the configuration management process. A component of a system requiring control over its development throughout the life cycle of the system.
Configuration Management - The discipline of identifying the configuration of a hardware/software system at each life cycle phase for the purpose of controlling changes to the configuration and maintaining the integrity and traceability of the configuration through the entire life cycle.
Configuration Management Plan - A formal document that establishes configuration management practices in a systems development/maintenance project. See Configuration Management.
Configuration Status Accounting - The recording and reporting of the information that is needed to effectively manage a configuration, including a listing of the approved configuration identification, the status of proposed changes to the configuration, and the implementation status of approved changes. See Configuration.
Contingency Plan - A formal document that establishes continuity of operations processes in case of a disaster. Includes names of responsible parties to be contacted, data to be restored, and the location of such data.
Conversion - The process of converting (or exchanging) data from an existing system to another hardware or software environment.
Conversion Plan - A formal document that describes the strategies involved in converting data from an existing system to another hardware or software environment.
Corrective Maintenance - Maintenance performed to correct faults in hardware or software.
Correctness - The degree to which a system or component is free from faults in its specification, design, and implementation.
Cost Analysis - Presents the costs for design, development, installation, operation and maintenance, and consumables for the system to be developed.
Cost-Benefit Analysis - The comparison of alternative courses of action, or alternative technical solutions, for the purpose of determining which alternative would realize the greatest cost benefit; cost-benefit analysis is also used to determine if the system development or maintenance costs still yield a benefit or if the effort should stop.
Cost Estimate - The process of determining the total cost associated with a software development or maintenance project, including the effort, time, and labor required.
Criteria - A standard on which a decision or judgment may be based; for example, acceptance criteria to determine whether or not to accept a system.
Critical Path - Used in project planning; the sequence of activities (or tasks) that must be completed on time to keep the entire project on schedule; therefore, the time to complete a project is the sum of the time to complete the activities on the critical path.
Critical Review Board - A formal board that guides and monitors the development of requirements that affect current and future systems.
Customer - An individual or organization who specifies the requirements for and formally accepts delivery of a new or modified system; one who pays for the system. The customer may or may not be the user; see User.
-D-
Data Dictionary - A repository of information about data, such as its meaning, relationships to other data, origin, usage, and format. A data dictionary manages data categories such as aliases, data elements, data records, data structures, data stores, data models, data flows, data relationships, processes, functions, dynamics, size, frequency, resource consumption, and other user-defined attributes.
Database Administrator - The person responsible for managing data at a logical level, namely data definitions, data policies, and data security.
Database - A collection of logically related data stored together in one or more computerized files; an electronic repository of information accessible via a query language interface.
Database Management System - A software system that controls storing, combining, updating, retrieving, and displaying data records.
Data Store - A repository of data; a file.
Demonstration - A procedure to verify system requirements that cannot be tested otherwise.
Deliverable - A formal product that must be delivered to (and approved by) the customer; called out in the Task Order.
Delivered System Documentation - Includes the Software Development Document, User Manual, Maintenance Manual, and Operations Manual.
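As a worked illustration of the critical path defined above, the following Python sketch computes the minimum project duration as the longest chain of dependent activities; the task names and durations are invented for this example.

    # The critical-path duration is the longest total duration through the
    # dependency graph; all tasks and durations below are invented.
    from functools import cache

    tasks = {  # task: (duration in days, prerequisite tasks)
        "requirements": (10, ()),
        "design":       (15, ("requirements",)),
        "coding":       (20, ("design",)),
        "test_plan":    (5,  ("requirements",)),
        "testing":      (10, ("coding", "test_plan")),
    }

    @cache
    def finish(task: str) -> int:
        """Earliest finish = own duration + latest finish among prerequisites."""
        duration, preds = tasks[task]
        return duration + max((finish(p) for p in preds), default=0)

    print(f"critical-path duration: {max(finish(t) for t in tasks)} days")  # 55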

Design Phase - The period of time in the systems development life cycle during which the designs for architecture, software components, interfaces, and data are created, documented, and verified to satisfy system requirements.
Development Phase - The period of time in the systems development life cycle to convert the deliverables of the Design Phase into a complete system.
Disposition Phase - The time when a system has been declared surplus and/or obsolete and the task performed is either eliminated or transferred to other systems.
Disposition Plan - A formal plan providing the full set of procedures necessary to end the operation of the system in a planned, orderly manner and to ensure that system components and data are properly archived or incorporated into other systems.
Document - Written and/or graphical information describing, defining, specifying, reporting, or certifying activities, requirements, procedures, reviews, or results. See Product.
-E-
Effectiveness - The degree to which a system's features and capabilities meet the user's needs.
Efficiency - The degree to which a system or component performs its designated functions with minimum consumption of resources.
Element - A subsystem, component, or unit; either software or hardware, as defined by the project.
Enhancement - Maintenance performed to provide additional functional or performance requirements.
Entity - Represents persons, places, events, things, or abstractions that are relevant to the company and about which data are collected and maintained.
-F-
Fault Tolerance - The ability of a system (or system component) to continue normal operation despite the presence of hardware or software faults.
Feasibility - The extent to which the benefits of a new or enhanced system will exceed the total costs and also satisfy the business requirements.
Feasibility Study - A formal study to determine the feasibility of a proposed system (new or enhanced) in order to make a recommendation to proceed or to propose alternative solutions.
Field Test - Testing that is performed at the user site.
Fielded System - An operational system that is installed at the user site.

Full Sequential - The systems development work pattern defined by the ten life cycle phases described in the SDLC Guidance Document.
Functionality - The relative usefulness of a functional requirement as it satisfies a business need.
Functional Baseline - The approved documentation that describes the functional characteristics of the system, subsystem, or component. See Baseline.
Functional Configuration Audit - An audit to ensure that the functional requirements have been met by the delivered configuration item. See Audit.
Functional Requirement - A requirement that specifies a function (activity or behavior, based on a business requirement) that the system (or system component) must be capable of performing.
Functional Requirements Document - A formal document of the business (functional) requirements of a system; the baseline for system validation.
Functional Test - Testing that ignores the internal mechanism of a system (or system component) and focuses solely on the outputs generated in response to selected inputs and execution conditions. Same as black-box testing.
-G-
Gantt Chart - A list of activities plotted against time, showing start time, duration, and end time; also known as a bar chart.
-H-
Hardware - The physical portion of a system (or subsystem), including the electrical components. Compare to Software.
Host - The computer that controls communications in a network or that administers a database; the computer on which a program or file is installed; a computer used to develop software intended for another computer. See Target.
-I-
Implementation - Installing and testing the final system, usually at the user (field) site; the process of installing the system.
Implementation Phase - The period of time in the systems development life cycle when the system is installed, made operational, and turned over to the user (for the beginning of the Operations and Maintenance Phase).
Implementation Plan - A formal document that describes how the system will be installed and made operational.
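For the Gantt chart defined above, a tiny plain-text rendering can make the idea concrete; the activities and dates below are invented for illustration.

    # Render each activity as a bar against time, one character per day.
    activities = [           # (name, start_day, duration_days)
        ("requirements", 0, 10),
        ("design",       10, 15),
        ("coding",       25, 20),
        ("testing",      45, 10),
    ]

    width = max(start + length for _, start, length in activities)
    for name, start, length in activities:
        bar = " " * start + "#" * length + " " * (width - start - length)
        print(f"{name:<14}|{bar}|")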

Information Technology - The application of engineering solutions in order to develop computer systems that process data.
In-Process Review - Formal review conducted (usually annually) during the Operations and Maintenance Phase to evaluate system performance, user satisfaction with the system, adaptability to changing business needs, and new technologies that might improve the system.
In-Process Review Report - A formal document detailing the findings of the In-Process Review. See In-Process Review.
Input/Output - The process of entering information into a system (input) and its subsequent results (output). A hardware device that enables input (for example, a keyboard or card reader) and output (for example, a monitor or printer). Collectively known as I/O.
Inspection - A semiformal to formal technique in which software requirements, design, or code are examined in detail by a person or group other than the originator to detect errors. See Peer Review, Walk-through.
Integrated Product Team - A multidisciplinary group of people who support the Project Manager in the planning, execution, delivery, and implementation of life cycle decisions for the project.
Integration Document - A formal document that describes how the software components, hardware components, or both are combined and the interaction between them.
Integration and Test Phase - Life cycle phase during which subsystem integration, system, security, and user acceptance testing are conducted; done prior to the Implementation Phase.
Integration Test - Testing in which software components, hardware components, or both are combined and tested to evaluate the interaction between them.
Integrity - The degree to which a system (or system component) prevents unauthorized access to, or modification of, computer programs or data.
Iterative - A procedure in which repetition of a sequence of activities yields results successively closer to the desired state; for example, an iterative life cycle in which two or more phases are repeated until the desired product is developed.
Interface - To interact or communicate with another system (or system component). An interface can be software and/or hardware. See User Interface.
Interface Control Document - Specifies the interface between a system and an external system(s).

Interoperability - A measure of the ability of two or more systems (or system components) to exchange information and use the information that has been exchanged. Same as Compatibility.
Information Technology Systems Security Certification and Accreditation - A formal set of documents showing that the installed security safeguards for a system are adequate and work effectively.
-L-
Lessons Learned - A formal or informal set of examples collected from experience (for example, experience in system development) to be used as input for future projects to know what went well and what did not; collected to assist other projects.
Library - A configuration-controlled repository for system components (for example, documents and software).
Life Cycle - All the steps or phases a project passes through during its system life, from concept development to disposition. There are ten life cycle phases in the SDLC.
-M-
Maintainability - The ease with which a software system (or system component) can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment.
Maintenance - In software engineering, the activities required to keep a software system operational after implementation. See Adaptive Maintenance, Corrective Maintenance, Enhancement, Perfective Maintenance.
Maintenance Manual - A formal document that provides systems maintenance personnel with the information necessary to maintain the system effectively.
Maintenance Review - A formal review of both the completed and pending changes to a system with respect to the benefits achieved by completing the recommended changes; also provides information about the amount of maintenance required based on activity to date. Part of the Post-Implementation Review Report.
Measurement - In project management, the process of collecting, analyzing, and reporting metrics data.
Methodology - A set of methods, procedures, and standards that define the approach for completing a system development or maintenance project.
Metrics - A quantitative measure of the degree to which a system, component, or process possesses a given attribute.
Migration - Porting a system, subsystem, or system component to a different hardware platform.

Milestone - In project management, a scheduled event that is used to measure progress against a project schedule and budget.
Mission - The goals or objectives of an organization or activity.
Model - A simplified representation or abstraction (for example, of a process, activity, or system) intended to explain its behavior.
Module - In system design, a software unit that is a logically separate part of the entire program. See Unit.
-N-
Non-technical - Relating to agreements, conditions, and/or requirements affecting the management activities of a project. Compare to Technical.
-O-
Operational Baseline - Identifies the system accepted by the users in the operational environment after a period of onsite test using production data. See Baseline.
Operations Manual - A formal document that provides a detailed operational description of the system and its interfaces.
Operations and Maintenance (O&M) Phase - The period of time in the systems development life cycle during which a software product is employed in its operational environment, monitored for satisfactory performance, and modified as necessary to correct problems or to respond to changing requirements.
-P-
Peer Review - A formal review where a product is examined in detail by a person or group other than the originator. See Inspection, Walk-through.
Perfective Maintenance - Software maintenance performed to improve the performance, maintainability, or other attributes of a computer program.
Performance Measures - A category of quality measures that address how well a system functions.
Performance Measurement and Capacity Planning - A set of procedures to measure and manage the capacity and performance of information systems equipment and software.
Performance Review - Formal review conducted to evaluate the compliance of a system or component with specified performance requirements.
Phase - A defined stage in the systems development life cycle; there are ten phases in the full, sequential life cycle.
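One widely used performance measure of the kind described above is availability, often estimated as MTBF / (MTBF + MTTR). A back-of-the-envelope sketch follows; the failure and repair figures are invented.

    # Availability = MTBF / (MTBF + MTTR): the fraction of time the system is
    # operational. Both figures below are assumptions for illustration.
    mtbf_hours = 720.0   # mean time between failures (assumed: 30 days)
    mttr_hours = 4.0     # mean time to repair (assumed)

    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(f"estimated availability: {availability:.4%}")  # about 99.45%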

Phase Review - A formal review conducted during a life cycle phase, usually at the end of the phase or at the completion of a significant activity.
Physical Configuration Audit - An audit to ensure that all physical attributes listed in the design requirements have been met by the configuration item being delivered.
Pilot - An alternative work pattern to develop a system to demonstrate that the concept is feasible in an operational environment. Pilots are used to provide feedback to refine the final version of the product and are fielded for a preset, limited period of time. Compare to a Prototype.
Planning Phase - The period of time in the systems development life cycle in which a comprehensive plan for the recommended approach to the systems development or maintenance project is created. Follows the System Concept Development Phase, in which the recommended approach is selected.
Post-Implementation Review - A formal review to evaluate the effectiveness of the systems development effort after the system is operational (usually for at least six months).
Post-Implementation Review Report - A formal document detailing the findings of the Post-Implementation Review. See Post-Implementation Review.
Post-Termination Review - A formal review to evaluate the effectiveness of a system disposition.
Post-Termination Review Report - A formal document detailing the findings of the Post-Termination Review. See Post-Termination Review.
Privacy Act Notice - For any system that has been determined to be an official System of Records (in terms of the criteria established by the Privacy Act), a special notice must be published in the Federal Register that identifies the purpose of the system; describes its routine use and what types of information and data are contained in its records; describes where and how the records are located; and identifies who the System Manager is.
Procedure - A series of steps (or instructions) required to perform an activity. Defines "how" to perform an activity. Compare to Process.
Process - A finite series of activities as defined by its inputs, outputs, controls (for example, policy and standards), and the resources needed to complete the activity. Defines "what" needs to be done. Compare to Procedure.
Process Model - A graphical representation of a process.
Process Review - A formal review of the effectiveness of a process.
Product - General term for an item produced as the result of a process; can be a system, subsystem, software component, or document.

Product Baseline - The set of completed and accepted system components and the corresponding documentation that identifies these products. See Baseline.
Production - A fully documented system, built according to the SDLC, fully tested, with full functionality, accompanied by training and training materials, and with no restrictions on its distribution or duration of use.
Product Review - A formal review of a software product (or document) to determine if it meets its requirements. Can be conducted as a peer review.
Program Specification - A description of the design logic in a software component, generally using pseudo-code. See Code.
Project - The complete set of activities associated with all life cycle phases needed to complete a systems development or maintenance effort from start to finish (may include hardware, software, and other components); the collective name for this set of activities. Typically a project has its own funding, cost accounting, and delivery schedule.
Project Management - The process of planning, organizing, staffing, directing, and controlling the development and/or maintenance of a system.
Project Management Plan - A formal document detailing the project scope, activities, schedule, resources, and security issues. The Project Management Plan is created during the Planning Phase and updated through the Disposition Phase.
Project Manager - The person with the overall responsibility and authority for the day-to-day activities associated with a project.
Prototype - A system development methodology to evaluate the design, performance, and production potential of a system concept (it is not required to exhibit all the properties of the final system). Prototypes are installed in a laboratory setting and not in the field, nor are they available for operational use. Prototypes are maintained only long enough to establish feasibility. Compare to a Pilot.
-Q-
Quality - The degree to which a system, component, product, or process meets specified requirements.
Quality Assurance - A discipline used by project management to objectively monitor, control, and gain visibility into the development or maintenance process.
Quality Assurance Plan - A formal plan to ensure that delivered products satisfy contractual agreements, meet or exceed quality standards, and comply with approved systems development or maintenance processes.
Quality Assurance Review - A formal review to ensure that the appropriate Quality Assurance activities have been successfully completed; held when a system is ready for implementation.

-R-
Rapid Application Development - In a RAD work pattern, the Requirements Definition and Design phases are iteratively conducted; in this process, a rough set of requirements is used to create an initial version of the system, giving users visibility into the look, feel, and system capabilities. User evaluation and feedback provide revisions to the requirements, and the process is repeated until the requirements are considered to be complete.
Records Disposition Schedule - Federal regulations require that all records no longer needed for the conduct of the regular business of the agency be disposed of, retired, or preserved in a manner consistent with official Records Disposition Schedules.
Records Management - The formal set of system records (for example, files, data) that must be retained during the Disposition Phase; the plan for collecting and storing these records.
Recoverability - The ability of a software system to continue operating despite the presence of errors.
Reengineering - Rebuilding a process to suit some new purpose; for example, a new business process.
Regression Test - In software maintenance, the rerunning of test cases that previously executed correctly in order to detect errors introduced by the maintenance activity.
Release - A configuration management activity wherein a specific version of software is made available for use.
Reliability - The ability of a system (or system component) to perform its required functions under stated conditions for a specified period of time.
Requirement - A capability needed by a user; a condition or capability that must be met or possessed by a system (or system component) to satisfy a contract, standard, specification, or other formally imposed documents.
Requirements Analysis Phase - The period of time in the systems development life cycle during which the requirements for a software product are formally defined, documented and analyzed.
Requirements Management - Establishes and controls the scope of system development efforts and facilitates a common understanding of system capabilities between the System Proponent, developers, and future users.
Requirements Traceability Matrix - Provides a method for tracking the functional requirements and their implementation through the development process.
Resource - In management, the time, staff, capital and money available to perform a service or build a product; also, an asset needed by a process step to be performed.
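As one lightweight way to realize the Requirements Traceability Matrix defined above, the sketch below links each functional requirement to the design element, code module, and test case that implement and verify it; every identifier is invented for illustration.

    # A minimal requirements traceability matrix; None marks a missing link.
    rtm = {
        "FR-001": {"design": "SDD 3.1", "module": "login.py",  "test": "TC-101"},
        "FR-002": {"design": "SDD 3.2", "module": "report.py", "test": "TC-102"},
        "FR-003": {"design": "SDD 4.0", "module": None,        "test": None},
    }

    # Trace downward: flag requirements not yet implemented or verified.
    for req_id, links in rtm.items():
        missing = [kind for kind, artifact in links.items() if artifact is None]
        status = "traced" if not missing else "missing " + ", ".join(missing)
        print(f"{req_id}: {status}")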

Reverse Engineering - A software engineering approach that derives the design and requirements of a system from its code; often used during the maintenance phase of a system with no formal documentation.
Review - A formal process at which an activity or product (for example, code, document) is presented for comment and approval; reviews are conducted for different purposes, such as peer reviews, user reviews, and management reviews (usually for approval), or done at a specific milestone, such as phase reviews (usually to report progress).
Review Report - A formal document that records the results of a review.
Risk - A potential occurrence that would be detrimental to the project; risk is both the likelihood of the occurrence and the consequence of the occurrence.
Risk Assessment - The process of identifying areas of risk, the probability of the risk occurring, and the seriousness of its occurrence; also called risk analysis.
Risk Management - The integration of risk assessment and risk reduction in order to optimize the probability of success (that is, minimize the risk).
Risk Management Plan - A formal document that identifies project risks and specifies the plans to reduce these risks.
Role - A defined responsibility (usually a task) to be carried out by one or more individuals.
-S-
Scope - The established boundary (or extent) of what must be accomplished; during planning, this defines what the project will consist of (and, just as important, what the project will not consist of).
Security - The establishment and application of safeguards to protect data, software, and hardware from accidental or malicious modification, destruction, or disclosure.
Security Risk Assessment - A tool that permits developers to make informed decisions relating to the acceptance of identified risk exposure levels or the implementation of cost-effective measures to reduce those risks. See Requirements Analysis Phase.
Security Test - A formal test performed on an operational system, based on the results of the security risk assessment, in order to evaluate compliance with security and data integrity guidelines and address security backup, recovery, and audit trails. Also called Security Testing and Evaluation (ST&E).
Segment - A major part of a larger system or subsystem; in software, a self-contained portion of a computer program.
Sensitive System - A system or subsystem that requires an IT Systems Security Certification and Accreditation; contains data requiring security safeguards.
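The Risk definition above combines likelihood and consequence; a common quantification is risk exposure, the probability of occurrence multiplied by the cost of occurrence. The sketch below uses invented risks and figures purely for illustration.

    risks = [                 # (description, probability, cost if it occurs, $)
        ("key contractor staff turnover", 0.30, 200_000),
        ("requirements growth",           0.50,  80_000),
        ("hardware delivery slip",        0.10,  50_000),
    ]

    for name, prob, cost in risks:
        print(f"{name}: exposure ${prob * cost:,.0f}")

    total = sum(prob * cost for _, prob, cost in risks)
    print(f"total risk exposure (a candidate cost reserve): ${total:,.0f}")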

Sensitivity Analysis - Assesses how the relative magnitude of change in certain factors or assumptions affects inputs (costs) and outcomes (benefits).
Software - Computer programs (code), procedures, documentation, and data pertaining to the operation of a computer system. Compare to Hardware.
Software Development Document - Contains all of the information pertaining to the development of each unit or module, including the test cases, software, test results, approvals, and any other items that will help explain the functionality of the software.
Standard - Mandatory requirements to prescribe a disciplined, uniform approach to software development and maintenance activities.
Subsystem - A collection of components that meets the definition of a system but is considered part of a larger system. See System.
Survivability - A measure of the ability of a system to continue to function, especially in the presence of errors.
System - A collection of components (hardware, software, interfaces) organized to accomplish a specific function or set of functions; generally considered to be a self-sufficient item in its intended operational use.
System Administrator - The person responsible for planning a system installation and for the use of the system by other users.
System Boundary Document - A formal document created during the System Concept Development Phase which documents the business case for initiating the system or project. It contains the responsible persons, projected costs associated with the investment, risks, assumptions, scope, schedule, milestones, etc.
System Component - Any of the discrete items that comprise a system or subsystem. See Subsystem, System.
System Change Request - The formal Change Control Document used to request a change to a system baseline, provide information concerning the requested change, and act as the documented approval mechanism for the change. See Change Control Documents.
System Concept Development Phase - Phase that begins after the need or opportunity has been identified in the Initiation Phase. The approaches for meeting this need are reviewed for feasibility and appropriateness (for example, cost-benefit analysis) and documented in the SBD.
System Design Document - A formal document that describes the system architecture, file and database design, interfaces, and detailed hardware/software design; used as the baseline for system development.

Systems of Records Notice - Notice that is required to be published for any system that has been determined to be an official System of Records (in terms of the criteria established by the Privacy Act).
System Proponent - The organization benefiting from or requesting the project; frequently thought of as the "customer" for that project.
Systems Administration Manual - A manual that serves the purpose of an Operations Manual in a distributed (client/server) application. See Operations Manual, Client/Server.
Systems Analysis - In systems development, the process of studying and understanding the requirements (customer needs) for a system in order to develop a feasible design.
Systems Development Life Cycle - A formal model of a hardware/software project that depicts the relationship among activities, products, reviews, approvals, and resources. Also, the period of time that begins when a need is identified (initiation) and ends when the system is no longer available for use (disposition).
Systems Manager - The individual, or group, responsible for post-implementation system maintenance, configuration management, change control, and release control. This may or may not include members of the development team.
System Security Plan - A formal document that establishes the processes and procedures for identifying all areas where security could be compromised within the system (or subsystem).
System Software - Software designed to facilitate the operation of a computer system and associated computer programs (for example, operating systems, code compilers, utilities). Compare to Application Software.
System Test - The process of testing an integrated hardware/software system to verify that the system meets its documented requirements.
-T-
Tailor - A formal procedure to modify a process, standard, procedure, or work pattern to fit a specific use or business need.
Target - The computer that is the destination for a host communication; see Host. In programming, a language into which another language is to be translated.
Task - In project management, the smallest unit of work subject to management accountability; a work assignment for one or more project members fulfilling a role, as defined in a work breakdown structure.
Technical - Relating to agreements, conditions, and/or requirements affecting the functionality and operation of a system. Compare to Non-technical.

Test - The process of exercising the product to identify differences between expected and actual results and performance. Typically testing is bottom-up: unit test, integration test, system test, and acceptance test.
Test Case - A specific set of test data and associated procedures developed for a particular test.
Test Files/Data - Files/data developed for the purpose of executing a test; becomes part of a test case. See Test Case.
Testability - A metric used to measure the characteristics of a requirement that enable it to be verified during a test.
Test Analysis Approval Determination - The form attached to the Test Analysis Report as a final result of the test reviews for all testing levels above the integration test. See Test Analysis Report.
Test Analysis Report - Formal documentation of the software testing as defined in the Test Plan.
Test and Evaluation (T&E) - T&E occurs during all major phases of the development life cycle, beginning with system planning and continuing through the Operations and Maintenance Phase; it ensures standardized identification, refinement, and traceability of the requirements as they are allocated to the system components.
Test and Evaluation Master Plan - The formal document that identifies the tasks and activities so the entire system can be adequately tested to assure a successful implementation.
Test Problem Report - Formal documentation of problems encountered during testing; the form is attached to the Test Analysis Report. See Test Analysis Report.
Test Readiness Review - A formal phase review to determine that the test procedures are complete and to ensure that the system is ready for formal testing.
Tools - Software application products that assist in the design, development, and coding of software. Also called CASE tools; see Computer-aided Software Engineering.
Top-down - An approach that takes the highest level of a hierarchy and proceeds through progressively lower levels; compare to Bottom-up.
Traceability - In requirements management, the identification and documentation of the derivation path (upward) and allocation path (downward) of requirements in the hierarchy.
Training - The formal process of depicting, simulating, or portraying the operational characteristics of a system or system component in order to make someone proficient in its use.

Training Plan - A formal document that outlines the objectives, needs, strategy, and curriculum to be addressed for training users of the new or enhanced system.
-U-
Unit - The smallest logical entity specified in the design of a software system; must be of sufficient detail to allow the code to be developed and tested independently of other units. See Module.
Unit Test - In testing, the process of ensuring that the software unit executes as intended; usually performed by the developer.
Upgrade - A new release of a software system for the purpose of including a new version of one or more system components.
Usability - The capability of the software product to be understood, learned, and used, and to be of value to the user, when used under specified conditions.
User - An individual or organization who operates or interacts directly with the system; one who uses the services of a system. The user may or may not be the customer. See Customer.
User Acceptance Test - Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the user to determine whether or not to accept the system. See Acceptance Test.
User Interface - The software, input/output (I/O) devices, screens, procedures, and dialogue between the user of the system (people) and the system (or system component) itself. See Interface.
User Manual - A formal document that contains all essential information for the user to make full use of the new or upgraded system.
User Satisfaction Review - A formal survey used to gather the data needed to analyze current user satisfaction with the performance capabilities of an existing system or application; administered annually, or as needed.
-V-
Validation - The process of determining the correctness of the final product, system, or system component with respect to the user's requirements. Answers the question, "Am I building the right product?" Compare to Verification.
Verifiability - A measure of the relative effort to verify a requirement; a requirement is verifiable only if there is a finite, cost-effective process to determine that the software product or system meets the requirement.
Verification - The process of determining whether the products of a life cycle phase fulfill the requirements established during the previous phase; answers the question, "Am I building the product right?" Compare to Validation.
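To illustrate the unit test defined above, here is a minimal developer-run test using Python's standard unittest module; parse_version is an invented unit used only for this sketch.

    import unittest

    def parse_version(text: str) -> tuple:
        """The unit under test: parse 'major.minor.patch' into integers."""
        major, minor, patch = (int(part) for part in text.split("."))
        return (major, minor, patch)

    class ParseVersionTest(unittest.TestCase):
        def test_well_formed(self):
            self.assertEqual(parse_version("2.10.3"), (2, 10, 3))

        def test_rejects_garbage(self):
            with self.assertRaises(ValueError):
                parse_version("not-a-version")

    if __name__ == "__main__":
        unittest.main()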

Verification and Validation Plan - A formal document that describes the process to verify and validate the requirements. Created during the Planning Phase and updated throughout the SDLC.
Version - An initial release or re-release of a computer software configuration item, associated with a complete compilation or recompilation of the computer software configuration item; sometimes called a build. See Build.
Version Description Document - A formal document that describes the exact version of a configuration item and its interim changes. It is used to identify the current version; provides a "packing list" of what is included in the release.
Volatility - In requirements management, the degree to which requirements are expected to change throughout the systems development life cycle; opposite of stability.
-W-
Walk-through - A software inspection process, conducted by peers of the software developer, to evaluate a software component. See Inspection, Peer Review.
Work Breakdown Structure - In project management, a hierarchical representation of the activities associated with developing a product or executing a process; a list of tasks; often used to develop a Gantt Chart.
Work Pattern - The complete set of life cycle phases, activities, deliverables, and reviews required to develop or maintain a software product or system; a formal approach to systems development.

APPENDIX B ACRONYMS
-A-
ABL  Allocated Baseline
ACSN  Advanced Change Study Notice
ADP  Automated Data Processing
AIS  Automated Information System
APB  Acquisition Project Baseline
-B-
BPR  Business Process Reengineering
-C-
CASE  Computer-Aided Software Engineering
CBA  Cost-Benefit Analysis
CD-ROM  Compact Disk-Read Only Memory
CI  Configuration Item
CM  Configuration Management
CMM  Capability Maturity Model
COCOMO  Constructive Cost Model
CONOPS  Concept of Operations
COTR  Contracting Officers' Technical Representative
COTS  Commercial Off-The-Shelf
CRUD  Create, Read, Update, Delete
CSA  Configuration Status Accounting
CSCI  Computer Software Configuration Items

CWBS  Contract Work Breakdown Structure
-D-
DBA  Database Administrator
DBMS  Database Management System
-E-
ECP  Engineering Change Proposal
-F-
FAR  Federal Acquisition Regulations
FASA  Federal Acquisition Streamlining Act
FBL  Functional Baseline
FCA  Functional Configuration Audit
FIPS  Federal Information Processing Standards
FOIA/PA  Freedom of Information Act/Privacy Act
FRD  Functional Requirements Document
-G-
GAO  General Accounting Office
GPRA  Government Performance and Results Act
GUI  Graphical User Interface
-H-
HCI  Hardware Configuration Items
-I-
ICD  Interface Control Document
IEEE/EIA  Institute of Electrical and Electronics Engineers/Electronic Industries Assoc.
IMSS  Information Management and Security Staff
IPT  Integrated Product Team

IRM  Information Resources Management
ISO  International Organization for Standardization
IT  Information Technology
ITA  Information Technology Architecture
ITIB  Information Technology Investment Board
-J-
JCL  Job Control Language
JMD  Justice Management Division
-L-
LAN  Local Area Network
-O-
OBDB  Offices, Boards, Divisions and Bureaus
OMB  Office of Management and Budget
-P-
PBL  Product Baseline
PCA  Physical Configuration Audit
PERT  Program Evaluation Review Technique
PRRA  Paperwork Reduction Reauthorization Act
-Q-
QA  Quality Assurance
QAR  Quality Action Report
-R-
RM  Risk Management
RTM  Requirements Traceability Matrix
-S-

SBD  System Boundary Document
SBU  Sensitive-but-Unclassified
SCR  System Change Request
SDLC  Systems Development Life Cycle
SEI  Software Engineering Institute
SEMP  Systems Engineering Management Plan
SLIM  Software Life Cycle Management
SOW  Statement of Work
SPM  Systems Program Manager
ST&E  Security Test and Evaluation
-T-
TAAD  Test Analysis Approval Determination
T&E  Test and Evaluation
TIM  Technical Interchange Meetings
TPM  Technical Performance Measurement
TRM  Technical Reference Model
-V-
VDD  Version Description Document
V&V  Verification and Validation
-W-
WBS  Work Breakdown Structure

APPENDIX C-1 CONCEPT PROPOSAL


The Concept Proposal is the first document to be completed in the Systems Development Life Cycle (SDLC). Its purpose is to highlight where strategic goals are not being met or where mission performance needs to be improved. The Program Manager/Sponsor prepares the Concept Proposal for component CIO and Executive Review Board (ERB) approval. The Concept Proposal should be two to five pages in length and cover the following content:
1.0 TITLE OF INVESTMENT

The Concept Proposal should identify the system/project by name.
1.1 Originator

Identify the originator organization and sponsor/manager names with signatures. If an Integrated Product Team (IPT) is used, provide the names of all team members.
1.2 Date

The Concept Proposal should be dated.
2.0 DESCRIPTION OF INVESTMENT

Provide a brief description of the investment: is it a software development effort, a modernization of a current system, a purchase of COTS software, an integration effort, etc.? This should be a descriptive account of how and why the system concept was envisioned. All system development efforts should be addressed, including early prototypes or pilots.
2.1 Mission/Goals Investment Supports

Cite the latest Component Strategic and Performance Plans. Explain how this investment will fit into those plans. Describe the magnitude and importance of the investment in relation to those goals. Discuss the timing of the investment.
2.2 Existing Structure

Evaluate where you are currently versus where you should be (i.e., baseline assessment). Analyze reasons for any performance gap. Relate to your current architecture and your IT Strategic Plans.
2.3 Benefits

Explain why expectations are not being met (i.e., what problems are occurring or anticipated). Identify the internal or external stakeholders or beneficiaries and what needs to be done to satisfy them. Identify specific areas where performance does not meet expectations. Identify the high-level performance goals of this investment.
2.4 Warranted Investment

Does this investment support core/priority mission functions that need to be performed by the government? Is there another private sector or governmental source that can better support the function? Does the investment support simplified or redesigned work processes that reduce costs, improve effectiveness, and make maximum use of commercial off-the-shelf (COTS) technology?
2.5 Funding

Provide a rough order of magnitude estimate for the total acquisition costs. Identify the anticipated source of the investment throughout its life cycle.

CONCEPT PROPOSAL OUTLINE
Cover Page
Table of Contents
1.0 Title of Investment
1.1 Originator
1.2 Date
2.0 Description of Investment
2.1 Mission/Goals Investment Supports
2.2 Existing Structure
2.3 Benefits
2.4 Warranted Investment
2.5 Funding

APPENDIX C-2 SYSTEM BOUNDARY DOCUMENT


The System Boundary Document (SBD) provides guidance on how to establish the boundaries of an information technology (IT) project. It establishes a formal agreement within the Component Executive Review Board (ERB) on the high-level requirements, costs, and schedule for an IT project. It records management decisions to mitigate and to accept a level of risk in the business, technological, and project management environments. System development projects frequently experience cost overruns and schedule slippages for a variety of reasons, including changing requirements and poor resource and schedule estimating. While requirements, schedule, and resource changes will occur at times, these changes must be managed and controlled. Documented system boundaries are a tool for Component management to use to provide this control. The SBD is applicable to all Component IT projects. The SBD captures the business functions, goals, and objectives that the IT project will satisfy. It also captures critical success factors, assumptions, and constraints for the IT project, as well as performance measures that provide criteria to judge whether the IT project satisfied the business goals and objectives. The SBD shall be approved by the Component ERB.
1.0 OVERVIEW

The SBD records the essential properties of the envisioned system early in its development and provides guidance on its achievement.
1.1 Purpose

Identify the system/project to which this SBD applies and the strategic goals and missions it will support.
1.2 Background

The SBD is meant to help senior executives communicate among themselves and reach consensus on what they intend to achieve by pursuing this effort, and why. Provide information in this section on previous decisions or previous system development projects that are relevant to understanding the current project.
2.0 MISSION

The strategic planning information in this section should refer to component strategic planning documents, such as the Strategic and Performance Plans, the JMD/component IT Strategic Plan, and the Component Enterprise Architecture.
2.1 System Mission

Describe the mission for this system. Convey why this system is necessary for your component. Relate the system mission to the company and/or component mission.

The system mission will be discussed in terms of how it addresses the opportunities and deficiencies identified in the Concept Proposal.
2.2 Objectives

State the long-term component business objectives expected to be achieved by using the system.
2.3 Goals

State any quantifiable targets that your component wishes to achieve, and the time frame for reaching them, as related to the proposed system. Goals must support one or more system objectives.
2.4 Critical Success Factors

Identify the critical factors for the system to be considered a success in achieving the business goals provided above. They are defined as conditions which must exist (or must be prevented). How will you know if this project is a success?
2.5 Performance Measures

For the business objectives above, describe how progress on their achievement will be measured and reported.
3.0 REQUIREMENTS STATEMENT

This section provides the preliminary documentation of functional and informational requirements based on the opportunities and deficiencies identified in the Concept Proposal.
3.1 Existing Methods and Procedures

This paragraph should provide a brief description of the current methods and procedures being employed to satisfy existing information requirements. Summarize the conclusions of any analysis that was performed on the ability of the existing system to satisfy the mission, objectives, goals, and critical success factors described above. Describe the products and services delivered to current customers.
3.2 Required Capabilities

3.2.1 Users' Functional Requirements
Describe user requirements in functional terms. The description should be in narrative form and written from the users' perspective. Graphical representations may be used if they help the user express the requirements and their interrelationships. When a requirement is the improvement of existing methods and procedures, state the extent of anticipated improvement and the relationship to previously stated opportunities and deficiencies. Make sure that all of the functions included in the system are identified and that the functions are described in sufficient detail that an accurate estimate can be made of the resources required.
3.2.2 Users' Information Needs
Describe user requirements in terms of the information needed to perform their functions. The description should be in narrative form and written from the users' perspective. A subject area diagram or high level entity relationship diagram may be used if this helps the user describe the kind of information required.
3.2.3 Sensitivity of Data
Describe the requirements for protecting sensitive data. Sensitive information must be protected in accordance with the Computer Security Act of 1987 (Public Law 100-235).
3.2.4 Network Requirements
Describe all potential network support requirements, including the number of users, projected volumes and types of data to be exchanged, and the frequency of data exchange. These estimates should show the order of magnitude of support required at a level of accuracy and detail comparable to the functional requirements and information needs.
3.2.5 Interface Requirements
Describe the proposed system's relationship with existing and other proposed systems. Include the purpose of the requirement for any interface.
3.2.6 Technical Framework
Describe the potential impact to the component or infrastructure and the potential architectural and security implications by responding to the following:
3.2.6.1 Description
Define the life-cycle stages for all planned systems in this project - circle all that apply. [New, Upgrade, Replacement]
Who are the stakeholders for your project? [JMD, Components, Federal Agencies, Public, Other]
3.2.6.2 Resources for Development and O&M
What resources will you need to administer servers? [In-house, contractor, JMD, other, don't know]
What resources will you need to administer applications? [In-house, contractor, JMD, other, don't know]
What resources will you need to administer configuration management? [In-house, contractor, JMD, other, don't know]

3.2.6.3 Network Connectivity
What kind of connectivity will you require? Circle all that apply. [Internal Networks, External Government, External Public]
If using one of the Internal Networks, please specify. [JCON, JCN, JMD SMO LAN, other, don't know]

3.2.6.4 Application Requirements
Are you planning on using any of these JMD computing platforms? Circle all that apply. [OS/390, Unix, NT, Other, Not Sure]
If not, are you planning on purchasing new hardware? Y/N
Are you planning to do a pilot test system? Y/N
Do you expect your architecture to be compatible with your existing hardware/software? [Y, N, Don't Know]
What application software are you planning to use? Circle all that apply. [Oracle, JAVA, COTS, Other, Don't Know]
If COTS, name the product(s) (example: PeopleSoft).
3.2.6.5 Email
Will the system(s) have any components which rely on email services? Y/N
3.2.6.6 Integration
Will your system(s) need to integrate with other Department systems? Y/N
If yes, select all owners that apply. [JMD applications, Component applications, Federal Agency applications, Other]
For each selected, please provide the specific name of the owner, the system name(s), and whether the system currently exists or is planned.
3.2.6.7 Capacity
Are you planning on using any resource-intensive technologies? Circle all that apply. [Streaming Video, Video Teleconferencing, Other]
If other, please specify ___________________________________.
3.2.6.8 Client Platforms
Are you planning to make your application available from a JCON workstation? Y/N
3.2.6.9 Security
What is the highest sensitivity of data for your system(s)? [Public, Confidential, Secret]
Have you identified security requirements for all systems? Y/N _______
If yes, please provide web link _________________________________________
If no, when will you do this? (insert checklist of phases)

If your system(s) will need to have access to the Internet, please select the type of access: (insert checklist of access types)
4.0 BUSINESS ASSUMPTIONS AND CONSTRAINTS
4.1 Organizational Structure

Identify the potential impacts on the existing organizational structure. Identify constraints on the scope of change to the current organization. Discuss users, developers, maintainers, and any other organizational units affected by the system. Define all constraints that the new organizational structure may impose on the design and fielding of the system. Identify assumptions about who the users will be and where they will be physically located. Indicate any considerations for training, reassignment, etc.
4.2 Impact of Automation

Identify how automation impacts activities that are currently being performed. Discuss decisions and assumptions that partition functions between people and automation. This establishes guidance on the functions that require manual intervention and how automation should support them. Reference the rationale for these decisions, such as cost benefit or other reasons (union rules, re-training, computing limitations, etc.).
4.3 Legal

Discuss any legal considerations that may affect the system development or use, such as pending legislation.
4.4 Security

Discuss any security considerations that may affect the system development or use.
4.5 Facility

Discuss any facility considerations that may affect the system development or use. The cost of acquiring a facility is part of the life cycle cost of an IT system.
5.0 SYSTEM ASSUMPTIONS AND CONSTRAINTS
5.1 Technology Impact Analysis

Summarize the conclusions from an analysis of the technological changes and state the preferred technological approach for the system. Reference any studies or actions taken to assure that the approach is feasible and can support the objective.
5.2 Acceptable Alternatives

State any explicit flexibility in the application of technological approaches that system designers should consider. Adaptation and growth of the system should be discussed.
5.3 System Upgrade

For a project which includes upgrade/improvement of existing systems, describe the following from an information technology perspective as they relate to the requirements stated in section 3, Requirements Statement: functional improvements (new capabilities); improvements of degree (upgrading existing capabilities); increased timeliness (decreased response or processing time); and the elimination or reduction of existing capabilities that are no longer needed.
6.0 PROGRAM MANAGEMENT ASSUMPTIONS AND CONSTRAINTS
6.1 Organizational Support

Identify assumptions and constraints about the level of support to be provided by the organizations within and external to the project team that will participate in the development effort. Reference the PM Charter, which in turn will reference any memorandums of understanding between these participants.
6.2 Budget

Identify assumptions and constraints affecting the funds available for the project. For example, the effect that variances from projected fee collections could have on the project in future fiscal years, the effect of proposed changes to the Federal budget, or the assumption that the system can be developed within the approved budget.
6.3 Schedule

Identify externally imposed dates which affect the project. For example, the system will comply with legislation by xxxx date. Identify assumptions and constraints about tasks or events on the critical path of the project schedule.
6.4 Facility

Identify assumptions and constraints about physical facilities the project will have available for project use. Refer to agreements about sharing access to the operational space (i.e., third shift testing, office relocation, power, HVAC, etc.).
6.5 Acquisition

Identify any considerations for procurement activities for the system, including lead times and external/internal coordination.
6.6 Other Projects

Identify dependencies between this project and other development or modification projects that relate to it. Refer to any Memoranda of Understanding between the projects' Project Managers or System Development Managers.

7.0 PROJECT COST, SCHEDULE, AND PERFORMANCE

This section establishes the component ERB commitment to the schedule, funding and cost, and performance metrics for the project (i.e., the Investment Baseline).
7.1 Schedule

Provide the major milestones and dates for the project. Specify the dates as a range if appropriate.
7.2 Approved Budget

State the approved budget for the life of the project by fiscal year. Indicate the different accounts (working capital fund, direct appropriation, debt collection fund, fee accounts, etc.) which are providing funds to the project.
7.3 Project Life Cycle Cost Estimate (LCCE)

State the estimated project LCCE by fiscal year, broken down into cost categories. The major cost categories are: personnel, COTS components of the IT (hardware and software), infrastructure components, facilities, and supplies/materials. Personnel costs shall show component staff and contractors separately and be broken down into work breakdown structure (WBS) elements suitable for the investment.
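To make the breakdown concrete, the hedged sketch below (all figures invented) totals a life cycle cost estimate by fiscal year across the categories named above.

    # A toy LCCE: costs in $K by fiscal year and cost category (all invented).
    lcce = {
        "FY01": {"personnel": 900, "COTS": 400, "infrastructure": 150,
                 "facilities": 60, "supplies": 20},
        "FY02": {"personnel": 700, "COTS": 100, "infrastructure": 80,
                 "facilities": 60, "supplies": 15},
        "FY03": {"personnel": 500, "COTS": 50, "infrastructure": 40,
                 "facilities": 60, "supplies": 10},
    }

    for fy, costs in lcce.items():
        print(f"{fy}: total ${sum(costs.values()):,}K")

    print(f"life cycle total: ${sum(sum(c.values()) for c in lcce.values()):,}K")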

7.4 Performance
State the measurable performance improvements anticipated from this project. Performance measures should be based on a stated period of time so that progress over time can be demonstrated. Examples might include system response times for the public/users, system availability, number of criminals denied access to guns due to data in the system, application processing times, etc.

7.5 Project Risks
Discuss potential risks, including their probability, their costs, and the mitigation strategies, and the reasonableness/acceptability of the costs of these risks. Indicate if the cost figures have been adjusted to accommodate the risk calculations.

7.6 Return on Investment
Discuss the quantitative and non-quantitative measures used to indicate that the investment will provide a justifiable return relative to the investment level required. Indicate what quantitative and non-quantitative measures of valuation have been used to determine the return on investment (ROI) to the organization.
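As one example of the quantitative valuation measures mentioned above, the sketch below computes a simple ROI and a net present value for an invented stream of costs and benefits; the discount rate and all figures are assumptions for illustration only.

    # net_flows[t] = benefits minus costs in year t, in $K (all invented).
    net_flows = [-1000.0, 150.0, 400.0, 400.0, 400.0, 300.0]
    discount_rate = 0.07  # assumed

    investment = -net_flows[0]
    roi = (sum(net_flows[1:]) - investment) / investment
    npv = sum(flow / (1 + discount_rate) ** t for t, flow in enumerate(net_flows))

    print(f"simple ROI: {roi:.1%}")                 # 65.0% over the life cycle
    print(f"NPV at {discount_rate:.0%}: ${npv:,.1f}K")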

7.7 Affordability

Explain how the component organization will support this investment in light of other priorities.
APPENDIX A - REFERENCES
List each source of information used to prepare this SBD. Include documents (indicate parts that are applicable), meeting minutes, interviews, management reports, analysis of existing or future situations, and any other source as necessary to assure that statements within this document can be validated.
APPENDIX B - ANALYTICAL MATRICES
Describe the relationships between goals, objectives, Critical Success Factors, Performance Measures, functions, information needs, and organizational units with graphics and supporting text.
APPENDIX C - ENTITY RELATIONSHIP DIAGRAM
Describe with graphics and supporting text the high level data created, used, stored, or displayed by the system.
APPENDIX D - FUNCTION HIERARCHY DIAGRAM
Describe with graphics and supporting text the high level functions and processes supported by the system.

APPENDIX C-3 COST-BENEFIT ANALYSIS


The Cost Benefit Analysis (CBA) analyzes and evaluates, from a cost and benefit perspective, the candidate solutions to meet the stated need. It will also describe the feasible alternatives, all tangible and intangible benefits, and the results of the analysis. The feasible alternatives may be documented in more detail in a separate document, shown in Appendix C-4, Feasibility Study. This CBA will discuss which system costs are analyzed, present the total costs for all the years the analysis covers, and outline the comparison between the costs of each alternative and the tangible benefits of the same. Note: An urgent business need or external stakeholder pressure may dictate the use of an alternative development work pattern that may not identify, evaluate, or document alternative solutions. If no feasible alternatives are identified, the CBA methodology must be tailored to evaluate the costs and benefits of the proposed IT investment, without extensive analysis of alternative solutions.

1.0 OVERVIEW

This section describes and discusses the value added to the systems project by the CBA, and the justification for it as documented in various OMB publications.

1.1 Purpose

This section discusses the business need the CBA is trying to address, that is, the decision the company (or component) is trying to make.

1.2 Scope

This section states the scope of the CBA.

1.3 Methodology

This section describes and discusses the CBA methodology employed and its relationship to the SDLC work pattern to be used by the project team.

2.0 ASSUMPTIONS, CONSTRAINTS, AND CONDITIONS

This section discusses assumptions, constraints, and conditions that may affect the results of this CBA. These assumptions, constraints, and conditions form the foundation on which the CBA is based; a change in any one of these could cause a change in benefits as well as costs.

2.1 Assumptions

The assumptions are explicit statements used to describe the present and future environment on which an analysis is based. The assumptions relative to the project system may include:

All data (that is, cost figures, workload statistics, benefit values, etc.) used in this analysis are assumed to be accurate, reliable, and valid. The results of this analysis could be skewed by inaccurate or different data.

The expected useful life of the system is 5-7 years.

2.2 Constraints

The constraints are factors, external to the program, that can limit the development of the application or the availability of performance data from the current system. The constraints relative to the project may include:

Any technology considered must be able to meet the minimum business requirements of the company (or components). The programs and investments become cost ineffective if this is not the case.

2.3 Conditions

The conditions are unique factors in the operating environment that may influence system processes. The conditions relative to the project may include:

The technology used must allow integration into the existing or proposed environment; redundant investment results if more than one production platform is used.

All alternatives must adhere to the Technical Reference Model.

Alternatives implementing intranet or internet services will be in accordance with Departmental policy.

3.0 FEASIBLE ALTERNATIVES

This section identifies alternative solutions that will meet the needs and requirements outlined for the Program. The results of the corresponding Feasibility Study are used as a starting point for an analysis of costs and benefits for the Feasible Alternatives identified in that document. Each Feasible Alternative is analyzed as described in the subsequent sections.

This section discusses the Feasible Alternatives, which are technology solutions that meet the outlined high-level functional requirements. Feasible Alternatives could also be identified in a Feasibility Study (see Appendix C-4). Also describe, in a few sentences, the architecture on which the system will operate. This can be related to the local area network, wide area network, office automation, workstation, and e-mail architecture already in place at the locations of deployment. The analysis should address conformance with the Technical Reference Model (TRM) and all costs associated with upgrades or new development efforts. This section may need to be updated during the life of the system development project to include any changes or additions to the architecture. Note: An urgent business need or external stakeholder pressure may dictate the use of an iterative alternative development work pattern that may not identify, evaluate, or document alternative solutions. If no Feasible Alternatives are identified, mark this section as Not Applicable.

3.1 Alternative 1

This section briefly describes the alternative, its components, and how it will work. Describe how this alternative meets the high-level functional requirements. Explain how this alternative was chosen from a wide variety of alternatives, if a separate feasibility study is not developed.

3.2 Alternative n

Repeat the format of Section 3.1 for as many alternatives as exist for the Feasibility Study. Every system investment has at least two alternatives: fund ongoing maintenance only (status quo), or fund ongoing maintenance plus enhancements.

4.0 COST ANALYSIS

The Cost Analysis presents the costs for design, development, installation, operation, maintenance and disposal, and consumables for the system to be developed. This profile is used to analyze the costs of the system for each year in its life cycle, so those costs can be weighed against the benefits derived from using it. In accordance with OMB Circular A-94, the system is fully cost-accounted, including all spending resources, whether appropriated or non-appropriated.

4.1 Cost Categories

Exhibit 4A, Cost-Related Terms, defines cost-related terms used in the Cost Analysis [the suggested line items may not be a complete list]:

Exhibit 4A: Cost-Related Terms

Nonrecurring Costs: Nonrecurring costs are developmental and capital investment costs incurred only once during the analysis, design, development, and implementation of the system. Line items: system development; prototypes; hardware purchase; module design, development, and integration; system installation.

Recurring Costs: Recurring costs are incurred more than once throughout the life of the system and generally include operation and maintenance costs. Line items: personnel; operations and maintenance; telecommunications; supplies; hardware and software upgrades, maintenance, and licenses; personnel travel and training.



4.2 Project Cost Analysis

The costs for system design, development, installation, operations, and maintenance are presented in Exhibit 4B, Cost Analysis. This section gives a brief explanation of the cost calculations for each year. This section explains that the costs for future years are discounted as per OMB Circular A-94, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs. The real discount rate published in the current OMB Circular A-94 for the appropriate number of years is used to derive the discount factors used in the cost calculations. Discount factors are applied to the future years to provide an appropriate net present value (NPV) for the system costs. Because of discounting, a dollar in the future is worth less than a dollar today; it is important to recognize that the present value of both the benefits and the costs associated with a project decreases the further in the future they occur.

Exhibit 4B: Cost Analysis

                      Alternative 1   Alternative 2   Alternative 3
Year One
  Nonrecurring costs
  Recurring costs
Year Two
  Nonrecurring costs
  Recurring costs
Year Three
  Nonrecurring costs
  Recurring costs
Total Costs

A detailed description of cost breakdowns should be developed to explain exactly how all cost calculations are presented. Discount rates should be applied where appropriate and documented as part of the explanation. The currently acceptable rates can be found in the current version of OMB Circular A-94. A line-by-line cost accounting should be presented if the analysis is likely to be placed under scrutiny.
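A minimal sketch of the discounting arithmetic described above, assuming a placeholder 7 percent real rate (use the rate published in the current OMB Circular A-94) and invented cost figures:

```python
# Discounting year-by-year costs to net present value.
# The rate and all figures are placeholders, not prescribed values.

REAL_DISCOUNT_RATE = 0.07  # placeholder; see the current OMB Circular A-94

def discount_factor(year, rate=REAL_DISCOUNT_RATE):
    """Discount factor for a cost incurred `year` years from now (year 0 = today)."""
    return 1.0 / (1.0 + rate) ** year

def npv_of_costs(costs_by_year, rate=REAL_DISCOUNT_RATE):
    """Sum of discounted yearly costs; index 0 is the current year."""
    return sum(c * discount_factor(y, rate) for y, c in enumerate(costs_by_year))

# Hypothetical three-year cost stream (nonrecurring plus recurring) for one alternative.
alternative_1_costs = [500_000, 120_000, 130_000]
print(f"NPV of costs: ${npv_of_costs(alternative_1_costs):,.0f}")
```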

5.0 BENEFIT ANALYSIS

This section analyzes the alternatives' individual ability to meet the goals of the project.

5.1 Key Benefit Terms

Exhibit 5A, Key Benefit Terms, lists and defines key terms used in this section. Definitions for other terms used in this section may be found in Appendix B, Glossary and Acronyms.

Exhibit 5A: Key Benefit Terms

Tangible Benefits: Benefits expressed in dollars or in units. The result of tangible benefits may be increased revenue, streamlined production, or saved time and money. For purposes of this analysis, tangible benefits are expressed in dollar values so that a valid comparison can be made with costs. The benefits for future years are discounted as per OMB Circular A-94, Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs.

Intangible Benefits: Benefits expressed in terms of improved mission performance, improved decision making, or more reliable or usable information. These benefits may be quantifiable, but cannot be expressed in dollar values. Many public goods are difficult to reliably and validly quantify in dollar units; however, intangible benefits are vital to understanding the total outcome of implementing a particular IT system.

5.2 Tangible Benefits

This section provides a detailed description of the tangible benefits. Because each alternative may not provide the same benefits, it is necessary to note which alternatives provide which benefits. This section also describes, in detail, the source(s) of data used to quantify the benefit for each alternative and presents a chart that depicts the calculations for that benefit. It is important to provide sufficient documentation of data sources and calculations so that readers can follow the logic of the quantification of benefits. Exhibits 5A, Tangible Benefit 1, and 5B, Annual Savings (Based on Average X Million Transactions per Annum), detail this information. Repeat this for each tangible benefit.

Exhibit 5A: Tangible Benefit 1

Measurement     Current Value   Alternative 1   Alternative n
Savings

Exhibit 5B: Annual Savings (Based on Average X Million Transactions per Annum)

                                   Current   Alternative 1   Alternative n
Annual Transaction Times
Savings
FTE Savings                                  X FTEs          Y FTEs
Dollar Savings (Based on FTE Salary of $X per Annum)

In a paragraph or two following the benefit description, each calculation should be explained and data sources, such as the current Federal general schedule, should be cited for any data used. Each benefit should be calculated out for the number of projected years for each alternative. Benefits and costs for each alternative should be calculated for the same number of years to provide an accurate cost-benefit comparison.
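A hypothetical worked example of the arithmetic these exhibits summarize; every figure below is invented for illustration, not taken from this document:

```python
# Converting saved transaction time into FTE and dollar savings.
# All inputs are placeholder assumptions.

transactions_per_year = 2_000_000       # "X million transactions per annum"
minutes_saved_per_transaction = 1.5     # current time minus alternative time
hours_per_fte_year = 1_880              # assumed productive hours per FTE
fte_salary = 65_000                     # "$X per annum" placeholder

hours_saved = transactions_per_year * minutes_saved_per_transaction / 60
fte_saved = hours_saved / hours_per_fte_year
dollar_savings = fte_saved * fte_salary

print(f"FTE savings: {fte_saved:.1f} FTEs")
print(f"Dollar savings: ${dollar_savings:,.0f} per year")
```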

5.3 Summary of Tangible Benefits

Exhibit 5C, Tangible Benefits, summarizes the quantifiable benefit value for each alternative.

Exhibit 5C: Tangible Benefits

                 Alternative 1   Alternative n
Benefit 1
Benefit n
Total Benefit

Exhibit 5D, Summary of Project Tangible Benefits: Expected Return, summarizes the tangible benefits described above. It shows the expected return from tangible benefits for three years, allowing for an accurate comparison with the three-year costs in Section 4, Cost Analysis, and illustrates a comparison of the tangible benefits for each alternative as well as each technology solution within each alternative.

Exhibit 5D: Summary of Project Tangible Benefits: Expected Return

                      FY99   FY00   FY01   Total
Tangible Benefit 1
  Alternative 1
  Alternative n
Tangible Benefit n
  Alternative 1
  Alternative n
Total Benefits
  Alternative 1
  Alternative n

If any of the alternatives does not provide one of the benefits, be sure to indicate this by placing a zero in the box and providing a brief explanation of why.

5.4 Intangible Benefits

Although no quantifiable dollar value has been placed on these benefits, they need to be related to value in some way if they influence the decision. The intangible benefits for each alternative may either be the same or different. It is important to include all intangible benefits.

Exhibit 5E: Intangible Benefits Alternative 1

Intangible Benefits     Description
Intangible Benefit 1
Intangible Benefit n

Exhibit 5F: Intangible Benefits Alternative n

Intangible Benefits     Description
Intangible Benefit 1
Intangible Benefit n

For each alternative, include a table in the same format as the above exhibits.

5.5 Summary of Intangible Benefits

Exhibit 5G, Summary of Intangible Benefits: Expected Return, summarizes the values of intangible benefits.

Exhibit 5G: Summary of Intangible Benefits-- Expected Return

Intangible Benefits     Alternative 1   Alternative n
Intangible Benefit 1
Intangible Benefit n

This table should be used to indicate if an alternative provides an intangible benefit for comparison purposes. A checkmark can be placed in each alternative box that does provide the particular benefit. It should be noted that if a benefit can be valued in unit terms but cannot be valued in dollars, the unit valuation should be presented in some manner and the alternatives should be ranked for that intangible benefit.

6.0 COMPARISON OF COSTS AND BENEFITS FOR PROJECT

Once you have determined the discounted values of costs and benefits, you need to compare each alternative. Several tools commonly used to rank projects and compare alternatives are Net Present Value (NPV), Benefit-Cost Ratio (BCR), Return on Investment (ROI), Discounted Payback Period (DPP), and Internal Rate of Return (IRR).
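A brief sketch of how three of these comparison tools could be computed from discounted benefit and cost streams; the 7 percent rate and all figures are placeholders, and IRR is omitted because it requires iterative root-finding:

```python
# NPV, BCR, and DPP computed from year-indexed benefit and cost streams.

def present_value(stream, rate=0.07):
    """Discount a year-indexed stream back to year 0."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

def npv(benefits, costs, rate=0.07):
    """Net Present Value: discounted benefits minus discounted costs."""
    return present_value(benefits, rate) - present_value(costs, rate)

def bcr(benefits, costs, rate=0.07):
    """Benefit-Cost Ratio: PV of benefits divided by PV of costs."""
    return present_value(benefits, rate) / present_value(costs, rate)

def discounted_payback_period(benefits, costs, rate=0.07):
    """First year in which cumulative discounted net benefits turn non-negative."""
    cumulative = 0.0
    for t, (b, c) in enumerate(zip(benefits, costs)):
        cumulative += (b - c) / (1 + rate) ** t
        if cumulative >= 0:
            return t
    return None  # does not pay back within the analysis horizon

benefits = [0, 400_000, 450_000, 500_000]       # hypothetical benefit stream
costs = [600_000, 150_000, 150_000, 150_000]    # hypothetical cost stream
print(f"NPV: ${npv(benefits, costs):,.0f}")
print(f"BCR: {bcr(benefits, costs):.2f}")
print(f"DPP: year {discounted_payback_period(benefits, costs)}")
```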

This section compares the discounted costs and benefits for the project. The first part of the comparison examines the tangible benefits and the second part examines intangible benefits. The purpose of this comparison is to identify whether tangible and intangible benefits outweigh the total cost of the system.

6.1 Results of the Comparison for Project-- Tangible Benefits

Exhibit 6A, Project Cost to Tangible Benefit Comparison, compares the costs and tangible benefits of the Project. Identify which comparison tool was used.

Exhibit 6A: Project Cost to Tangible Benefit Comparison

                                        Alternative 1   Alternative n
Total Tangible Benefits
Total Costs
Difference Between Costs and Benefits

6.2 Results of the Comparison for Project-- Intangible Benefits

Exhibit 6B, Intangible Benefit Comparison-- Expected Return, compares the intangible benefits of the Project.

Exhibit 6B: Intangible Benefit Comparison-- Expected Return

                      Alternative 1   Alternative n
Intangible Benefits

7.0 SENSITIVITY ANALYSIS

A sensitivity analysis assesses how sensitive the inputs (costs) and outcomes (benefits) are to changes in certain factors or assumptions. A change in any factor (that is, area of uncertainty) can necessitate a revision to the cost-benefit projections or can influence system performance outcomes. This section examines key sources of uncertainty in the operational environment of the Project and their implications. It may also rank the alternatives and examine how sensitive they are to basic assumptions or externalities (political, social, and environmental). After costs and benefits are determined for each alternative, the alternatives are ranked and a sensitivity analysis is performed.
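One simple way to perform such an analysis is to recompute the NPV while varying one assumption at a time; the sketch below is hypothetical, with all names and figures invented:

```python
# Vary the discount-rate and workload assumptions one at a time and
# observe the effect on NPV. Figures are illustrative placeholders.

def npv(net_flows, rate):
    return sum(f / (1 + rate) ** t for t, f in enumerate(net_flows))

base_flows = [-600_000, 250_000, 300_000, 350_000]  # hypothetical net benefit stream

for rate in (0.05, 0.07, 0.09):            # vary the discount-rate assumption
    print(f"rate {rate:.0%}: NPV ${npv(base_flows, rate):,.0f}")

for workload in (0.8, 1.0, 1.2):           # vary the transaction-volume assumption
    flows = [base_flows[0]] + [f * workload for f in base_flows[1:]]
    print(f"workload x{workload}: NPV ${npv(flows, 0.07):,.0f}")
```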

7.1 Key Sources of Uncertainty

Exhibit 7A, Sensitivity Results, lists the key factors that have implications for the Project. Projected costs and benefits could change depending on the extent of change in these factors.

Exhibit 7A: Sensitivity Results

Key Sources of Uncertainty   Extent of Impact   Nature of Impact   Implications

7.2 Sensitivity Results

Each of the key sources of uncertainty could have an effect on the benefits and costs of the project. The effect of each source of uncertainty is discussed in the subsequent section.

8.0 RESULTS OF THE ANALYSIS

The project CBA results are based on the work described in the previous sections. This work assesses the costs and benefits, both tangible and intangible, of the project and what it will do. The sensitivity of its costs and benefits to key sources of uncertainty is described in Section 7, Sensitivity Analysis. This section should list what the system will provide the agency. It should also discuss how well each alternative will achieve the goals of the system in the context of the relative cost of that alternative. No specific recommendation should be made. Any CBA should be used by decision makers as a tool in conjunction with other studies and factors to determine the most appropriate investment choice for the agency to achieve its mission.

Appendix A: REFERENCES AND DOCUMENTATION

Documents used to obtain information for this CBA, including project alternatives, costs, benefits, uncertainties, and information regarding cost-benefit methodologies, are listed in the subsequent sections.

Appendix B: GLOSSARY AND ACRONYMS

The definitions and acronyms presented in this section are specific to this analysis. Although these terms and acronyms may have other meanings, those included in the subsequent sections are used in this analysis.

APPENDIX C-4 FEASIBILITY STUDY


The Feasibility Study describes the information management or business requirement or opportunity in clear, technology-independent terms on which all affected organizations can agree. An information management requirement or opportunity can be prompted by such factors as new legislation, changes to regulations, or the growth of a program beyond the support capability of existing systems. The Feasibility Study provides an overview of a complex business requirement or opportunity and determines if feasible solutions exist before full life-cycle resources are committed. The requirement or opportunity is assessed in terms of technical, economic, and operational feasibility. The study contains decision criteria, comparisons of general solution possibilities, and a proposed program (solution). The study is conducted any time a broad analysis is desired before commitment of development resources. Before conducting the study, the following key decisions should be addressed:

What are the specific requirement or opportunity and the responsible organization(s)? Provide an initial recognition of the requirement or opportunity and establish the broad objectives of the remainder of the life cycle. This decision addresses characteristics of the requirement or opportunity, such as programmatic or other causes and symptoms of the requirement or opportunity, affected organizations, types of information needed, high-level information processing capabilities, an initial perception of the ability of current systems and procedures to address the requirement or opportunity, and the timeframe(s) within which the requirement or opportunity must be resolved.

What new information needs are associated with the problem? Provide a context for future life-cycle decisions by determining if a new information need exists to support a solution. Describe the scope of the need in terms of missions and organizations affected.

How broad a scope should the solution cover? Provide an overall context within which potential solutions to the requirement are defined, helping to ensure that solutions focus on the major priority areas. The scope is determined in terms of the organization(s), such as agency offices, congressional organizations, or executive branch agencies; the pertinent portions of the missions or programmatic functions of each organization; and the potential relationship of the current requirement and efforts to formulate its solution to other previously identified requirements and ongoing related efforts.

A CBA is prepared as a companion document with the feasibility study. The CBA is the document that provides managers with adequate cost and benefit information to analyze and evaluate alternative approaches. It provides information for management to make decisions to initiate a proposed program-- to continue or discontinue the development, acquisition, or modification of information systems or resources. A sample outline of a feasibility study is provided.

1.0 INTRODUCTION

1.1 Origin of Request

This section identifies the originator and describes the circumstances that precipitated this project request. Provide the objectives of the Feasibility Study in clear, measurable terms.

1.2 Explanation of Requirement

This section describes the information management requirement in programmatic, technology-independent terms. It should state the specific deviations from the desired situation and the source and/or cause of the new requirement or opportunity. It describes any new information need(s) associated with the requirement or opportunity. The section should identify the cause(s) and effect(s) of the requirement or opportunity and validate the description of the requirement or opportunity with all affected organizations.

1.3 Organization Information

This section identifies the organization(s) mentioned in Section 1.1, Origin of Request, and the pertinent current procedures, information, and systems of those organizations. Provide descriptions of the relevant procedures and systems as appropriate. The section should specify all organizational units involved, list the organizational unit(s) at all levels of the Service and external organizations that relate to the requirement or opportunity, and describe the pertinent mission area(s) and programmatic functions of each.

1.4 Glossary

Provide a glossary of all terms and abbreviations used in the Feasibility Study. If the glossary is several pages in length, include it as an appendix to the study.

2.0 EVALUATION CRITERIA

This section states the criteria by which the alternatives will be evaluated. The criteria should distinguish the characteristics that must be present for the system to be acceptable from those that are merely desirable.

3.0 ALTERNATIVE DESCRIPTIONS

This section provides a description for each alternative proposed to handle the defined problem. It should describe the resources required, associated risk, system architecture, technology used, and the manual process flow for each alternative. The section should state at least two alternatives for each feasibility study-- one being the alternative of doing nothing, if appropriate-- and predict the anticipated benefits of each alternative and the likely effects of not taking action on the alternative. The section should also state benefits in terms of technical, operational, and economic feasibility.

3.1 Alternative Model

This section presents a high-level data flow diagram and logical data model, if possible, from current physical processes and data for the proposed system alternative.

3.2 Description

This section states the required and desirable features, and provides a concise narrative of the effects of implementing this alternative.

4.0 ALTERNATIVE EVALUATION

This section provides a systematic comparison of the alternatives and documents potential problems resulting from the implementation of each.
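One common way to make such a comparison systematic is a weighted scoring matrix; the sketch below is purely illustrative and not a method mandated by this document, with criteria, weights, and scores all invented:

```python
# Weighted scoring of alternatives against evaluation criteria.
# Criteria weights sum to 1.0; scores run from 1 (poor) to 5 (excellent).

criteria = {"technical feasibility": 0.4,
            "operational feasibility": 0.3,
            "economic feasibility": 0.3}

alternatives = {
    "Do nothing":     {"technical feasibility": 5, "operational feasibility": 1, "economic feasibility": 3},
    "Enhance legacy": {"technical feasibility": 3, "operational feasibility": 3, "economic feasibility": 4},
    "New system":     {"technical feasibility": 4, "operational feasibility": 5, "economic feasibility": 2},
}

for name, scores in alternatives.items():
    weighted = sum(criteria[c] * scores[c] for c in criteria)
    print(f"{name}: weighted score {weighted:.2f}")
```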

5.0 RECOMMENDATION

This section provides a narrative that supports the recommended alternative program. The section should select the most advantageous program to implement the required functional capabilities based on the functional and technical concepts that satisfy the need. The information system should not be obtained at the price of inappropriate development risk or the loss of efficiency, capability, or capacity in the supported function.

APPENDIX C-5 RISK MANAGEMENT PLAN


1.0 INTRODUCTION

1.1 Purpose

In this section, present a clear, concise statement of the purpose of the Risk Management (RM) plan. Include the name and code name of the project, the name(s) of the associated system(s), and the identity of the organization that is responsible for writing and maintaining the RM plan.

1.2 Background

This section briefly describes the history of the project and the environment in which the project will operate. (This information may be included through reference to other project documents.) Include the following information:

Identification of other systems with which the subject system interfaces

Contractor support for development and maintenance

System architecture, operating system, and application languages

Development methodology and tools used for the project

1.3 Scope

This section presents a definitive statement of the scope of the RM planning contained in this document, including the limits and constraints of the RM plan.

1.4 Policy

Include in this section policy decisions that affect how RM is conducted. This section also lists documents that are referenced to support the RM process. Include any project or standards documents that are referenced in the body of the plan or that have been used in the development of the document.

1.5 Approach

In this section, describe the project's approach to risk management. Include the elements of identification, analysis, planning, tracking, control, and communications. Discuss the project's risk mitigation strategies in general and detail specific strategies that have significant impact across the project (e.g., parallel development, prototyping).

2.0 RISK IDENTIFICATION LIST

The tracking of risks in a risk identification list is a critical facet of successful system development management. The risk identification list is used from the beginning of the project and is a major source of input for the risk assessment activity. Following are examples of categories that may be a source of risk for a system development:

The complexity, difficulty, feasibility, novelty, verifiability, and volatility of the system requirements

The correctness, integrity, maintainability, performance, reliability, security, testability, and usability of the SDLC deliverables

The developmental model, formality, manageability, measurability, quality, and traceability of the processes used to satisfy the customer requirements

The communication, cooperation, domain knowledge, experience, technical knowledge, and training of the personnel associated with technical and support work on the project

The budget, external constraints, politics, resources, and schedule of the external system environment

The capacity, documentation, familiarity, robustness, tool support, and usability of the methods, tools, and supporting equipment that will be used in the system development

Once the risks have been identified, document them in this section as the risk identification list. Steps for developing the risk identification list are the following:

Number each risk using sequential numbers or other identifiers.

Identify the SDLC document in which the risk is applicable. For instance, if you are working on the CM plan and discover a CM risk, identify the CM plan as the related document.

Describe the risk in enough detail that a third party who is unfamiliar with the project can understand the content and nature of the risk.

Use the risk identification list throughout the life-cycle phases to ensure that all risks are properly documented.

3.0 RISK ASSESSMENT

The project management plan and the risk identification list are inputs to the risk assessment. Categorize the risks as internal or external risks. Internal risks are those that you can control. External risks are events over which you have no direct control. Examples of internal risks are project assumptions that may be invalid and organizational risks. Examples of external risks are Government regulations and supplier performance.

Evaluate the identified risks in terms of probability and impact. For each risk item, determine the probability that the risk will occur and the resulting impact if it does occur. Use an evaluation tool to score each risk. For example, a simplistic model could be: assign numerical scores to risk probability (1=low, 2=moderate, 3=high) and severity of impact (1=low, 2=moderate, 3=high). A risk score would be the product of the two scores. Management attention would then be focused on those risks with a score of 9, followed by 6, etc.
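A toy illustration of this scoring model; the risk descriptions below are invented examples:

```python
# score = probability (1-3) x impact (1-3); review the highest scores first.

risks = [
    {"id": "R-1", "desc": "Key COTS vendor slips delivery", "prob": 3, "impact": 2},
    {"id": "R-2", "desc": "Requirements volatility",        "prob": 2, "impact": 3},
    {"id": "R-3", "desc": "Test lab unavailable",           "prob": 1, "impact": 2},
]

for r in risks:
    r["score"] = r["prob"] * r["impact"]   # product of the two scores

# Focus management attention on the highest scores (9, then 6, ...).
for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["id"]}: score {r["score"]} - {r["desc"]}')
```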

4.0 RISK ACTION PLAN

Review the risk items with high rankings from Section 3 and determine if the significant risks will be accepted, transferred, or mitigated.

With the acceptance approach, no effort is made to avoid the risk item. This approach is usually employed because the risk items are the result of external factors over which you have no direct control. Two types of action are usually taken under the acceptance approach: contingency planning and no action. You can plan contingencies in case the risk does occur; thus, the project team has a backup plan to minimize the effects of the risk event. Or you can take no action and accept responsibility if the risk event does indeed occur.

With the transfer approach, the objective is to reduce risk by transferring it to another entity that can better bear it. Two methods of transferring risk are the use of insurance and the alignment of responsibility and authority.

With the mitigation approach, emphasis is on actually avoiding, preventing, or reducing the risk. Some risks can be avoided by reducing the number of requirements or defining them more completely. For example, careful definition of the scope of a project in a SOW can avoid the possible consequence of "scope creep," or indecisive, protracted, and uncertain scope objectives.

In this section, identify and describe in detail the actions that will be taken to transfer or mitigate risks that are prioritized as high in Section 3. These actions should ultimately result in the reduction of project risk and should directly affect the project plan and the metrics used for the project. Activities for reducing the effects of risk will require effort, resources, and time just like other project activities. The actions need to be incorporated into the budget, schedule, and other project plan components. Update the project plan components to ensure the planning and execution of risk action activities. Also, refer to contingency plan documents for any contingency plans that have been identified with the risk acceptance approach. Risk action plans will be used to direct all risk mitigation activities. The RM plan will need to be monitored and updated throughout the life-cycle phases.

APPENDIX C-6 ACQUISITION PLAN


The Acquisition Plan is a document that shows how all government human resources, hardware, software, and telecommunications capabilities, along with contractor support services, are acquired during the life of the project. The Acquisition Plan helps ensure that needed resources can be obtained and are available at the time they are needed. The plan includes a milestone schedule that lists activities for completion and deliverables to be produced with appropriate estimated completion dates. Follow the applicable elements of the outline to complete the Acquisition Plan. The information in the plan:

Provides management with adequate information for making decisions concerning procurement of government human resources and services and contractor services procurement, including ensuring the availability of funding

Provides technical evaluation personnel with adequate information for analyzing and evaluating vendor proposals

Ensures that vendors will have adequate information for preparing bids

Provides the source selection official with adequate information on which to base selection

The following should be considered when submitting a request for hardware, software, and/or related services:

Resources are consistent with applicable laws, regulations, policy/procedural guidance from central management agencies, Congress, and senior department management.

Acquisitions are consistent with departmental objectives and initiatives as defined in the plans.

Resources are obtained only in direct support of the missions and programs of the acquiring office/organization.

Acquisitions are not redundant or duplicative efforts resulting in wasted money, time, and resources.

Resources represent the most efficient and cost-effective means of providing automated support.

The Acquisition Plan typically has its own mini-life cycle of pre-solicitation, solicitation and award, and post-award. The life-cycle model varies according to the system development effort; this means that the activities in each differ significantly. The model Acquisition Plan includes a milestone schedule, with estimated completion dates, for the following activities:

Requirements Analysis

Analysis of Alternatives

Statement of Work (SOW)

Procurement of government human resources and services

Procurement plan

Acquisition of contractor services

Legal opinion on statement of work

Solicitation of services

Technical evaluation report

Source selection recommendation

Contract award

Adjustment of funds

Contract performance

The Acquisition Plan becomes critical after the functional requirements document has been approved. Several acquisitions may be needed to procure an entire system and will be a continuous part of the cycle. The Acquisition Plan is continuously updated, and users work closely with the Project Manager. The involvement of Acquisition personnel becomes increasingly important as the project progresses, and the Project Manager works directly with the Acquisition personnel to ensure the timely award of the needed resources. The Acquisition Plan is developed as required by the Federal Acquisition Regulation (FAR) 7.103 using the following format.

1.0 BACKGROUND AND OBJECTIVES

1.1 Statement of Need

This section introduces the plan with a brief statement of need. This section should discuss feasible acquisition alternatives and any related in-house efforts.

1.2 Applicable Conditions

This section states all the significant conditions affecting the acquisition, including requirements for compatibility with existing or future systems or programs and any known cost, schedule, and capability or performance constraints.

1.3 Cost

This section sets forth the established cost goals for the acquisition and the rationale supporting them, and discusses related cost concepts to be employed, as indicated in the subsequent sections. In each subsection, discuss the type of funding that will be required.

1.3.1 Life-Cycle Cost

This section discusses how the life-cycle cost will be considered. If life-cycle cost is not used, this section explains why. This section also discusses, if appropriate, the cost model used to develop the life-cycle cost estimates. Life-cycle cost is the total cost to the Government of acquiring, operating, supporting, and disposing of the items being acquired.

1.3.2 Design-to-Cost

This section discusses the design-to-cost objectives and the underlying assumptions, including the rationale for quantity, learning curve, and economic adjustment factors. It describes how objectives are to be applied, tracked, and enforced, and indicates the specific related solicitation and contractual requirements to be imposed. Design-to-cost is a concept that establishes cost elements as management goals to achieve the best balance between life-cycle cost, acceptable performance, and schedule. Under this concept, cost is a design constraint during the design and development phases, and a management discipline throughout the acquisition and operation of the system or equipment.

1.3.3 Application of Should-Cost

This section discusses the application of should-cost analysis to the acquisition, as per FAR 15.810.

1.4 Capability or Performance

This section specifies the required capabilities or performance characteristics of the products being acquired, and states how they are related to the need.

1.5 Delivery or Performance-Period Requirements

This section describes the basis for establishing delivery or performance-period requirements, and explains and provides reasons for any urgency resulting in concurrency of development or justifying other than full and open competition.

1.6 Trade-Offs

This section discusses the expected consequences of trade-offs among the various cost, capability, performance, and schedule goals.

1.7 Risks

This section discusses the technical, cost, and schedule risks and describes what efforts are planned or underway to reduce the risk and the consequences of failure to achieve goals. The effects on cost and schedule risks imposed by concurrency of development and production should also be discussed, if applicable.

1.8 Acquisition Streamlining

This section is included if the acquisition has been designated as part of a program subject to acquisition streamlining. It discusses plans and procedures to encourage industry participation via draft solicitations, pre-solicitation conferences, and other means of stimulating industry involvement during design and development. It also discusses plans and procedures for selecting and tailoring only the necessary and cost-effective requirements, and it states the time frame for identifying which specifications and standards, that had originally been provided for guidance only, are scheduled to become mandatory.

2.0 PLAN OF ACTION

2.1 Sources

This section indicates the prospective sources of products that can meet the need. It considers the required sources, including consideration of small businesses, small disadvantaged businesses, and women-owned small business concerns. It addresses the results of market research and analysis and indicates their effect on the various elements of the plan.

2.2 Competition

This section describes how competition will be sought, promoted, and sustained throughout the course of the acquisition. If the acquisition will be other than a full and open competition, this section cites the authority for the deviation, discusses the basis for the application of the authority, identifies the sources, and discusses why full and open competition cannot be obtained. This section also identifies the major components of the subsystems, and describes how competition will be sought, promoted, and sustained for these components. This section also discusses how competition will be sought, promoted, and sustained for spares and repair parts. This includes an identification of the key logistic milestones, such as technical data delivery schedules and acquisition method coding conferences, which affect competition. Finally, if subcontract competition is feasible and desirable, this section describes how such subcontract competition will be sought, promoted, and sustained throughout the course of the acquisition, and how any known barriers to subcontract competition will be overcome.

2.3 Source Selection Procedures

This section discusses the source selection procedures for the acquisition, including the timing for submission and evaluation of proposals, and the relationship of evaluation factors to the attainment of the acquisition objectives.

2.4 Contracting Considerations

This section discusses, for each contract contemplated: the contract type selection; the use of multi-year contracting, options, or other special contracting methods; any special clauses, special solicitation provisions, or FAR deviations required; whether sealed bidding or negotiation will be used, and why; whether equipment will be acquired by lease or purchase, and why; and any other contracting considerations.

2.5 Budgeting and Funding

This section describes how budget estimates were derived, and discusses the schedule for obtaining adequate funds at the time they are required.

2.6 Product Descriptions

This section explains, in accordance with FAR Part 11, the choice of product description types to be used in the acquisition.

2.7 Priorities, Allocations, and Allotments

This section specifies the method for obtaining and using priorities, allocations, and allotments, and the reasons for them, in cases where the urgency of the requirement dictates a short delivery or performance schedule.

2.8 Contractor Versus Government Performance

This section addresses the consideration given to OMB Circular A-76. Circular A-76 indicates that it is the policy of the Government to rely generally on private commercial sources for supplies and services, when certain criteria are met, while recognizing that some functions are inherently governmental and must be performed by Government personnel. It also gives consideration to relative cost when deciding between Government performance and performance under contract.

2.9 Inherently Governmental Functions

This section addresses the considerations given to Office of Federal Procurement Policy Letter 92-1. Inherently governmental functions are those functions that, as a matter of policy, are so intimately related to the public interest as to mandate performance by Government employees.

2.10 Management Information Requirements

This section discusses the management systems that will be used by the Government to monitor the contractor's effort.

2.11 Make or Buy

This section discusses considerations given to make-or-buy programs, as per FAR 15.7.

2.12 Test and Evaluation

This section describes the test program of the contractor and the Government. It describes the test program for each major phase of a major system acquisition. If concurrent development/deployment is planned, this section discusses the extent of testing to be accomplished before production release.

2.13 Logistics Considerations

This section describes the assumptions determining contractor or agency support, initially and over the life of the acquisition, including contractor or agency maintenance and servicing and distribution of commercial items. It also describes the reliability, maintainability, and quality assurance requirements, including any planned use of warranties, as well as the requirements for contractor data and data rights, their estimated cost, and the use to be made of the data. Finally, it describes standardization, including the necessity to designate technical equipment as "standard" so that future purchases of the equipment can be made from the same manufacturing source.

2.14 Government-Furnished Property

This section indicates the property to be furnished to contractors, including material and facilities, and discusses associated considerations, such as availability, or the schedule for its acquisition.

2.15 Government-Furnished Information

This section discusses any Government information, such as manuals, drawings, and test data, to be provided to prospective offerors and contractors.

2.16 Environmental and Energy Conservation Objectives

This section discusses all applicable environmental and energy conservation issues associated with the acquisition, the applicability of an environmental assessment or environmental impact statement, the proposed resolution of environmental issues, and any environmentally related requirements to be included in solicitations and contracts.

2.17 Security Considerations

This section discusses, for acquisitions dealing with security-related matters, how adequate security will be established, maintained, and monitored.

2.18 Other Considerations

This section discusses, as applicable, other considerations, such as standardization concepts, the industrial readiness program, the Defense Production Act, the Occupational Safety and Health Act, foreign sales implications, and any other matters germane to the plan and not covered elsewhere.

2.19 Milestones for the Acquisition Cycle

This section addresses the following steps, and any others as appropriate: Acquisition Plan approval; SOWs; specifications; data requirements; completion of acquisition package preparation; purchase requests; justification and approval for other than full and open competition; issuance of synopsis; issuance of solicitation; evaluation of proposals, audits, and field reports; beginning and completion of negotiations; contract preparation, review, and clearance; and contract award.

2.20 Acquisition Plan Contacts

This section lists the individuals who participated in preparing the Acquisition Plan, and provides contact information for each.

APPENDIX C-7 CONFIGURATION MANAGEMENT PLAN


1.0 INTRODUCTION

Provide a brief statement that introduces the Configuration Management (CM) plan and describes, in general terms, its use in managing the configuration of the specific project or system. Configuration Management is a uniform practice for managing system software, hardware, and documentation changes throughout the development project.

1.1 Purpose

Describe why this CM plan was created, what it accomplishes, and how it is used.

1.2 Scope

Define the scope of CM planning. Identify items that will be placed under configuration control.

1.3 System Description

Briefly describe the system, its history, and the environment in which the project operates (mainframe, client/server, or stand-alone). Describe the system architecture, operating system, and application languages. Identify other legacy or new systems with which this system interfaces. List the number of sites that are using the system.

1.4 Definitions

Define the terms that appear in the CM plan.

1.5 Reference Documents

List the documents that are referenced to support the CM process, including any project or standards documents referenced in the body of the CM plan.

2.0 ORGANIZATION

Identify the organization in which CM resides and all organization units that participate in the project. Define the functional roles of these organizational units within the project structure. Describe any Internal Review Boards and Configuration Control Boards that will be established for the project. For each board, discuss the members who will participate (and their functional representatives), the Chair, the Secretariat, and the responsibilities of the board and of each member to the board.

2.1 CM Activities

Identify all CM functions required to manage the configuration of the system.

2.2 CM Responsibilities

List CM responsibilities in supporting this project.

3.0 CONFIGURATION IDENTIFICATION

Explain that Configuration Identification is the basis on which the configuration items (CIs) are defined and verified; CIs and documents are labeled; changes are managed; and accountability is maintained. Define the automated tools that will be used to track and control the configuration baselines. Describe the methods for controlling, tracking, implementing, and reporting changes.

3.1 Configuration Item Identification

Identify the CIs to be controlled and specify a means of identifying changes to the CIs and related baselines. At a minimum, the system itself, all COTS software and hardware required for the system (or application) to function, and any support software developed in-house or by contractor should each be a CI.

3.2 Identification Conventions

Describe the identification (numbering) criteria for the software and hardware structure, and for each document or document set.

3.3 Naming Conventions

Provide details of the file naming convention to be used on the project and how file configuration integrity will be maintained.

3.4 Labels

Describe the requirements for labeling media and application software.

3.5 Configuration Baseline Management

Describe what baselines are to be established. Explain when and how they will be defined and controlled. A configuration baseline document includes both SDLC documents and any other user support documents subject to change when the project changes. The standard baselines are the functional baseline (FBL), which describes the system functional characteristics; the allocated baseline (ABL), which describes the design of the functional and interface characteristics; and the product baseline (PBL), which consists of completed and accepted system components and documentation that identifies these products.

4.0 CONFIGURATION CONTROL

Explain that configuration change management is a process for managing configuration changes and variances in configurations. Configuration control is the systematic proposal, justification, evaluation, coordination, approval and implementation of changes after formal establishment of a configuration baseline.

4.1 Change Management

Define the process for controlling changes to the system baselines and for tracking the implementation of those changes. Usually a system change request (SCR) is used to provide information concerning the need to change a baseline system or system component (hardware, software, or documentation).
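As a purely hypothetical sketch of the information an SCR record might carry (the field names and numbering scheme are illustrative, not a mandated format):

```python
# Illustrative SCR record; every field name here is an assumption.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class SystemChangeRequest:
    scr_number: str                  # e.g., "SCR-2024-0042" (assumed numbering scheme)
    baseline: str                    # affected baseline: "FBL", "ABL", or "PBL"
    component: str                   # hardware, software, or documentation item
    description: str                 # why the baseline must change
    status: str = "Submitted"        # Submitted -> Approved/Disapproved -> Implemented
    date_submitted: date = field(default_factory=date.today)

scr = SystemChangeRequest(
    scr_number="SCR-2024-0042",
    baseline="PBL",
    component="User Manual v1.2",
    description="Update screenshots after login redesign.",
)
print(scr.status, scr.scr_number)
```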

4.2 Interface Management

Identify the interfaces to be managed and describe the procedures for identification of interface requirements, establishment of interface agreements, and participation in any Interface Control Working Groups.

5.0 CONFIGURATION STATUS ACCOUNTING

Explain that Configuration Status Accounting (CSA) is the process to record, store, maintain, coordinate, and report the status of CIs throughout the system life. All software and related documentation should be carefully tracked from initial development to request for change, through the approval or disapproval of changes, to the implementation of changes.

6.0 CONFIGURATION AUDITS

Describe how peer review audits and formal audits will be accomplished for the purpose of assessing compliance with the CM Plan. These could include baseline audits, functional configuration audits, physical configuration audits, software and hardware physical configuration audits, and subcontractor configuration audits.

7.0 REVIEWS

Describe how the technical reviews relate to the establishment of baselines and explain the role of CM in these reviews.

8.0 CM PLAN MAINTENANCE

Describe the activities and responsibilities necessary to ensure continued CM planning during the life cycle of the project. State who is responsible for monitoring the CM plan. Describe how frequently updates are to be performed; how changes to the CM plan are to be evaluated and approved; and how changes to the CM plan are to be made and communicated.

9.0 DATA MANAGEMENT

9.1 Libraries

Identify the libraries and the media under control, the requirements for the control of documentation, and how access control is to be managed.

9.2 Automated Tools

Describe any automated tools used.

9.3 Version Control

Describe the processes in place to control the number of versions documented by this CM Plan.

9.4 Work Space Management

Describe the processes used for automated software source code control tools.

9.5 Build Management

Describe the controls in place to manage the building of executable code.

9.6 Documentation Management

Describe the processes in place for documentation management and who is responsible for them.

10.0 SUBCONTRACTOR CONTROL

Subcontractors will be required to meet the CM requirements that have been levied by the plan on the contractor. The requirements for the subcontractor may be modified to fit the scope and magnitude of the subcontract task. A complete CM plan should be required of the subcontractor if an extensive contract is envisioned. If the contract is minor in content, a plan need not be requested. However, provisions must be made for continuous communication and monitoring of CM activities, review and disposition of subcontractor-supplied documents and subsequent changes, and the final audits. Subcontractors will provide status accounting reports reflecting the development of software, hardware, and COTS Configuration Item data.

APPENDIX C-8 QUALITY ASSURANCE PLAN


The purpose of the Quality Assurance (QA) plan is to ensure that delivered products satisfy contractual agreements, meet or exceed quality standards, and comply with approved SDLC processes. The delivered QA plan will include a Program Level plan and Project Level plan(s). The Program Level plan describes all potential activities that QA could apply to a program's tasks as they proceed through the life cycle. The Project Level QA plan(s) will describe the actual QA activities that will be integrated with the project plan and schedule. The level of detail contained in the Project Level QA plan(s) should be consistent with the complexity, size, intended use, mission criticality, and cost of failure of the system development effort. Only deviations from the Program Level QA plan and special characteristics appropriate to the task are required for completion of the Project Level QA plan(s). The suggested format, Quality Assurance Plan Outline, is applicable to both Program Level and Project Level plans.

1.0 GENERAL

1.1 Purpose

Establish the requirements for QA applicable to all portions of the system's development effort. QA activities will require effort, resources, and time just like other project activities.

1.2 Reference(s)

List documents used in QA reviews with complete citations (title; version number, if any; originating organization; date; etc.). References should include all standards that will apply to the QA function.

1.3 Objective

Discuss the system QA objectives of the program as established by the Project Manager. Describe the benefits that will be realized by conforming to quality requirements and the contributions that QA makes to the success of the program.

1.4 Glossary

Provide definitions for terms and acronyms used within the QA plan.

2.0 ORGANIZATION

Provide an operational organizational chart developed for the program from a QA perspective. Describe the tasks in terms of QA activities associated with the project. Identify responsibilities for project tasks. Describe the procedures that link the QA process with the Systems Development, CM, and Test and Evaluation (T&E) functions.

2.1 Customer

Describe the partnership activities performed by QA in support of contract performance.

2.2 System Development

Describe the objectives of QA in monitoring formal development to ensure that the concepts and standards applied by QA are implemented at the project and program levels. Detail-level work in support of tasks should be directed toward the individual processes within the SDLC.

2.3 Test and Evaluation

Describe the role of QA in ensuring that project requirements are satisfied and that formal testing is completed in accordance with plans, standards, and procedures. Discuss reviews of test plans, test specifications, test procedures, and test reports.

2.4 Configuration Management

Describe the role of QA in ensuring that CM activities are performed in accordance with the CM plans, standards, and procedures. Discuss reviews to verify that baseline control, configuration identification, configuration control, configuration status accounting, and configuration authentication have been accomplished.

2.5 QA Roles and Responsibilities

Describe the organizational and functional alignment of the QA staff. Describe the roles and responsibilities at the management, team leader, and specialist levels.

3.0 PROCESSES

Provide an overview of the processes that QA uses to monitor, sample, and audit the processes and products associated with hardware, software, and documentation to ensure compliance with methodology, policy, and standards.

3.1 General

Describe QA's role in performing reviews and audits associated with deliverables and with collections of deliverables making up an SDLC phase.

3.2 Peer Review

Commit to QA participation in the peer review process to identify, document, measure, and eliminate defects in a work product.

3.3 Process Review

Describe audit and assessment reviews that ensure that appropriate steps are taken to carry out activities specified by the SDLC. Describe methods by which QA monitors processes by comparing the actual steps carried out with those in the documented procedures. Discuss QA's responsibility to provide review data to management as an indication of the quality and status of the project.

4.0 PROBLEM REPORTING AND CORRECTIVE ACTION

Describe the role of QA in identifying problems and recommending corrective actions. Identify the requirement for tasks to include QA in problem reporting and corrective action functions. Discuss the procedures and formats for preparing and tracking the Quality Action Reports (QARs) used in the program, and for management involvement in their use.

4.1 Quality Action Reports

Describe preparation of QARs to document anomalies, violations of program standards, or potential problems identified at any point in the SDLC.

4.2 QA Escalation Procedure

Describe the QA escalation procedures that bring high-risk or long-standing, unresolved noncompliance tracking issues to senior management's attention.

4.3 QA Report Formats

Describe the report formats that formally document and transmit information from QA audits and/or assessment reviews.

5.0 TOOLS, TECHNIQUES, AND METHODOLOGIES

Identify the tools, techniques, and methodologies used to support the QA function. Discuss the application of these items to QA's function in appraisal, preventive (identification of nonconformance), and corrective actions that contribute to the success of the project.

5.1 SDLC

Describe QA's use of the SDLC, the supporting policies, and accepted standards in the management of internal activities. Also describe QA's role in ensuring that any contractors conform to the requirements of the methodology, policies, and standards.

5.2 Policies

Describe QA's role in developing policy statements that expand, enhance, or clarify SDLC requirements, or that introduce new or enhanced standards.

5.3 Standards

Describe QA's responsibility to ensure that standards are identified that establish the prescribed methods for the development of work products. Also discuss QA's role in assessing standards for adequacy and applicability.

5.4 Tools

Describe the tool sets that QA employs in the conduct of administrative and technical functions.

APPENDIX C-9 CONCEPT OF OPERATIONS (CONOPS)


1.0 INTRODUCTION

The Concept of Operations (CONOPS) document is a high-level requirements document that provides a mechanism for users to describe their expectations of the system. The CONOPS is used as input to the development of formal, testable system and software requirements specifications. The objective of the CONOPS is to capture the results of the conceptual analysis process. During this process, the characteristics of the proposed system (from the user's perspective) and the operational environment in which it needs to function are identified. Both of these aspects, the system's functionality and its operational environment, are equally important. The CONOPS has the following characteristics:
- Describes the envisioned system
- Identifies the various classes of users
- Identifies the different modes of operation
- Clarifies vague and conflicting needs among users
- Prioritizes the desired and optional needs of the users
- Supports the decision-making process that determines whether a system should be developed
- Serves as the basis for the Functional Requirements Document (FRD)

The outline of the CONOPS is shown at the end of this appendix. The following paragraphs list the specific instructions for the subsections of the outline.

1.1 Project Description

In this section, provide a brief overview of the project.

1.1.1 Background
Summarize the conditions that created the need for the new system (or capability).

1.1.2 Assumptions and Constraints

Assumptions are future situations, beyond the control of the project, whose outcomes influence the success of a project. The following are examples of assumptions:
- Availability of a hardware/software platform
- Pending legislation
- Court decisions that have not been rendered
- Developments in technology

Constraints are conditions outside the control of the project that limit the design alternatives. The following are examples of constraints:
- Government regulations
- Standards imposed on the solution
- Strategic decisions

Be careful to distinguish constraints from preferences. Constraints exist because of real business conditions; preferences are arbitrary. A delivery date, for example, is a constraint only if there are real business consequences of not meeting the date: if failing to have the subject application operational by the specified date places the company in legal default, the date is a constraint; a date chosen arbitrarily is a preference. Preferences, if included in the CONOPS, should be noted as such.

1.2 Overview of the Envisioned System

1.2.1 Overview
Give a brief overview of the envisioned system.

1.2.2 System Scope
Give an estimate of the size and complexity of the system.

1.3 Document References

List the documents that were sources of this CONOPS. Include meeting summaries, white paper analyses, and SDLC deliverables, as well as any other documents.

1.4 Glossary

Provide a glossary of terms used in the document. This may be provided as an appendix.

2.0 GOALS, OBJECTIVES, AND RATIONALE FOR THE NEW SYSTEM

2.1 Goals and Objectives of the New System (or Capability)

Define the overall goals and objectives of the new system (or capability). State the business problems that will be solved.

2.2 Rationale for the New System (or Capability)

State the justification for the new system or capability. If the need is for a new capability of an existing system, clearly identify the existing systems that must be changed.

3.0 WORK PROCESSES TO BE AUTOMATED/SUPPORTED

Describe each major process and the functions or steps performed during each work process. State the processes and functions in a manner that enables the reader to see broad concepts decomposed into layers of increasing detail. Then show, in a diagram, the sequence of the process steps described above.

4.0 HIGH-LEVEL FUNCTIONAL REQUIREMENTS
4.1 High-Level Features

In this section:
- Describe the features, capabilities, and functions of the system.
- Describe major system components and interactions.

Describe all requirements for interfaces with external systems. Describe, at a high level, the data that the system must send to or receive from other systems. These must be phrased as requirements (i.e., using the definitive word "shall") and have a reference number. For example: "IF-01: The system shall receive updated customer records from the billing system daily." (The identifier and wording here are illustrative only.)

4.2 Additional Features

Describe any additional features that would enhance the utility or usability of the system.

4.3 Requirements Traceability

Show how these requirements trace back to their sources. In addition, identify the "child" documents, such as FRDs, whose requirements will trace back to this document.
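
As an illustration only, the following sketch (in Python) shows one way such parent/child trace links might be recorded and checked; the requirement IDs, document names, and data layout are hypothetical, not a prescribed format.

    # Illustrative traceability sketch; all IDs and names are hypothetical.
    conops_requirements = {
        "CONOPS-4.1-001": "The system shall accept case data from district offices.",
        "CONOPS-4.1-002": "The system shall produce monthly workload reports.",
    }

    # Each CONOPS requirement traces forward to "child" FRD requirements.
    trace_links = [
        ("CONOPS-4.1-001", "FRD-2.2-010"),
        ("CONOPS-4.1-001", "FRD-2.2-011"),
        ("CONOPS-4.1-002", "FRD-2.2-020"),
    ]

    def children_of(parent_id):
        """Return the FRD requirements that trace back to a CONOPS requirement."""
        return [child for parent, child in trace_links if parent == parent_id]

    # A CONOPS requirement with no children is a gap in the decomposition.
    for req_id in conops_requirements:
        if not children_of(req_id):
            print("Untraced requirement:", req_id)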

5.0 HIGH-LEVEL OPERATIONAL REQUIREMENTS
5.1 Non-functional Requirements

The following should be discussed in general terms, not as detailed systems specifications. Add subsections accordingly:
- Performance
- Accessibility
- Portability
- Security
- System survivability
- Other

5.2 Deployment and Support Requirements

In this section:
- Describe deployment considerations, such as acquisition of business data to support the system, including data cleansing and loading.
- Describe requirements for support of the system, such as the maintenance organization and help desk.

5.3 Configuration and Implementation

Describe the operational policies and constraints that affect the proposed new system (or capability).

5.4 System Environment

Describe the environment in which the system will operate.

6.0 USER CLASSES AND MODES OF OPERATION
6.1 Classes/Categories of Users

Identify and describe the major classes/categories of users that will interact with the new system (or capability).

6.2 User Classes Mapped to Functional Features

This section provides an explanation of how the system will look to each user organization (for example, a Regional Office, a District Office, or different programs). Define variations (if any) in the user work process that correspond to the use of the system by the different classes of users.

6.3 Sample Operational Scenarios

Develop sample usage scenarios (as realistic as possible) for each major user class that show what inputs will initiate the system's functions, how the user will interact with the system, and what outputs are expected to be generated by the system.

7.0 IMPACT CONSIDERATIONS
7.1 Operational and Organizational Considerations

Describe the impacts to existing operations and organizations.

7.2 Potential Risks and Issues

Describe any potential risks and issues associated with the development of the envisioned system. Describe any other considerations, such as project schedule, staffing support, and the recommended implementation approach. Allocate subsections for each consideration, as necessary.

APPENDIX C-10 SYSTEM SECURITY PLAN


To carry out its wide-ranging responsibilities, the Company's employees and managers have access to diverse and complex information technology (IT) systems, which include mainframe central processing facilities, local and wide area networks running various platforms, and telecommunications systems, including communication equipment. The various business and law enforcement functions within the Company depend on the confidentiality, integrity, and availability of these systems and their data. The Company relies on its IT systems, including the <enter system name here including any acronyms>, to accomplish its mission of providing cost-effective services to organizations. Enter a brief description here of what the system/application does and whether or not it processes sensitive information, in accordance with: OMB Circular A-130, Appendix III, Security of Federal Automated Information Resources; Public Law 100-235, Computer Security Act of 1987; Public Law 93-579, The Privacy Act of 1974; and NIST Special Publication 800-18, Guide for Developing Security Plans for Information Technology Systems. This System Security Plan presents the in-place and planned controls for ensuring protection of <enter system name here or acronym>.

1.0 SYSTEM IDENTIFICATION
1.1 System Name/Title

Include in this section the system name and any acronyms for the system.

1.2 Responsible Organization

Include the name of the organization responsible for the security of the system. Include any information about other offices or organizations that have management or security control over the system (e.g., IT). Be sure to include the complete address of where the application owner resides and include the phone number.

1.3 Information Contacts

Include the name, title, complete address, phone number, and role of each individual responsible for the administration of the system. Briefly describe their roles and responsibilities for the system.

1.4 Assignment of Security Responsibility

Include the name, title, complete address, phone number, and role of the individual responsible for the security of the system. Provide a brief description of their security role.

1.5 Category

<Enter name or acronym here> meets the criteria for a system.

1.6 System Operational Status

Enter whether the system is:
- Operational
- Under Development, or
- Undergoing a Major Modification

If more than one status is applicable, include which part of the system is covered under each status.

1.7 General Description and Purpose

Enter a brief (1 to 3 paragraph) description of the function and purpose of the system. Note: Be sure to discuss:
- The function or purpose of the system and the information that is processed by the system;
- The user organizations and whether they are internal or external to the system owner's organization; and
- The processing flow of the system, including where data is gathered and input into the system and where the output goes after processing. If the system provides data to another system or systems, be sure to note that information in this section.

1.8 System Environment and Special Considerations

Enter a brief, 1 to 3 paragraph description of the system environment and any special considerations needed for the system.

Note: This section should contain the following information:
- Describe the primary computing platform(s) used (e.g., mainframe, minicomputer, microcomputer(s), local area network (LAN), or wide area network (WAN)). Include a general description of the principal components, including hardware, software, and communications resources.
- Discuss the type of communications included (e.g., dedicated circuits, dial circuits, public data/voice networks).
- Include any security software protecting the system and the information it processes.
- Include any environmental factors that raise special security concerns, such as: the system is connected to the Internet; it is located in a harsh or overseas environment; any configuration management issues; the software resides on an open network used by the general public; the application is processed at a facility outside of the agency's control; or the general support mainframe has dial-up lines.

1.9 System Interconnection/Information Sharing

Describe any system interconnections or direct connections to the system. Note: This section should contain the following information:
- A list of the connected systems or major application systems and their identifiers.
- A description of interconnections with external systems not covered by the System Security Plan and any security concerns.
- A description of any written authorizations (i.e., MOUs or MOAs) that are in place and the rules of behavior that have been established with the interconnected site.

1.10 Applicable Laws, Directives, and Regulations Affecting the System

The following Federal laws, directives, regulations, and company policy provide guidance pertaining to the security of company automated information systems, including <enter application name or acronym here>:
- Public Law 93-579, The Privacy Act of 1974.
- Public Law 100-235, Computer Security Act of 1987.
- Public Law 99-474, Computer Fraud Act of 1986.
- OMB Circular A-123, Management Accountability and Control, June 21, 1995.
- OMB Circular A-130, Appendix III, Security of Federal Automated Information Resources, February 6, 1996.
- NIST Special Publication 800-18, Guide for Developing Security Plans for Information Technology Systems, August 14, 1998.
- DOJ Order 2640.2D, Information Technology Security, July 12, 2001.

2.0 SENSITIVITY OF INFORMATION PROCESSED
2.1 Description of Data Processed

The <enter name or acronym here> processes and transmits unclassified sensitive data which requires protection for confidentiality, availability, and integrity. Sensitive data processed by the <enter name or acronym here> consists of <describe data or include data elements here>.

2.2 Information Sensitivity

<Enter name or acronym here> requires protection of the sensitive information contained within the system. Note: Provide protection ratings for each of these three categories:
- Confidentiality: The system contains information that requires protection from unauthorized disclosure.
- Integrity: The system contains information which must be protected from unauthorized, unanticipated, or unintentional modification, including the detection of such activities.
- Availability: The system contains information or provides services which must be available on a timely basis to meet mission requirements or to avoid substantial losses.
The ratings for the application are:
- High - a critical concern of the system;
- Medium - an important concern but not necessarily paramount in the organization's priorities; or
- Low - some minimal level of security is required but not to the same degree as the previous two categories.
Enter check marks in the columns of Exhibit 2.3-1 that best describe the system ratings. Exhibit 2.3-1, Relative Importance of Protection Needs, presents the protection needs of <enter name or acronym here>.

Exhibit 2.3-1, Relative Importance of Protection Needs

                  High                 Medium                Low
                  (Critical Concern)   (Important Concern)   (Minimal Concern)
Confidentiality
Integrity
Availability
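
A rough sketch of how these ratings might be recorded for later checking is shown below (in Python); the levels entered are placeholders, not recommendations for any particular system.

    # Hypothetical Exhibit 2.3-1 ratings; the chosen levels are placeholders.
    VALID_LEVELS = {"High", "Medium", "Low"}

    protection_needs = {
        "Confidentiality": "Medium",
        "Integrity": "High",
        "Availability": "Low",
    }

    # Each of the three categories must carry exactly one valid rating.
    for category, level in protection_needs.items():
        assert level in VALID_LEVELS, category + " has an invalid rating"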

3.0 MANAGEMENT CONTROLS
3.1 Risk Assessment and Management

Threats to the confidentiality, integrity, or availability of <enter name or acronym here> were assessed during the development life cycle. Security controls were considered and included in its design. A risk assessment was performed by <enter organization name here> on <enter date of last risk assessment effort here>. Describe the risk assessment methodology used to identify the threats and vulnerabilities of the system. Note: If no risk assessment has been done on the system, include a milestone date (month and year) for when the assessment will be completed.

3.2 Review of Security Controls

An independent management review or audit of the security controls was performed on <enter date of independent security review here> by <enter name of corporation or entity that performed the review> for <enter name or acronym here>. The independent security review will be conducted every three years in accordance with OMB A-130 requirements. Note: An independent security review must be performed at least every three years. This review or audit should be independent of the manager responsible for the application. If your system is perceived as high risk, reviews may need to be conducted more frequently. Include information about the last independent audit or review of the system, the type of review, and who conducted the review. Discuss any findings or recommendations from the review and include information concerning correction of any deficiencies or completion of any recommendations.

3.3 Rules of Behavior

A set of rules of behavior has been established in writing for individual users of <enter system name or acronym here> concerning use of, security in, and the acceptable level of risk for the system. Note: Requirements for the rules are as follows:

- Delineate responsibilities and expected behavior of all individuals with access to the application. Responsibilities and expected behavior should be based on the needs of the various users of the application and be only as stringent as necessary to provide adequate security for the information in the application;
- Reflect the technical controls in the application. For example, rules regarding password use should be consistent with technical password features;
- Cover work at home, dial-in access, connection to the Internet, use of copyrighted works, unofficial use of government equipment, the assignment and limitation of system privileges, and individual accountability;
- State the consequences of behavior not consistent with the rules. For instance, the rules may be enforced through administrative sanctions specifically related to the system or through more general sanctions as are imposed for violating other rules of conduct;
- Form the basis for security awareness and training;
- Comply with system-specific policy as described in An Introduction to Computer Security: The NIST Handbook (March 16, 1995);
- Define appropriate limits on interconnections to other systems;
- Define service provision and restoration priorities;
- Be in writing;
- Be made available to every user prior to receiving authorization for access to the system;
- Include limitations on changing information, searching databases, or divulging information; and
- Specifically address restoration of service as a concern to all users of the system.
If the set of rules for the system is contained in a separate document, attach the document as an appendix to the plan and reference the appendix number in this section.

3.4 Planning for Security in the Life Cycle

Determine which phase(s) of the life cycle the system, or part of the system, is in. Describe how security has been implemented during the life cycle. Note: The life cycle phases, and what should be documented, are as follows:
- Initiation Phase: Were security requirements identified and documented during the system design? Were security controls and evaluation and test procedures developed before an RFP was issued? Did the solicitation documents (RFPs) include security requirements and evaluation/test procedures that vendors must comply with when providing the software/hardware?
- Implementation Phase: Were design reviews and system tests run prior to placing the system into production? Were the tests documented? Has the system been certified? Have additional security controls been added since the system was developed? Has the system undergone a technical evaluation to ensure that it meets applicable Federal laws, regulations, policies, guidelines, and standards? Include the date of the certification and accreditation. If the system has not been authorized yet, include a date when the accreditation request will be made.
- Operation/Maintenance Phase: Document the security activities during this phase in the security plan.
- Disposal Phase: Describe how the data contained within the system is moved to another system, archived, discarded, or destroyed, and the controls used to ensure confidentiality of the data during this phase. Is the sensitive data encrypted? How is the information cleared and purged from the system? Is information or media purged, overwritten, degaussed, or destroyed in some manner?
- Authorize Processing: Provide the date of authorization, name, and title of the management official authorizing processing of the system.

3.5 Security Control Measures

Security control measures implemented or planned to meet <enter name or acronym here> security requirements are addressed in the following control categories, in accordance with OMB A-130:
- Operational Controls - Controls that include physical and environmental protection, emergency planning, audit and variance detection, maintenance of application software, documentation, and periodic checks for viruses.
- Technical Controls - Controls to protect against unauthorized access or misuse and to facilitate detection of security violations.

For each requirement, a status is indicated as to whether the control measure(s) is:
- In Place - controls that are operational and judged to be effective.
- Planned - specific control measures (e.g., new, enhanced) that will be implemented for the system.
Note: For each security control, check off the appropriate box indicating whether it is in place, planned, in place and planned, or not applicable. Each section has an example of what the security control response should look like. In some instances we responded with a positive response (i.e., the security control mechanism is in place). In some instances we showed what the response would look like if the activity was ongoing (i.e., the security control mechanism is planned). These examples are shown to assist the preparer in completing the remainder of the application security plan.

4.0 OPERATIONAL CONTROLS

These are the day-to-day procedures and mechanisms used to protect production applications.

4.1 Personnel Security

Appropriate reference and/or background checks should be conducted for employees and contractors who have the capability to circumvent controls or who have access to sensitive data. In Place Controls: All employees and contractors are subject to background investigations prior to employment. Individuals occupying positions with a higher level of sensitivity designation are subject to more in-depth background investigations.

4.2 Physical and Environmental Protection

The physical and environmental protection of the equipment and peripherals supporting <enter system name or acronym here> operations, including all physical protection devices such as fire extinguishers, locks, or video cameras, must be ensured. In Place Controls: Describe all physical and environmental protection mechanisms in place. Note: Discuss the physical protection in the area where application processing takes place (e.g., locks on terminals, physical barriers around the building and processing area). Some examples of physical/environmental controls include:
- Card keys;
- Cipher locks;
- Humidifiers;
- Fire extinguishers;
- Surge suppressors;
- Uninterruptible power supply;
- Controlled access by security guards; and
- Intrusion, smoke, water, and heat detectors.

4.3 Production Input and Output Controls

Formal procedures for the handling of input data by properly screened persons and for maintaining an audit trail should be prepared. Printouts and storage media should be identified as to content and sensitivity, and sensitive/critical data media should be securely stored. Data media and storage containing sensitive data should be overwritten before being released outside the original owner's control. In Place Controls: All users handling input data are properly screened. The audit trail capability provided by the computer and network operations system for <enter name or acronym here> has been implemented to ensure individual accountability. Planned Controls: Development of procedures for identifying printouts and storage media as to their contents and sensitivity, as well as a policy regarding audit data retention, is planned. Note: Describe the security controls used for marking, handling, processing, storage, and disposal of input and output information. Also describe the labeling and distribution procedures for information media. The following topics should be discussed in this section:
- User support/help desk capabilities;
- Procedures to ensure that authorized users have access only to the data they need and that unauthorized users are denied access;
- Procedures for ensuring that only authorized users pick up, receive, or deliver input and output information and media to the major application system;
- Audit trails for receipt of sensitive data for both input and output data;
- External and internal labeling for sensitivity (i.e., Privacy Act or proprietary labeling are examples);
- Media storage and environmental protection and controls of the off-site storage site, if applicable;
- Procedures for sanitization of electronic media;
- Shredding or other destructive measures for hardcopy media that is no longer needed or that needs protection because of the sensitivity of the data contained in the printed material; and
- Destruction of softcopy media that is no longer in use or is not usable.

4.4 Contingency Planning

Contingency plan and disaster recovery procedures should be developed to provide reasonable assurance that critical data processing support can be continued or resumed quickly if normal system operations are interrupted. Planned Controls: The Contingency Plan is in the process of being completed. The completed Contingency Plan will contain all the following elements:
- Any agreements for backup processing;
- Documented backup procedures including frequency and scope;
- Coverage of backup procedures;
- Location of stored backups;
- Generations of backups kept;
- Contingency plan testing (is testing done and how often); and
- Personnel trained on implementation of the plan.
Note: Ensure that the contingency plan contains the information listed above. If not, show a date as to when the changes will be made to the existing contingency plan. The sample shown for this section shows a planned activity.

4.5 Application Software and Maintenance Controls

Controls are needed to monitor the installation of, and updates to, application software to ensure that the software functions as expected and that a historical record is maintained of application changes. In Place Controls: Application software and maintenance controls exist for <enter name or acronym here>. Note: Describe the software configuration policy and the products and procedures used in auditing for, or preventing, illegal use of shareware or copyrighted software. Address the following:
- Was the application software developed in house or under contract?
- Does the Government own the software?
- Was the application software received from another Federal agency with the understanding that it is Federal Government property?
- Is the application software a copyrighted commercial off-the-shelf product or shareware?
- Describe the change control process. Are all changes to the application software documented?
- Are test data "live" or made-up data? Are test results documented?
- How are emergency fixes handled?
- Is there an organizational policy in place against illegal use of copyrighted software or shareware?
- Are software warranties managed to minimize the cost of upgrades and cost reimbursement or replacement for deficiencies?

4.6 Documentation

Descriptions of the hardware/software policies and procedures related to the security of the application, and descriptions of software, hardware, and configuration or cable charts, should be available for those who need them. In Place Controls: Documentation on <enter name or acronym here> application usage and related processing policies and procedures has been developed and is updated periodically. Planned Controls: Development of documentation on managing and administering the security of <enter name or acronym here> is planned. Note: The major application system documentation should include the following:
- Vendor-supplied documentation;
- Application requirements;
- Application security plan;
- General support system security plan;
- Application program documentation and specifications;
- Testing procedures and results;
- Standard operating procedures;
- Emergency procedures;
- Memoranda of Understanding (MOU) with interfacing systems;
- User rules/procedures;
- User manuals;
- Risk assessment;
- Backup procedures;
- Certification documents and statements;
- Accreditation statements; and
- Contingency plans.

4.7 Security Awareness and Training

A security awareness and training program has been established to ensure that employees, contractors, and volunteer personnel are aware of their security responsibilities and how to fulfill them. The program should include new employee security briefings, annual refresher training, and training for information technology and security personnel. In Place Controls: A security awareness and training program has been implemented to ensure that all employees, contractors, and volunteer personnel are aware of their security responsibilities and how to fulfill them. The end-user awareness training is conducted on an ongoing basis. Information technology and security personnel attend conferences and specialized training to obtain additional security training. Note: This section should include the following:
- The type of awareness program in place for the system (i.e., posters, booklets, formal training, etc.);
- The type and frequency of the system-specific training provided to employees and contractor personnel. This can include seminars, workshops, formal classroom, focus groups, role-based training, on-the-job training, and one-on-one training; and
- Procedures to ensure that employees and contractors have been provided adequate training on the system.

5.0 TECHNICAL CONTROLS

These are controls within the application that enable access control and audit of user activities, and controls provided by the general support system that augment application security.

5.1 User Identification and Authentication

Controls must be in place to verify the identity of a station, originator, or individual before allowing access to the application. In Place Controls: <Enter name or acronym here> security controls have been configured to require all users to enter a valid ID/password combination for user identification and authentication prior to allowing access. Describe all the identification and authentication mechanisms in place for the system. Note: In this section describe the system's access controls. Include the following information:
- Describe the method of user authentication (i.e., password, token, or biometric).
- If a password system is used, provide the following specific information: allowable character set; password length (i.e., minimum and maximum); password aging time frames and enforcement approach; number of generations of expired passwords disallowed for use; procedures for password changes; procedures for handling lost passwords; and procedures for handling password compromise.
- Indicate the frequency of password changes, describe how password changes are enforced, and identify who changes the passwords.
- State the number of invalid access attempts that may occur for a given user identifier or access location and describe the actions taken when that limit is exceeded.
- Describe any policies that provide for bypassing user authentication requirements, single sign-on technologies, and any compensating controls.
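
To make the password rules concrete, the sketch below (in Python) shows how documented rules of this kind might be enforced in code. All parameter values (length limits, character set, reuse window) are invented examples; real values must come from the approved security plan.

    import re

    # Example password rules; actual values must come from the security plan.
    MIN_LENGTH = 8
    MAX_LENGTH = 16
    DISALLOWED_GENERATIONS = 5  # expired passwords that may not be reused

    def password_meets_policy(candidate, previous_passwords):
        """Check a candidate password against the documented rules."""
        if not MIN_LENGTH <= len(candidate) <= MAX_LENGTH:
            return False
        # Allowable character set: letters, digits, and basic punctuation.
        if not re.fullmatch(r"[A-Za-z0-9!@#$%^&*]+", candidate):
            return False
        # Reject reuse of the last N generations of passwords.
        if candidate in previous_passwords[-DISALLOWED_GENERATIONS:]:
            return False
        return True

    print(password_meets_policy("Spr1ng#24", ["0ldPass!9"]))  # True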

5.2 Logical Access Control

Standards and procedures relating to user IDs, passwords, and access privileges should be developed to maintain and enforce security. In Place Controls: The Company has established accounts management procedures applicable to all systems and applications, including <enter name or acronym here>.

5.3 Public Access Controls

Additional security controls are needed to protect the integrity of the system if the public is allowed access. In Place Controls: <Enter name or acronym here> does not allow access by the general public. Note: If this is a public access system, list and discuss the controls that provide protection to the system. Examples of public access controls may include:
- Some form of identification and authentication;
- Access control to limit what the user can read, write, modify, or delete;
- Controls to prevent public users from modifying information on the system;
- Prohibiting public access to "live" databases;
- Verifying that programs and information distributed to the public are virus-free; and
- Audit trails.

5.4 Audit Trail

An automated mechanism should be operational that will create, maintain, and protect an audit trail of the client's and administrators' actions so that all security-relevant events can be traced to a specific client for accountability. The audit trail mechanism should record failed logon attempts. In Place Controls: <Enter name or acronym here> has an automated audit trail mechanism that records security-relevant events. The audit data includes failed and successful user logons, failed and successful access and operations on data (by session), changes to a user's security privileges, changes to the security configuration, and creation and deletion of objects. Note: In this section describe the measures used to protect system usage and audit trail logs. Discuss the following:
- System usage and audit trail logs produced by the general support system on which the system resides;
- Linkages between logs maintained by the system and the supported application system;
- How the actions of individual users of the system may be monitored through the system logs; and
- Procedures used for reviewing logs and audit trail reports, and how long such logs are stored.
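
A minimal sketch of such an audit trail mechanism, using Python's standard logging module, is shown below; the event names, record fields, and log file name are illustrative assumptions.

    import logging

    # Write security-relevant events to a dedicated, timestamped audit log.
    audit = logging.getLogger("audit")
    handler = logging.FileHandler("audit_trail.log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    audit.addHandler(handler)
    audit.setLevel(logging.INFO)

    def record_event(user_id, event, outcome):
        """Record a security-relevant event traceable to a specific user."""
        audit.info("user=%s event=%s outcome=%s", user_id, event, outcome)

    record_event("jsmith", "logon", "failed")              # failed logon attempt
    record_event("jsmith", "logon", "success")
    record_event("jsmith", "privilege_change", "success")  # privilege change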

5.5 Complementary Controls Provided by Support Systems

Insert any complementary controls that are in place for the network or general support system, or where the system is operated outside of your management control. In Place Controls: <Enter system name or acronym here> is protected from potential abuse or misuse from outside entities by the firewall that is in place on the company WAN connection to the Internet. Note: In this section discuss the following:
- List any controls in place on the general support system that add protection for the application;
- If the application is processed on another system(s), include the name of the organization, the system name, and, if one exists, its unique system identification; and
- Indicate how the application owner has acknowledged understanding of the risk to application information that is inherent in processing on the general support system or in transmitting information over networks.

APPENDIX C-11 PROJECT MANAGEMENT PLAN


The Project Management Plan is prepared for all projects. It is one of several key project-planning documents that use a building-block approach to planning. It is a vehicle for documenting project scope, tasks, schedule, allocated resources, and interrelationships with other projects. It also provides details on the involved functional units, required job tasks, cost and schedule performance measurement, and milestone and review scheduling. Revisions to the Project Management Plan occur at the end of each phase and as information becomes available. Software tools designed for work breakdown structures (WBSs), Gantt charts, network diagrams, and activity detail reports are available and should be used to complete the project management plan. The size of the project management plan should be commensurate with the size and complexity of the systems development effort and should generally follow the outline attached.

1.0 INTRODUCTION

This section describes the purpose and scope of the Project Management Plan.

1.1 Project Description

This section provides a description of the project in as much detail as is required to understand the nature of the project. Identify the project name and code, state the project's objective(s), and give the date the plan was finalized in the Planning Phase.

1.2 Project Background

This section describes why the project is important to the organization, its mission, and the capabilities the project will provide to the organization. Include any background or history that is important to understanding the project.

1.2.1 Project Development Strategy
This section provides an overview of the development strategy selected for the project. For example, this strategy might include prototyping the system, use of commercial off-the-shelf software, or conversion of an existing system from one hardware and software family to another.

1.2.2 Organization of the Project Plan
This section briefly describes the organization of the Project Management Plan.

1.3 Points of Contact

This section identifies the key points of contact for the project management plan, including the System Proponent, IRM Manager, and Project Manager. Identify any additional points of contact.

1.4 Project References

This section is a bibliography of key project references and deliverables produced before this point. For example, these references might include cost-benefit analyses, existing documentation describing internal processes, or existing documentation of the system if the project is a conversion.

1.5 Glossary

This section provides a glossary of all terms and abbreviations used in the plan. If the glossary is several pages in length, include it as an appendix.

2.0 ORGANIZATION AND RESPONSIBILITIES

This section should include the various organizations and staff titles, roles, and responsibilities involved in the development project. Describe team structures, reporting responsibilities and relationships, and guidelines for status reporting internally within the Information Resources Management Office and externally for any contractor support. Also provide a roles and responsibilities matrix. Identify the following key organization components:
- Organization sponsor for the project
- Manager responsible for the day-to-day administration of the project (if different from the sponsor)
- Quality Assurance (QA) organization
- Configuration Management (CM) organization

3.0 PROJECT DESCRIPTION, SCHEDULE, AND RESOURCES

This section lists all tasks/activities to be completed within each phase of the project. If possible, use diagrams and tables (automated tools) to list the tasks and show any possible relationships among them. Repeat any subsection for each known task within the project. This section should provide a detailed description of each task and its schedule, budget, and management. Also include an estimate of each software development phase-related work effort and deliverables. Note: The actual structure of this subsection may be organized as best suits the project. For example, suppose the project has activities in the Requirements Definition and Design Phases. Sections 3.1, Project Work Breakdown Structure, through 3.7, Risk Management, could be repeated for each of these phases; or Sections 3.1 through 3.7 could be listed once, and subsections within those sections could address the two phases separately.

3.1 Project Work Breakdown Structure

This section describes the WBSs required for the project. The WBS is a family-tree structure that relates to products produced and tasks performed at the various phases of the project life cycle. A WBS displays and defines the product(s) to be developed or produced and relates the elements of work (tasks) to be accomplished to each other and to the end product(s). Typically, three levels of WBSs are developed during the system development process: Summary, Project, and Contract. A WBS Dictionary is also helpful for creating and recording the WBS elements.

3.1.1 Summary Work Breakdown Structure
This section describes the Summary WBS, a high-level WBS that covers the first three levels of the Project WBS. The Summary WBS is used for management presentations but is not used for detailed day-to-day project management. The structure of the Summary WBS may vary depending on the nature of the project.

3.1.2 Project Work Breakdown Structure
This section describes the Project WBS, the detailed WBS that is used for the day-to-day management of a project. The Project WBS includes all important products and work elements, or tasks, of the project, regardless of whether these tasks are performed by company personnel or by a contractor. The Project WBS may be modified, if necessary, during the life cycle. Work elements requiring more than 2 person-weeks of calendar time should be subdivided until the largest bottom-level work element represents work that can be accomplished in an interval of at least 1 person-week, but not more than 2 person-weeks. This subdivision may appear to be arbitrary; however, the bottom-level work elements should focus on finite tasks performed by a single individual. When that is done, the application of standard productivity rates can generally predict the expected duration of the work element and eliminate wide variation in work element duration. For a software system development project, the structure of the Project WBS should also reflect the project life-cycle approach. The structure of the Project WBS may vary depending on the nature of the project and should be customized by the Project Manager to reflect the particular project and the particular path through the life cycle. For example, a full-scale initial information system development project and a software conversion process would be expected to have somewhat different WBSs.

3.1.3 Contract Work Breakdown Structure
This section describes the Contract WBS (CWBS), a further breakdown of the contract-specific WBS that covers the products and work elements, or tasks, from the Project WBS that will be performed by a contractor. In addition to items derived from the Project WBS, the CWBS includes contractor-specific items that may not be reflected in the Project WBS. Depending on the nature of the project, the contractor may be responsible for a given part of the project development activities (such as QA), for a specific part of the development life cycle (such as the Requirements Definition phase), or for the entire development process. A preliminary CWBS may be specified in the acquisition plan. The contract line items, configuration items, contract work statement tasks, contract specification, and contractor responses will typically be expressed in terms of the preliminary CWBS.

3.1.4 Work Breakdown Structure Dictionary
A WBS Dictionary provides detailed descriptions of each WBS element. Each WBS Dictionary entry should contain a title that is the same as the WBS element it amplifies; a narrative describing the work represented by the element; the effort required (in person-hours); the most likely duration (in calendar days); and references to any special skills or resources required to accomplish the work. WBS Dictionary entries should be completed only for the lowest-level WBS elements. Create one or more WBSs and a WBS Dictionary and generate the output in the form of graphic charts.
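
To make the entry contents concrete, the sketch below models a WBS Dictionary entry as a simple data structure (in Python); the field names and the sample element are hypothetical.

    from dataclasses import dataclass, field

    @dataclass
    class WBSDictionaryEntry:
        """One entry per lowest-level WBS element, per the fields above."""
        element_id: str             # e.g., "3.1.4"
        title: str                  # same title as the WBS element it amplifies
        narrative: str              # the work represented by the element
        effort_person_hours: int
        duration_calendar_days: int
        special_skills: list = field(default_factory=list)

    # Hypothetical bottom-level element sized between 1 and 2 person-weeks.
    entry = WBSDictionaryEntry(
        element_id="3.1.4",
        title="Prepare WBS Dictionary",
        narrative="Write dictionary entries for all bottom-level elements.",
        effort_person_hours=60,
        duration_calendar_days=8,
        special_skills=["project scheduling tool"],
    )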

3.2 Resource Estimates

This section should estimate, for each WBS element, the total amount of human resource effort required, by resource category. Use available tools to estimate, store, and output resource requirements per WBS element to use in the next component of the project plan.

3.3 Schedule

This section presents the project schedule. Assumptions made about task duration, resource availability, milestones, constraints, and optimization criteria should be carefully documented. Provide the schedule in the form of Gantt charts, milestone tables, and lists of deliverables and dates.

3.4 Acquisition Plan

This section describes the addition (and eventual departure) of project resources via the Acquisition Plan (see Appendix C-6). Each type of resource should be considered in this Acquisition Plan. The plan should specify the source of the resources, the numbers of each, the start date for each, and the duration needed. It also should consider additional, associated resource requirements, such as space, hardware and software, office equipment, other facilities, and tools.

3.5 Communication Plan

This section should discuss frequencies, target audiences, media, sources, formats, locations, forms, and types of information delivered in each form of communication. Careful thought should be given to satisfying existing standards and following existing conventions, and consideration should also be given to improving the communication process in general and to ensuring that communication is enabled and simplified for all project team members and external entities. Periodic status reports, newsletters, bulletins, problem reports, issues lists, status and review meetings, team meetings, and other forms of communication should all be carefully considered and documented when creating the communication plan. Output the communication plan in the form of a communication item/audience matrix.
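
One possible shape for that communication item/audience matrix is sketched below (in Python); the items, audiences, and markings are illustrative assumptions, not a prescribed set.

    # Hypothetical communication item/audience matrix.
    items = ["Status report", "Issues list", "Review meeting minutes"]
    audiences = ["Sponsor", "Project team", "QA", "Contractor"]

    # An "X" marks which audience receives which communication item.
    matrix = {
        ("Status report", "Sponsor"): "X",
        ("Status report", "Project team"): "X",
        ("Issues list", "Project team"): "X",
        ("Issues list", "QA"): "X",
        ("Review meeting minutes", "Contractor"): "X",
    }

    # Print the matrix as a simple table.
    print("Item".ljust(24) + "".join(a.ljust(14) for a in audiences))
    for item in items:
        row = "".join(matrix.get((item, a), "-").ljust(14) for a in audiences)
        print(item.ljust(24) + row)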

3.6 Project Standards and Procedures

While the estimating and scheduling activities are going on, assign members of the planning team to identify and document standards and procedures for the project team. These standards and procedures may already be in place Agency-wide, but, if not, they should be discovered and/or determined now. These include technical standards, business-related standards, and QA standards. Technical standards and procedures include such things as naming conventions, walk-through requirements, CM rules, security standards, documentation requirements, tools, modeling techniques, and technical contractual obligations. Business-related standards and procedures include such things as procedures for scope changes, requirements changes, costing, earned value implementation, and sign-off standards. QA standards and procedures include such things as review processes, testing procedures, and defect tracking requirements. QA may also provide standards on the use of metrics.

3.7 Risk Management

For this section, refer to Appendix C-5, Risk Management Plan, created during the System Concept Development Phase. Address approaches for mitigating the effects of the identified risk factors. Add subsections as necessary to separate different categories of risk or different risk-inducing factors.

4.0 SECURITY AND PRIVACY

This section should review security and privacy requirements for the project and should ensure that the Project Management Plan reflects these requirements.

4.1 Privacy Issues

This section identifies the privacy issues that should be addressed during the phases of the information system development effort and defines the process to be established for addressing the privacy issues throughout the life cycle. It is important that there be a preliminary analysis of the potential privacy effects of the proposed information system. The purpose will be to establish for the project team and the review process an awareness of the privacy-related issues that will have to be addressed as the system is planned, developed, and implemented.

4.2 Computer Security Activities

This section reviews and evaluates requirements for security risk assessment and computer security planning to determine that all system vulnerabilities, threats, risks, and privacy issues will be identified and that an accurate determination will be made of the sensitivity of the system and information. Refer to Chapter 4, System Concept Development Phase; Chapter 6, Requirements Analysis Phase; and Chapter 7, Design Phase, for more information on security considerations.

APPENDIX C-12 VERIFICATION AND VALIDATION PLAN


This plan is adapted from FIPS PUB 101, June 1983, Guideline for Lifecycle Validation, Verification, and Testing of Computer Software.

1.0 BACKGROUND AND INTRODUCTION

This section establishes the context for the V&V document. It is brief and introductory in nature. It focuses on those aspects of the problem and/or solution that influence the V&V needs and approach.

1.1 Statement of problem

Describe the problem in concise terms.

1.2 Proposed solution

Describe the proposed solution that influences the V&V needs and approach.

1.3 References/related documents

List all documents referenced in the plan or that impact the V&V effort. For selected significant documents, briefly describe how the referenced document impacts the V&V effort.

2.0 V&V REQUIREMENTS AND MEASUREMENT CRITERIA

This section presents the V&V requirements in one of several formats: the total V&V requirements, with all worksheets and phase information; a summary of requirements information; or a statement of project-level information, with phase data presented later.

2.1 V&V Requirements and their Importance

Briefly describe the functional, performance, reliability, or other requirements and why they are important to this system or subsystem.

2.2 Measurement Criteria for each Requirement

Briefly describe the general, product-specific, or phase-specific measurement criteria for each of the above requirements.
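
As an illustration, the sketch below (in Python) pairs a hypothetical V&V requirement with a measurement criterion; the identifier, wording, threshold, and phase are invented examples.

    from dataclasses import dataclass

    @dataclass
    class VVRequirement:
        """A V&V requirement paired with its measurement criterion."""
        req_id: str
        description: str   # functional, performance, reliability, etc.
        importance: str    # why it matters to this system or subsystem
        criterion: str     # how satisfaction will be measured
        phase: str         # phase in which it is verified

    example = VVRequirement(
        req_id="VV-PERF-001",
        description="Query responses are returned within 3 seconds.",
        importance="Users process cases interactively at peak load.",
        criterion="95th-percentile response time <= 3 s during load testing.",
        phase="Integration and Test",
    )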

2.3 References/related documents

List any larger documents in which the above requirements are referenced, such as the SBD, CBA, or Feasibility Study.

3.0 PHASE BY PHASE V&V PLANS

First, describe the V&V approach by phases, products, major reviews and checkpoints, and practices common to all phases. Then present the specific activities to be carried out phase by phase.

3.1 Project Background and Summary Information
3.2 Project Phases

Cover the major reviews (both management and technical) and the Planning Phase V&V activities and products.

For each phase, list the following information:
- V&V techniques and tools selected
- V&V methods of automated analysis
- Reviews and other activities
- Required support tools
- Roles and responsibilities
- Schedules
- Budgets
- Personnel

3.3 Requirements Analysis Phase
3.4 Design Phase
3.5 Development Phase
3.6 Integration and Test Phase
3.7 Implementation Phase
3.8 Operations and Maintenance Phase

Appendix A Project Computing Resources
A. Environmental Considerations
B. Technical Issues
C. Project Constraints

Appendix B Technique and Tool Selection Information
A. Candidate list of techniques and tools
B. Rationale for selection of techniques and tools

APPENDIX C-13 SYSTEMS ENGINEERING MANAGEMENT PLAN


The following sections should be considered for inclusion in the SEMP. Additionally, if both a government and a contractor SEMP are needed, the contents should be coordinated and integrated such that the technical management plans for the project are unambiguous to all concerned.

1.0 Introduction.

This section identifies the project, the applicable organization (contractor, government, or both), the date approved, and the revision version. If applicable, the title page should show an internal document control number.

1.1 Executive Summary.

This section describes the technical plan summary and applicability. If applicable, it also lists major subordinate plans.

1.2 Table of Contents.
1.3 Project Summary.

Briefly describe the project, to include complexities and challenges that are addressed by the technical development effort.

1.4 Scope.

Describe the applicability of the SEMP and assign general responsibilities for the required activities shown in the SEMP.

1.5 Applicable Documents.

List, by title, version number, and issue date, the documents applicable to the technical management efforts described in the SEMP.

2.0 System Engineering Process.

This section describes the system engineering process to be applied to the project and assigns specific organizational responsibilities for the technical effort, to include contracted or subcontracted technical tasks. This section also details or references technical processes and procedures to be applied to the effort. Where these processes or procedures are developed as part of the project, the need dates and development schedule should be shown.

2.1 Systems engineering process planning.

This section addresses planning for the key system outputs, to include products, processes, and trained people. The following may be included in this section:
-- Major products. Include major specification and product baseline development and control
-- System engineering process inputs. Include major requirements documents and resolution instructions for conflicting requirements
-- Technical objectives. Include cost, schedule, and key performance objectives
-- Technical work breakdown structure. Describe how and when the technical work breakdown structure will be developed, to include development and tracking tool sets usage
-- Subcontracted technical efforts. Describe the integration of contracted and subcontracted technical efforts
-- Processes. Describe the use of established technical processes and standards on the project
-- Process development. Describe processes to be developed as part of the project, together with the schedule for their development
-- Constraints. List any significant constraints to the technical effort

2.2 Requirements Analysis.

This section describes the methods, procedures, and tools used to analyze project requirements. It should identify the specific tools to be used to capture and trace project requirements.

2.3 Functional analysis/allocation.

This section addresses the methods and tools used to analyze the project requirements and allocate them to project component functional requirements.

2.4 Synthesis.

This section addresses the methods and tools used to analyze the functional requirements and allocate those requirements to a physical project component.

2.5 System Analysis.

This section describes the processes and procedures to be used for formal and informal trade studies, to include system and cost-benefit effectiveness analyses. Also included in this section are the risk management approaches to be used on the project.

2.6 System Control.

This section describes the control strategies needed for the following:
-- Configuration management
-- Data management
-- Technical performance measurement
-- Quality control
-- Interface management
-- Schedule tracking and control
-- Formal technical reviews
-- Informal technical reviews/interchanges
-- Subcontractor/supplier control
-- Requirements control

3.0 Technology Refreshment.

This section describes the plans to establish and maintain a viable technological baseline during project development. This section also describes the strategy to be used during development to ensure refreshment remains a viable option during future project operations.

4.0 Implementation Planning.

This section outlines the planned activities leading to project implementation. Included in this area may be plans for:
-- Test and evaluation
-- Transition of technical baselines to operations and maintenance
-- Supportability planning
-- Facilities planning
-- Operations and user training development
-- Project integration into an existing system-of-systems architecture

5.0 Specialty Engineering Planning.

This section outlines any special subject applicable to the project not covered in a previous section.

APPENDIX C-14 FUNCTIONAL REQUIREMENTS DOCUMENT


1.0 INTRODUCTION

A requirement is a condition that the application must meet for the customer to find the application satisfactory. A requirement has the following characteristics:
- It provides a benefit to the organization. That benefit is directly traceable to the business objectives and business processes in the 5-Year IT or Strategic Plan.
- It describes the capabilities the application must provide in business terms. It does not describe how the application provides that capability.
- It does not describe such design considerations as computer hardware, operating system, and database design.
- It is stated in unambiguous words. Its meaning is clear and understandable.
- It is verifiable.

The functional requirements document (FRD) is a formal statement of an application's functional requirements. The FRD has the following characteristics:
- It demonstrates that the application provides value to the company in terms of the business objectives and business processes in the 5-year plan.
- It contains a complete set of requirements for the application. It leaves no room for anyone to assume anything not stated in the FRD.
- It is solution independent. The FRD is a statement of what the application is to do, not of how it works. The FRD does not commit the developers to a design. For that reason, any reference to the use of a specific technology is entirely inappropriate in an FRD.

1.1 Project Description

Provide a brief overview of the project.

1.1.1 Background

Summarize the conditions that created the need for the application. 1.1.2 Purpose Describe the business objectives and business processes from the CONOPS document and the CBA that this application supports. 1.1.3 Assumptions and Constraints Assumptions are future situations, beyond the control of the project, whose outcomes influence the success of a project. The following are examples of assumptions: Availability of a Pending Court decisions that Developments in technology hardware/software have not been platform legislation rendered

Constraints are conditions outside the control of the project that limit the design alternatives. The following are examples of constraints:
-- Government regulations
-- Standards imposed on the solution
-- Strategic decisions

Be careful to distinguish constraints from preferences. Constraints exist because of real business conditions. Preferences are arbitrary. A delivery date is a constraint only if real business consequences result from not meeting the date: if failing to have the subject application operational by the specified date places the company in legal default, the date is a constraint. A date chosen arbitrarily is a preference. Preferences, if included in the FRD, should be noted as such.

1.1.4 Interfaces to External Systems

Name the applications with which the subject application must interface. State the following for each such application:
-- Name of application
-- Owner of application (if external to the Company)
-- Details of interface (only if determined by the other application)

1.2 Points of Contact

List the names, titles, and roles of the major participants in the project. At a minimum, list the following:
-- Project leader
-- Development project leader
-- User contacts
-- Employee whose signature constitutes acceptance of the FRD

1.3 Document References

Name the documents that were sources of this version of the FRD. Include meeting summaries, white paper analyses, CONOPS, CBA, and other System Development Life Cycle deliverables, as well as any other documents that contributed to the FRD. Include the Configuration Management identifier and date published for each document listed.

2.0 FUNCTIONAL REQUIREMENTS

The functional requirements describe the core functionality of the application. This section includes the data and functional process requirements.

2.1 Data Requirements

Describe the data requirements by producing a logical data model, which consists of entity relationship diagrams, entity definitions, and attribute definitions. This is called the application data model. The data requirements describe the business data needed by the application system. Data requirements do not describe the physical database.
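Purely as an illustration of what an application data model captures, the short Python sketch below records two hypothetical entities, their attributes, and the relationship between them. Every name in it is invented for the example; an entity relationship diagram would carry the same information.

    from dataclasses import dataclass

    # Two illustrative entities from a hypothetical application data model.
    @dataclass
    class Applicant:
        applicant_id: str     # attribute: unique business identifier
        name: str             # attribute: full legal name

    @dataclass
    class Application:
        application_id: str   # attribute: unique business identifier
        status: str           # attribute: current processing status
        applicant: Applicant  # relationship: each Application is filed by one Applicant

    jane = Applicant("A-1001", "Jane Doe")
    print(Application("N-2024-17", "pending", jane))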

2.2 Functional Process Requirements

Process requirements describe what the application must do. Process requirements relate the entities and attributes from the data requirements to the users' needs. State the functional process requirements in a manner that enables the reader to see broad concepts decomposed into layers of increasing detail. Process requirements may be expressed using data flow diagrams, text, or any technique that provides the following information about the processes performed by the application:
-- Context
-- Detailed view of the processes
-- Data (attributes) input to and output from processes
-- Logic used inside the processes to manipulate data
-- Accesses to stored data
-- Processes decomposed into finer levels of detail

3.0 OPERATIONAL REQUIREMENTS

Operational requirements describe the non-business characteristics of an application. State the requirements in this section. Do not state how these requirements will be satisfied. For example, in the Reliability section, answer the question, "How reliable must the system be?" Do not state what steps will be taken to provide reliability. Distinguish preferences from requirements. Requirements are based on business needs. Preferences are not. If, for example, the user expresses a desire for sub-second response but does not have a business-related reason for needing it, that desire is a preference.

3.1 Security

The Security section describes the need to control access to the data. This includes controlling who may view and alter application data. State the consequences of the following breaches of security in the subject application:
-- Erasure or contamination of application data
-- Disclosure of Government secrets
-- Disclosure of privileged information about individuals

State the type(s) of security required. Include the need for the following as appropriate:
-- State if there is a need to control access to the facility housing the application.
-- State the need to control access by class of users. For example, "No user may access any part of this application who does not have at least a (specified) clearance."
-- State the need to control access by data attribute. State, for example, if one group of users may view an attribute but may not update it, while another type of user may update or view it.
-- State the need to control access based on system function. State, for example, if there is a need to grant one type of user access to certain system functions but not to others. For example, "This function is available only to the system administrator."
-- State if there is a need for accreditation of the security measures adopted for this application. For example, C2 protection must be certified by an independent authorized organization.
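To make such statements concrete, the following minimal Python sketch shows how class-of-user, data-attribute, and system-function access rules of this kind might be checked. The user classes, attributes, and functions are hypothetical assumptions, not part of this template.

    # Hypothetical access rules, for illustration only. Each user class maps to
    # the attributes it may view or update and the functions it may invoke.
    ACCESS_RULES = {
        "clerk": {"view": {"case_status"}, "update": set(),
                  "functions": {"query"}},
        "adjudicator": {"view": {"case_status"}, "update": {"case_status"},
                        "functions": {"query", "decide"}},
        "administrator": {"view": {"case_status"}, "update": {"case_status"},
                          "functions": {"query", "decide", "reset_passwords"}},
    }

    def may_update(user_class: str, attribute: str) -> bool:
        """Return True if this class of user may update the attribute."""
        rules = ACCESS_RULES.get(user_class)
        return rules is not None and attribute in rules["update"]

    def may_invoke(user_class: str, function: str) -> bool:
        """Return True if this class of user may invoke the system function."""
        rules = ACCESS_RULES.get(user_class)
        return rules is not None and function in rules["functions"]

    assert may_update("adjudicator", "case_status")
    assert not may_update("clerk", "case_status")    # clerks may view, not update
    assert may_invoke("administrator", "reset_passwords")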

3.2 Audit Trail

List the activities that will be recorded in the application's audit trail. For each activity, list the data to be recorded.
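As one illustration of "for each activity, list the data to be recorded," the sketch below builds a single audit-trail entry for a hypothetical attribute update; the field names are assumptions chosen for the example.

    import json
    from datetime import datetime, timezone

    def audit_record(user_id: str, activity: str, attribute: str, old, new) -> str:
        """Build one audit-trail entry; the fields shown are illustrative."""
        return json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user_id": user_id,      # who performed the activity
            "activity": activity,    # what was done
            "attribute": attribute,  # what data was affected
            "old_value": old,        # value before the change
            "new_value": new,        # value after the change
        })

    print(audit_record("jsmith", "update", "case_status", "pending", "approved"))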

3.3 Data Currency

Data currency is a measure of how recent data are. This section answers the question, "When the application responds to a request for data, how current must those data be?" Answer that question for each type of data request.

3.4 Reliability

Reliability is the probability that the system will be able to process work correctly and completely without being aborted. State the following in this section:

What damage can result from this system's failure?
-- Loss of human life
-- Complete or partial loss of the ability to perform a mission-critical function
-- Loss of revenue
-- Loss of employee productivity

What is the minimum acceptable level of reliability? State required reliability in any of the following ways:
-- Mean Time Between Failure is the number of time units the system is operable before the first failure occurs.
-- Mean Time To Failure is computed as the number of time units the system is operable divided by the number of failures during the time period.
-- Mean Time To Repair is computed as the number of time units required to perform system repair divided by the number of repairs during the time period.
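A worked example can make these measures easier to state. The sketch below computes Mean Time To Failure and Mean Time To Repair as defined above, plus availability as a commonly derived figure; all numbers are invented for illustration.

    # Assumed operating history for one reporting period (illustrative only).
    operable_hours = 720.0   # time units the system was operable
    repair_hours = 6.0       # total time units spent performing repairs
    failures = 3             # failures during the period
    repairs = 3              # repairs during the period

    # Mean Time To Failure: operable time divided by number of failures.
    mttf = operable_hours / failures    # 240.0 hours
    # Mean Time To Repair: repair time divided by number of repairs.
    mttr = repair_hours / repairs       # 2.0 hours
    # A common derived figure: availability over the period.
    availability = operable_hours / (operable_hours + repair_hours)  # about 0.992

    print(f"MTTF={mttf:.1f} h, MTTR={mttr:.1f} h, availability={availability:.3f}")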

3.5 Recoverability

Recoverability is the ability to restore function and data in the event of a failure. Answer the following questions in this section:
-- In the event the application is unavailable to users (down) because of a system failure, how soon after the failure is detected must function be restored?
-- In the event the database is corrupted, to what level of currency must it be restored? For example, "The database must be capable of being restored to its condition no more than one hour before the corruption occurred."
-- If the processing site (hardware, data, and onsite backup) is destroyed, how soon must the application be able to be restored?

3.6 System Availability

System availability is the time when the application must be available for use. Required system availability is used in determining when maintenance may be performed. In this section, state the hours (including time zone) during which the application is to be available to users. For example, "The application must be available to users Monday through Friday between the hours of 6:30 a.m. and 5:30 p.m. EST." If the application must be available to users in more than one time zone, state the earliest start time and the latest stop time. Include the times when usage is expected to be at its peak. These are times when system unavailability is least acceptable.

3.7 Fault Tolerance

Fault tolerance is the ability to remain partially operational during a failure. Describe the following in this section:
-- Which functions need not be available at all times?
-- If a component fails, what (if any) functions must the application continue to provide?
-- What level of performance degradation is acceptable?

For most applications, there are no fault tolerance requirements. When a portion of the application is unavailable, there is no need to be able to use the remainder of the application.

3.8 Performance

Describe the requirements for the following:
-- Response time for queries and updates
-- Throughput
-- Expected volume of data
-- Expected volume of user activity (for example, number of transactions per hour, day, or month)

3.9 Capacity

List the required capacities and expected volumes of data in business terms. For example, state the number of cases about which the application will have to store data: "The project volume is 600 applications for naturalization per month." State capacities in terms of the business. Do not state capacities in terms of system memory requirements or disk space.

3.10 Data Retention

Describe the length of time the data must be retained. For example, "Information about an application for naturalization must be retained in immediately accessible form for three years after receipt of the application."

4.0 REQUIREMENTS TRACEABILITY MATRIX

The requirements traceability matrix (RTM) provides a method for tracking the functional requirements and their implementation through the development process. Each requirement is included in the matrix along with its associated section number. As the project progresses, the RTM is updated to reflect each requirement's status. When the product is ready for system testing, the matrix lists each requirement, the product component that addresses it, and the tests that verify it is correctly implemented. Include columns for each of the following in the RTM:
-- Requirement description
-- Requirement reference in FRD
-- Verification method
-- Requirement reference in Test Plan
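To show the layout concretely, here is a minimal sketch that assembles an RTM with the four columns listed above and writes it to a CSV file; the requirement texts and references are invented examples, not prescribed content.

    import csv

    # Each row carries the four RTM columns named above (contents are invented).
    rtm_rows = [
        {"Requirement description": "Record applicant name and date of birth",
         "Requirement reference in FRD": "FRD 2.1.3",
         "Verification method": "Test",
         "Requirement reference in Test Plan": "TEMP 7.1.2"},
        {"Requirement description": "Restore database to within one hour of corruption",
         "Requirement reference in FRD": "FRD 3.5",
         "Verification method": "Demonstration",
         "Requirement reference in Test Plan": "TEMP 7.3.1"},
    ]

    # Write the matrix out so it can be updated as the project progresses.
    with open("rtm.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rtm_rows[0].keys()))
        writer.writeheader()
        writer.writerows(rtm_rows)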

Appendix A-Glossary

Include business terms peculiar to the application. Do not include any technology-related terms.

APPENDIX C-15 TEST AND EVALUATION MASTER PLAN


INTRODUCTION

The Test and Evaluation Master Plan (TEMP) identifies the tasks and activities that need to be performed so that all aspects of the system are adequately tested and the system can be successfully implemented. The TEMP documents the scope, content, methodology, sequence, and management of, and responsibilities for, test activities. The TEMP describes the test activities of the Subsystem Integration Test, the System Test, the User Acceptance Test, and the Security Test in progressively higher levels of detail as the system is developed. The TEMP provides guidance for the management of test activities, including organization, relationships, and responsibilities. The test case procedures may be included in the TEMP or in a separate document, depending on system size.

The users assist in developing the TEMP, which describes the nature and extent of tests deemed necessary. This provides a basis for verification of test results and validation of the system. The validation process ensures that the system conforms to the functional requirements in the FRD and that other applications or subsystems are not adversely affected. A test analysis report is developed at each level of testing to record the results of testing and certify readiness for system implementation (see the Integration and Test Phase).

Problems, deficiencies, modifications, and refinements identified during testing or implementation should be tracked under configuration control and tested using the same test procedures as those described in the TEMP. Specific tests may need to be added to the plan at that time, and other documentation may need updating upon implementation. Notification of implemented changes to the initiator of the change request/problem report and to the users of the system is also handled as part of the configuration control process.

1.0 PURPOSE

In this section, present a clear, concise statement of the purpose for the project TEMP and identify the application system being tested by name. Include a summary of the functions of the system and the tests to be performed.

2.0 BACKGROUND

This section should provide a brief description of the history and other background leading up to the system development process. Identify the user organization and the location where the testing will be performed. Describe any prior testing, and note results that may affect this testing.

3.0 SCOPE

This section describes the projected boundaries of the planned tests. Include a summary of any constraints imposed on the testing, whether they are because of a lack of specialized test equipment or constraints on time or resources. Describe constraints in greater detail in Section 5.1, Limitations.

4.0 GLOSSARY

This section provides a list of all terms and abbreviations used in this document. If the list is several pages in length, it may be placed as an appendix.

5.0 LIMITATIONS AND TRACEABILITY

This section elaborates on the limitations summarized in Section 3, Scope, and cross-references the functional requirements and detailed specifications to the tests that demonstrate or partially demonstrate that capability.

5.1 Limitations

This section describes limitations imposed on the testing, whether they are because of a lack of specialized test equipment or constraints on time or resources. Indicate what steps, if any, are being taken to reduce program risk because of the test limitation(s).

5.2 Traceability (Functional Requirements Traceability Matrix)

This section expands the traceability matrix created in the FRD by including test activities that address user requirements. The matrix begins with the user requirements and assists in tracing how the requirements are addressed in subsequent phases and documents, including the System Design Document and TEMP. The matrix may be broken up into segments, if appropriate. For example, a separate matrix of test plan sections that reference particular sections in the system design document in the Design Phase may be provided. The intent is to show that the test plan covers all functionality, performance, and other requirements associated with each design element (unit, module, subsystem, and system) in the system design document.

When a test supports a particular requirement, the relationship should be noted at their intersection in the matrix. The listed requirements may be explicitly stated or may be derived or implicit. All explicit requirements must be included. The granularity of the list should be detailed enough that each requirement is simple and testable.

6.0 TEST PLANS

This section describes the levels of tests that take place during development (integration, system, security, and user acceptance tests) and the planning that is needed. The test environment is described in terms of milestones, schedules, and resources needed to support testing. Include who is responsible for setting up the test environment, developing test data to be used during the test (if necessary), developing the tests, and performing the tests.

6.1 Test Levels

This section should include a list of the types of software testing to be performed. List all applicable levels and enter "Not applicable" if a particular level of testing does not apply to the project.

6.1.1 Subsystem Integration Test

This section discusses the tests that examine the subsystems made up of integrated groupings of software units and modules. This is the first level of testing where problem reports are generated; these reports are classified by severity, and their resolution is monitored and reported. Subsystem integration test results (including the test data sets and outputs produced from the tests) may be delivered as part of the final Test Plan, with the integration test analysis report, or as an appendix.

6.1.2 System Test

This section describes the type of testing that determines system compliance with standards and satisfaction of functional and technical requirements when executed on target hardware using simulated operational data files and prepared test data. System documents and training manuals are examined for accuracy, validity, completeness, and usability. During this testing period, software performance, response time, and ability to operate under stressed conditions are tested. External system interfaces are also tested. All findings are recorded in a system test analysis report.

6.1.3 User Acceptance Test

This section describes the tests performed in a nonproduction environment that mirrors the environment in which the system will be fielded. Every system feature may be tested for correctness and satisfaction of functional requirements. System interoperability, all documentation, system reliability, and the level to which the system meets user requirements are evaluated. Performance tests may be executed to ensure that screen response time, program run time, operator intervention requirements, and reconciliation issues are addressed.

6.1.4 Security Test

This section describes the tests performed to determine if the system meets all of the security requirements listed in the FRD. Include internal controls or application security features mentioned in the context of security testing. Security testing is performed in the operational (production) environment under the guidance of the security staff.

6.2 Test Environment and Schedules

This section provides a brief description of the inputs, outputs, and functions of the software being tested.

6.2.1 Software Description

This section lists the software being tested. Provide a description of the purpose of the software being tested and any interfaces to subsystems or other components of the system.

6.2.2 Milestones

This section lists the milestone events and dates for the testing.

6.2.3 Organizations and Locations

This section provides information on the participating organizations and the location where the software will be tested.

6.2.4 Schedule

This section shows the detailed schedule of dates and events for the testing by location. Events should include familiarization, training, test data set generation, and collections, as well as the volume and frequency of the input for testing.

6.2.5 Resource Requirements

This section and associated statements define the resource requirements for the testing.

6.2.5.1 Equipment. This section shows the types and quantities of equipment needed and the expected period of use.

6.2.5.2 Software. This section lists other software needed to support testing that is not part of the software being tested. This should include debugging software and programming aids; any current programs to be run in parallel with the new software to ensure accuracy; any drivers or system software to be used in conjunction with the new software to ensure compatibility and integration; and any software required to operate the equipment and record test results.

6.2.5.3 Personnel. This section lists the number of, skill types of, and schedules for personnel from the user, database, Quality Assurance, security, and development groups who will be involved in the test. Include any special requirements, such as multiple-shift operation or key personnel.

6.2.6 Testing Material

This section describes the documents needed to perform the tests. It could include software, resources, data, and other information.

6.2.7 Test Training

This section describes or references the plan for providing training in the use of the software being tested. Specify the types of training, personnel to be trained, and the training staff.

6.2.8 Test Methods and Evaluation

This section documents the test methodologies, conditions, test progression or sequencing, data recording, constraints, criteria, and data reduction.

6.2.8.1 Methodology. This section describes the general methodology or testing strategy for each type of testing described in this Test Plan.

6.2.8.2 Conditions. This section specifies the type of input to be used, such as real-time entered test data or canned data for batch runs. It describes the volume and frequency of the input, such as the number of transactions per second tested. Sufficient volumes of test transactions should be used to simulate live stress testing and to incorporate a wide range of valid and invalid conditions. Data values used should simulate live data and also test limit conditions.

6.2.8.3 Test Progression. This section describes the manner in which progression is made from one test to another, so the entire cycle is completed.

6.2.8.4 Data Recording. This section describes the method used for recording test results and other information about the testing.

6.2.8.5 Constraints. This section indicates anticipated limitations on the test because of test conditions, such as interfaces, equipment, personnel, and databases.

6.2.8.6 Criteria. This section describes the rules to be used to evaluate test results, such as the range of data values used, combinations of input types used, or maximum number of allowable interrupts or halts.

6.2.8.7 Data Reduction. This section describes the techniques that will be used for manipulating the test data into a form suitable for evaluation - such as manual or automated methods - to allow comparison of the results that are produced to those that should be produced.

7.0 TEST DESCRIPTION

This section describes each test to be performed. Tests at each level should include verification of access control and system standards, data security, functionality, and error processing. As the various levels of testing (subsystem integration, system, user acceptance, and security) are completed and the test results are documented, revisions or increments for the TEMP can be delivered. The subsections of this section should be repeated for each test within the project. If there are many tests, place them in an appendix.

7.1 Test Name

This section identifies the test to be performed for the named module, subsystem, or system and addresses the criteria discussed in the subsequent sections for each test.

7.1.1 Test Description

Describes the test to be performed. Tests at each level of testing should include those designed to verify data security, access control, and system standards; system/subsystem/unit functionality; and error processing as required.

7.1.2 Control

Describe the test control, such as manual, semiautomatic, or automatic insertion of inputs; sequencing of operations; and recording of results.

7.1.3 Inputs

Describe the data input commands used during the test. Provide examples of input data. At the discretion of the Project Manager, input data listings may also be requested in computer-readable form for possible future use in regression testing.

7.1.4 Outputs

Describe the output data expected as a result of the test and any intermediate messages or display screens that may be produced.

7.1.5 Procedures

Specify the step-by-step procedures to accomplish the test. Include test setup, initialization steps, and termination. Also include effectiveness criteria or pass criteria for each test procedure.
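As a hedged illustration of how a test description of this shape can translate into an executable procedure, the sketch below exercises a hypothetical validation routine with valid and invalid inputs (Section 7.1.3), compares actual to expected outputs (Section 7.1.4), and applies a simple pass criterion (Section 7.1.5). The routine and its rule are invented for the example.

    # Illustrative test procedure for a hypothetical validation routine.
    def validate_case_number(case_number: str) -> bool:
        """Stand-in for the unit under test: accepts 9-digit case numbers."""
        return case_number.isdigit() and len(case_number) == 9

    def run_test():
        # Inputs: valid and invalid values paired with expected outputs.
        cases = [("123456789", True), ("12345", False), ("ABCDEFGHI", False)]
        failures = []
        for value, expected in cases:
            actual = validate_case_number(value)   # execute the procedure
            if actual != expected:
                failures.append((value, expected, actual))
        # Pass criterion: every input produces the expected output.
        print("PASS" if not failures else f"FAIL: {failures}")

    run_test()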

APPENDIX C-16 INTERFACE CONTROL DOCUMENT


1.0 SCOPE

This document provides an outline for use in the specification of requirements imposed on one or more systems, subsystems, Hardware Configuration Items (HCIs), Computer Software Configuration Items (CSCIs), manual operations, or other system components to achieve one or more interfaces among these entities. The Interface Control Document (ICD) created using this template will define one or more interfaces between two systems. Overall, an ICD can cover requirements for any number of interfaces to a system. Note: If there are multiple interfaces, they can be listed in a single ICD or in multiple ICDs as needed. For example, if <System1> has an interface with <System2> and <System3>, multiple ICDs can be written describing <System1> to <System2> and <System1> to <System3>, or a single ICD can include both. In this latter case, each section in this template would be repeated to describe each interface. Sample wording follows: This Interface Control Document (ICD) specifies the interface(s) between <System1> and <System2>, up through <SystemN>. Upon formal approval by the IRM Manager responsible for each participating system, this ICD shall be incorporated into the requirements baseline for each system.

1.1 System Identification

The following subsections shall contain a full identification of the systems participating in the interface, the contractors who are developing the systems, the interfacing entities, and the interfaces to which this document applies, including, as applicable, identification number(s), title(s), abbreviation(s), version number(s), release number(s), or any lower level version descriptors used. A separate paragraph should be included for each system that comprises the interface.

1.1.1 System 1

The information provided in this paragraph should be sufficiently detailed so as to definitively identify the systems participating in the interface, the contractors developing/maintaining the systems, and the IRM Manager.

1.1.2 System 2

The information provided should be similar to that provided in Section 1.1.1.

1.2 Document Overview

Sample wording follows: The purpose of the ICD is to specify interface requirements to be met by the participating systems. It describes the concept of operations for the interface, defines the message structure and protocols which govern the interchange of data, and identifies the communication paths along which the data is expected to flow. The document is organized as follows:
-- Section 1.0 Scope of the Document
-- Section 2.0 Concept of Operations
-- Section 3.0 Detailed Interface Requirements
-- Section 4.0 Qualification Methods
-- Section 5.0 Notes
-- Section 6.0 Appendices

1.3 Applicable Documents

This section shall list the number, title, revision, and date of all documents referenced or used in the preparation of this document. Document types included would be standards, Government documents, and other documents. This section shall also identify the source for all documents not available through the company.

2.0 DESCRIPTION

Provide a description of the interface between <System1> and <System2>.

2.1 System Overview

This section should illustrate the interface and the data exchanged between the interfacing systems. Further information on the functionality and architecture of the participating systems is given in the subsequent sections. In particular, each system should be briefly summarized with special emphasis on the functionality related to the interface. The hardware and software components of each system are also identified.

2.1.1 Interface Overview

Describe the functionality and architecture of the interfacing system as it relates to the proposed interface. Briefly summarize the system, placing special emphasis on functionality, including identification of key hardware and software components, as they relate to the interface. If more than one external system is to be part of the interface being defined, then include additional sections at this level for each additional external system.

2.2 Functional Allocation

Briefly describe what operations are performed on each system involved in the interface and how the end users will interact with the interface being defined. If the end user does not interact directly with the interface being defined, describe the events that trigger the movement of information using the interface being defined.

2.3 Data Transfer

Briefly describe how data will be moved among component systems of the interface being defined. Include descriptions and diagrams of how connectivity among the systems will be implemented and of the type of messaging or packaging of data that will be used to transfer data among the systems. If more than one interface between these two systems is defined by this ICD, each should be identified in this section. A separate subsection (2.4.1, 2.4.2, etc.) may be included for each interface. This ICD template will be primarily used for specification of interfaces that move information between two systems. Where an interface contains physical requirements that are imposed upon one or both of the interfacing systems, these physical requirements should be described in Section 2.4 and defined in Section 3.1.5, Physical Requirements. If an interface has no physical requirements, then so state.

2.4 Transactions

Briefly describe the types of transactions that will be utilized to move data among the component systems of the interface being defined. If multiple types of transactions will be utilized for different portions of the interface, a separate section may be included for each interface.

2.5 Security and Integrity

If the interface defined has security and integrity requirements, briefly describe how access security will be implemented and how data transmission security will be implemented for the interface being defined. Include a description of the transmission medium to be used and whether it is a public or a secure line. Include a brief description of how data will be protected during transmission and how data integrity will be guaranteed. Include a description of how the two systems can be certain that they are communicating with each other and not with another system masquerading as one of them. Describe how an individual on one system can be audited and held accountable for resulting actions on the other component of the interface. Normally, this section should be an overview of how security and integrity will be implemented, with Section 3.1.4 containing a detailed description of the specified requirements. An interface that is completely self-contained, such as movement of data between systems resident in the same computer room, may not have any security requirements. In this case, it should be so stated with an explanation of why the interface has no security and integrity requirements.

3.0 DETAILED INTERFACE REQUIREMENTS

This section specifies the requirements for one or more interfaces between two systems. This includes explicit definitions of the content and format of every message or file that may pass between the two systems and the conditions under which each message or file is to be sent. If an interface between the two systems is to be implemented incrementally, identify the implementation phase in which each message will be available. The structure in Paragraph 3.1 should be replicated for each interface between the two participating systems.

The template contained in Section 3.1, including subsections, provides a generic approach to interface requirements definition. The specific interface definition should include only subsections relevant to the interface being defined, and considerable liberty may be taken in the organization of Section 3.1 subsections. Where types of information not specified in Section 3.1 are required to clearly define the interface, additional subsections should be added. Other readily available documents (such as data dictionaries, standards for communication protocols, and standards for user interfaces) may be referenced in place of stating the information here. It may be useful to include copies of such documents as attachments to the ICD. Where possible, the use of tables and figures is encouraged to enhance the understandability of the interface definition. In defining interface requirements, clearly state which of the interfacing systems the requirement is being imposed upon. Note: For ease of updates and understanding of systems with multiple interfaces, this section may be included as an appendix to the ICD rather than as a section of the ICD.

3.1 Interface 1 Requirements

Briefly summarize the interface. Indicate what data protocol, communication methods, and processing priority are used by the interface. Data protocols used may include messages and custom ASCII files. Communication methods may include electronic networks or magnetic media. Indicate processing priority if information is required to be formatted and communicated as the data is created, as a batch of data is created by operator action, or in accordance with some periodic schedule. Requirements for specific messages or files to be delivered within a set interval of time should be included in Paragraphs 3.1.1 or 3.1.2.

3.1.1 Interface Processing Time Requirements

If information is required to be formatted and communicated as the data is created, as a batch of data is created by operator action, or in accordance with some periodic schedule, indicate processing priority. Requirements for specific messages or files to be delivered to the communication medium within a set interval of time should be included in Subsection 3.1.2. State the priority that the interfacing entities must assign to the interface. Priority may be stated as performance or response time requirements defining how quickly incoming traffic or data requests must be processed by the interfacing system to meet the requirements of the interface. Considerable latitude should be given in defining performance requirements to account for differences in hardware and transaction volumes at different installation sites of the interfacing systems. Response time requirements that are impacted by resources beyond the control of the interfacing systems (e.g., communication networks) are beyond the scope of an ICD.

3.1.2 Message (or File) Requirements

This subsection specifies the requirements for one or more interfaces between two systems. This includes explicit definitions of, and the conditions under which, each message is to be sent. The definition, characteristics, and attributes of the command are described.

3.1.2.1 Data Assembly Characteristics. Use the following paragraphs to define the content and format of every message, file, or other data element assembly (records, arrays, displays, reports, etc.) specified in Paragraph 3.1.2. In defining interfaces where data is moved among systems, define the packaging of data to be utilized. The origin, structure, and processing of such packets will be dependent on the techniques used to implement the interface. Define required characteristics of data element assemblies that the interfacing entities must provide, store, send, access, receive, etc.

When relevant to the packaging technique used, the following information should be provided:
-- Names/identifiers
-- Project-unique identifier
-- Non-technical (natural language) name
-- Technical name (e.g., record or data structure name in code or database)
-- Abbreviations or synonymous names
-- Structure of data element assembly
-- Visual and auditory characteristics of displays and other outputs (such as colors, layouts, fonts, icons and other display elements, beeps, lights) where relevant
-- Relationships among different types of data element assemblies used for the interface
-- Priority, timing, frequency, volume, sequencing, and other constraints, such as whether the assembly may be updated and whether business rules apply
-- Sources (setting/sending entities) and recipients (using/receiving entities)

3.1.2.2 Field/Element Definition. Define the characteristics of individual data elements that comprise the data packets defined in Section 3.1.2.1. Sections 3.1.2.1 and 3.1.2.2 may be combined into one section in which the data packets and their component data elements are defined in a single table. Data element definitions should include only features relevant to the interface being defined and may include such features as:
-- Names/identifiers
-- Project-unique identifier
-- Company standard data element name
-- Non-technical (natural-language) name
-- Technical name (e.g., variable or field name in code or database)
-- Abbreviation or synonymous names
-- Data type (alphanumeric, integer, etc.)
-- Size and format (such as length and punctuation of a character string)
-- Units of measurement (such as meters, dollars, nanoseconds)
-- Range or enumeration of possible values (such as 0-99)
-- Accuracy (how correct) and precision (number of significant digits)
-- Priority, timing, frequency, volume, sequencing, and other constraints, such as whether the data element may be updated and whether business rules apply
-- Security and privacy constraints
-- Sources (setting/sending entities) and recipients (using/receiving entities)
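One way to record these per-element features in a machine-checkable form is sketched below; the element shown, and every name and value in it, is a hypothetical example rather than part of this template.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class DataElement:
        """Features of one interface data element (illustrative subset)."""
        project_id: str                 # project-unique identifier
        name: str                       # non-technical name
        technical_name: str             # field name in code or database
        data_type: str                  # alphanumeric, integer, etc.
        size: int                       # length in characters or digits
        units: Optional[str] = None     # meters, dollars, nanoseconds, ...
        value_range: Optional[Tuple[int, int]] = None  # e.g., (0, 99)
        updatable: bool = False         # whether the element may be updated

    # A hypothetical element definition for a message priority field.
    priority_code = DataElement(
        project_id="IF1-017",
        name="Message priority",
        technical_name="msg_priority",
        data_type="integer",
        size=2,
        value_range=(0, 99),
    )
    print(priority_code)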

3.1.3 Communication Methods

Communication requirements include all aspects of the presentation, session, network, and data layers of the communication stack to which both systems participating in the interface must conform. The following subsections should be included in this discussion as appropriate to the interface being defined and may be supplemented by additional information as appropriate.

3.1.3.1 Interface Initiation. Define the sequence of events by which the connections between the participating systems will be initiated. Include the minimum and maximum number of connections that may be supported by the interface. Also include availability requirements for the interface (e.g., 24 hours a day, 7 days a week) that are dependent on the interfacing systems. Availability requirements beyond the control of the interfacing systems, such as network availability, are beyond the scope of an ICD.

3.1.3.2 Flow Control. Specify the sequence numbering, legality checks, error control, and recovery procedures that will be used to manage the interface. Include any acknowledgment (ACK/NAK) messages related to these procedures.
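The flow-control concepts named above (sequence numbering, legality checks, and ACK/NAK acknowledgments) can be illustrated with a small receiver-side sketch; the message format and the specific checks are assumptions made for the example.

    # Illustrative receiver-side flow control: sequence check plus ACK/NAK.
    expected_seq = 0

    def receive(message: dict) -> str:
        """Return 'ACK' or 'NAK' for one incoming message (illustrative)."""
        global expected_seq
        # Legality check: required fields must be present.
        if "seq" not in message or "payload" not in message:
            return "NAK"
        # Sequence check: messages must arrive in order.
        if message["seq"] != expected_seq:
            return "NAK"   # sender would retransmit from expected_seq
        expected_seq += 1
        return "ACK"

    print(receive({"seq": 0, "payload": "first"}))    # ACK
    print(receive({"seq": 2, "payload": "skipped"}))  # NAK - out of sequence
    print(receive({"seq": 1, "payload": "second"}))   # ACK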

3.1.4 Security Requirements

Specify the security features that are required to be implemented within the message or file structure or in the communications processes. Address the security of the communication methods used, including safety/security/privacy considerations such as encryption, user authentication, compartmentalization, and auditing. For interactive interfaces, security features may include identification, authentication, encryption, and auditing. Simple message broadcast or ASCII file transfer interfaces are likely to rely on features provided by communication services. Do not specify the requirements for features that are not provided by the systems to which the ICD applies. If the interface relies solely on physical security or on the security of the networks and firewalls through which the systems are connected, so state.

3.2 Interface 2 Requirements

When more than one interface between two systems is being defined in a single ICD, each should be defined separately, including all of the characteristics described in Section 3.1 for each. There is no limit on the number of unique interfaces that can be defined in a single Interface Control Document. In general, all interfaces defined should involve the same two systems.

4.0 QUALIFICATION METHODS

This section defines a set of qualification methods to be used to verify that the requirements for the interfaces defined in Section 3 have been met. Qualification methods include:
-- Demonstration - The operation of interfacing entities that relies on observable functional operation not requiring the use of instrumentation, special test equipment, or subsequent analysis.
-- Test - The operation of interfacing entities using instrumentation or special test equipment to collect data for later analysis.
-- Analysis - The processing of accumulated data obtained from other qualification methods. Examples are reduction, interpretation, or extrapolation of test results.
-- Inspection - The visual examination of interfacing entities, documentation, etc.
-- Special qualification methods - Any special qualification methods for the interfacing entities, such as special tools, techniques, procedures, facilities, and acceptance limits.

5.0 NOTES

This section contains any general information that aids in understanding the document (e.g., background information, glossary).

6.0 APPENDICES

Appendices may be used to provide information published separately for convenience in document maintenance (e.g., acronyms, abbreviations).

7.0 APPROVALS

A page shall be included in the Interface Control Document (ICD) for signature of those individuals who have agreed to the interface defined in the ICD. At a minimum, the signatures of the IRM Managers for the two systems that will be interfacing are required. Sample wording and suggestions for other sign-offs that may be included follow:

Approvals for the Interface Control Document for Interfaces between System 1 and System 2, Version Number.

System 1 IRM Manager ________________________________ Date_____________
Printed name and title ___________________________________________________

System 2 IRM Manager ________________________________ Date_____________
Printed name and title ___________________________________________________

System 1 Configuration Control Board ______________________ Date_____________
Printed name and title ___________________________________________________

System 2 Configuration Control Board ______________________ Date_____________
Printed name and title ___________________________________________________

8.0 RECORD OF CHANGES

This record is maintained throughout the life of the document and summarizes the changes between approved versions of this document. Each new version of the document submitted for approval receives a sequential version number. For instance, the first version of the document will be revision number 1.0. The old paragraph will designate the paragraph number and title where the information existed in the previous document, if applicable. The revision comments will contain an explanation of the changes made and any new paragraph number and title, if needed.

APPENDIX C-17 SECURITY RISK ASSESSMENT


Executive Summary

Provide information that would support the rationale for development of this system.

1.0 BACKGROUND

Provide a brief history which may have led to the development of this system.

2.0 PURPOSE

The purpose of the risk assessment is to assess the system's use of resources and controls (implemented and planned) to eliminate and/or manage vulnerabilities that are exploitable by threats to the Department. It will also:
-- identify risks associated with the system operational configuration;
-- identify the system's safeguards, threats, and vulnerabilities;
-- identify new threats and risks that might exist and, therefore, will need to be addressed after the current system is replaced; and
-- review the system relative to its conformance with the Company order, Telecommunications and Automated Information System Security Manual.

The risk assessment is a determination of vulnerabilities that, if exploited, could result in the following:
-- Unauthorized disclosure of sensitive information, including information falling within the purview of the Privacy Act of 1974;
-- Unauthorized modification of the system or its data;
-- Denial of system service or access to data to authorized users.

3.0 SCOPE

The scope of this risk assessment is limited to the system with its present configuration. The scope also includes those physical, environmental, personnel, telecommunications, and administrative security services provided. Included within the scope are:
-- Hardware
-- Software
-- Firmware
-- Data
-- Operating procedures

4.0 ASSUMPTIONS

The system design and operating procedures are required to respond and conform to the information system security requirements prescribed by the Order. The system at the (place the number of different sites) sites should be identically configured according to a common hardware/software design, which is controlled under centralized configuration management procedures.

5.0 DESCRIPTION OF SYSTEM

Provide a description of the system to include hardware, software, firmware, and any telecommunications involved in the operations of this system.

5.1 System Attributes

Provide a description of the system security attributes to include hardware, software, firmware, and any telecommunications involved in the operations of this system.

5.2 System Sensitivity

The system handles information that is considered sensitive but unclassified (SBU), which must be protected as required by P.L. 100-235, 8 January 1988 (Computer Security Act of 1987).

6.0 SYSTEMS SECURITY

System security includes technical security, personnel security, physical security, environmental security, administrative security, and information (data) security.

6.1 Administrative Security

Administrative policies and procedures provide employees with information about their responsibilities as users. These are the written guidelines that employees must follow as they use the system in the performance of their duties. Training helps employees learn how to use the system and reminds them of their responsibilities to safeguard the system.

6.2 Physical Security

Physical security at the facility will not be directly impacted by the system. Hardware, software, and data are contained within the current controlled areas of each facility. The system is in compliance with the current policies and procedures covering physical access to buildings, computer rooms/areas, and human resources in the facilities.

6.3 Technical Security - Hardware/Equipment Security

All equipment will be located within a locked, limited-access room.

6.4 Software Security

Systemic computer security for the system is provided by the software security controls within the application. These controls include:
-- Single user log-on control
-- Access control permissions (resource, user, system, and file permissions)
-- Protection features (user-changeable passwords, data encryption)
-- Data and file protection
-- System boot and format protection
-- Object reuse protection
-- Virus protection
-- Audit trails

6.5 Telecommunications Security

Telecommunications security requirements for the system do not currently apply for the following reason: the system is a standalone PC system that is not connected to any local, wide, or global area network or to any other system.

6.6 Personnel Security

All system analysts have undergone the usual background investigation. Only persons having duty assignments will be granted access to the computer programs, audit trail files, or any media associated with the system.

7.0 SYSTEM VULNERABILITY ASSESSMENT

Vulnerability assessment is a key component of a risk assessment, intended to identify system vulnerabilities and determine the likelihood of exploitation of those vulnerabilities. Once vulnerabilities are identified, a systematic approach is taken to reduce these risks to an acceptable level. The implementation of countermeasures or modification of the system design must be appraised and planned for as part of the acceptance of identified risks.

7.1 Technical Vulnerability

Provide a brief description of any technical vulnerabilities.

Countermeasure

Provide a brief description of the countermeasures for the vulnerabilities listed above.

7.2 Personnel Vulnerability

Provide a brief description of any personnel vulnerabilities.

Countermeasure

Provide a brief description of the countermeasures for the vulnerabilities listed above.

7.3 Telecommunication Vulnerability

Provide a brief description of any telecommunication vulnerabilities.

Countermeasure

Provide a brief description of the countermeasures for the vulnerabilities listed above.

7.4 Software Vulnerability

Provide a brief description of any software vulnerabilities.

Countermeasure

Provide a brief description of the countermeasures for the vulnerabilities listed above.

7.5 Environmental Vulnerability

Provide a brief description of any environmental vulnerabilities.

Countermeasure

Provide a brief description of the countermeasures for the vulnerabilities listed above.

7.6 Physical Vulnerability

Provide a brief description of any physical vulnerabilities.

Countermeasure

Provide a brief description of the countermeasures for the vulnerabilities listed above.

APPENDIX C-18 CONVERSION PLAN


The Conversion Plan describes the strategies involved in converting data from an existing system to another hardware or software environment. It is appropriate to reexamine the original system's functional requirements and the condition of the system before conversion to determine if the original requirements are still valid. An outline of the Conversion Plan is shown below.

This section provides a brief description of introductory material.

1.1 Purpose and Scope

This section describes the purpose and scope of the Conversion Plan. Reference the information system name and provide identifying information about the system undergoing conversion.

1.2 Points of Contact

This section identifies the System Proponent. Provide the name of the responsible organization and staff (and alternates, if appropriate) who serve as points of contact for the system conversion. Include telephone numbers of key staff and organizations.

1.3 Project References

This section provides a bibliography of key project references and deliverables that have been produced before this point in the project development. These documents may have been produced in a previous development life cycle that resulted in the initial version of the system undergoing conversion or may have been produced in the current conversion effort as appropriate.

1.4 Glossary

This section contains a glossary of all terms and abbreviations used in the plan. If it is several pages in length, it may be placed in an appendix.

2.0 CONVERSION OVERVIEW

This section provides an overview of the aspects of the conversion effort, which are discussed in the subsequent sections.

2.1 System Overview

This section provides an overview of the system undergoing conversion. The general nature or type of system should be described, including a brief overview of the processes the system is intended to support. If the system is a database or an information system, also include a general discussion of the type of data maintained, the operational sources, and the uses of those data.

2.2 System Conversion Overview

This section provides an overview of the planned conversion effort.

2.2.1 Conversion Description

This section provides a description of the system structure and major components. If only selected parts of the system will undergo conversion, identify which components will and will not be converted. If the conversion process will be organized into discrete phases, this section should identify which components will undergo conversion in each phase. Include hardware, software, and data as appropriate. Charts, diagrams, and graphics may be included as necessary. Develop and continuously update a milestone chart for the conversion process.

2.2.2 Type of Conversion

This section describes the type of conversion effort. The software part of the conversion effort usually falls into one of the following categories:
-- Intralanguage conversion is a conversion between different versions of the same computer language or different versions of a software system, such as a database management system (DBMS), operating system, or local area network (LAN) management system.
-- Interlanguage conversion is the conversion from one computer language to another or from one software system to another.
-- Same compiler conversions use the same language and compiler versions. Typically, these conversions are performed to make programs conform to standards, improve program performance, convert to a new system concept, etc. These conversions may require some program redesign and generally require some reprogramming.

In addition to the three categories of conversions described above, other types of conversions may be defined as necessary.

2.2.3 Conversion Strategy

This section describes the strategies for conversion of system hardware, software, and data.

2.2.3.1 Hardware Conversion Strategy. This section describes the strategy to be used for the conversion of system hardware, if any. Describe the new (target) hardware environment, if appropriate.

2.2.3.2 Software Conversion Strategy. This section describes the conversion strategy to be used for the software.

2.2.3.3 Data Conversion Strategy. This section describes the data conversion strategy, the data quality assurance, and the data conversion controls.

2.2.3.4 Data Conversion Approach. This section describes the specific data preparation requirements and the data that must be available for the system conversion. If data will be transported from the original system, provide a detailed description of the data handling, conversion, and loading procedures. If these data will be transported using machine-readable media, describe the characteristics of those media.

2.2.3.5 Interfaces. In the case of a hardware platform conversion--such as mainframe to client/server--the interfaces to other systems may need reengineering. This section describes the affected interfaces and the revisions required in each.

2.2.3.6 Data Quality Assurance and Control. This section describes the strategy to be used to ensure data quality before and after all data conversions. This section also describes the approach to data scrubbing and quality assessment of data before they are moved to the new or converted system. The strategy and approach may be described in a formal transition plan or a separate document if more appropriate.
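A data conversion approach of this kind commonly reduces to extract, transform, validate, and load steps. The sketch below shows that shape under assumed file names and field layouts; every identifier and format in it is invented for illustration.

    import csv

    # Create a tiny legacy extract so the sketch is self-contained (invented data).
    with open("legacy_extract.csv", "w", newline="") as f:
        f.write("CASE_NO,RCVD\n  A123 ,01152024\n")

    def convert_record(old: dict) -> dict:
        """Transform one legacy record into the target layout (illustrative)."""
        return {
            "case_id": old["CASE_NO"].strip(),
            # Legacy dates are assumed to be MMDDYYYY; the target wants ISO 8601.
            "received": f'{old["RCVD"][4:]}-{old["RCVD"][:2]}-{old["RCVD"][2:4]}',
        }

    def valid(record: dict) -> bool:
        """Data quality check applied before loading (illustrative)."""
        return bool(record["case_id"]) and len(record["received"]) == 10

    with open("legacy_extract.csv", newline="") as src, \
         open("converted.csv", "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=["case_id", "received"])
        writer.writeheader()
        for row in csv.DictReader(src):
            new_row = convert_record(row)
            if valid(new_row):          # reject rows that fail quality checks
                writer.writerow(new_row)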

2.2.4 Conversion Risk Factors

This section describes the major risk factors in the conversion effort and strategies for their control or reduction. Descriptions of the risk factors that could affect the conversion feasibility, the technical performance of the converted system, the conversion schedule, or costs should be included. In addition, a review should be made to ensure that the current backup and recovery procedures are adequate as well as operational.

2.3 Conversion Tasks

This section describes the major tasks associated with the conversion, including planning and preconversion tasks.

2.3.1 Conversion Planning

This section describes planning for the conversion effort. If planning and related issues have been addressed in other life-cycle documents, reference those documents in this section. The following list provides some examples of conversion planning issues that could be addressed:
-- Analysis of the workload projected for the target conversion environment to ensure that the projected environment can adequately handle that workload and meet performance and capacity requirements
-- Projection of the growth rate of the data processing needs in the target environment to ensure that the system can handle the projected near-term growth, and that it has the expansion capacity for future needs
-- Analysis to identify missing features in the new (target) hardware and software environment that were supported in the original hardware and software and used in the original system
-- Development of a strategy for recoding, reprogramming, or redesigning the components of the system that used hardware and software features not supported in the new (target) hardware and software environment but used in the original system

2.3.2 Preconversion Tasks

This section describes all tasks that are logically separate from the conversion effort itself but that must be completed before the initiation, development, or completion of the conversion effort. Examples of such preconversion tasks include:
-- Finalize decisions regarding the type of conversion to be pursued.
-- Install changes to the system hardware, such as a new computer or communications hardware, if necessary.
-- Implement changes to the computer operating system or operating system components, such as the installation of a new LAN operating system or a new windowing system.
-- Acquire and install other software for the new environment, such as a new DBMS or document imaging system.

2.3.3 Major Tasks and Procedures

This section addresses the major tasks associated with the conversion and the procedures associated with those tasks.

2.3.3.1 Major Task Name. Provide a name for each major task. Provide a brief description of each major task required for the conversion of the system, including the tasks required to perform the conversion, preparation of data, and testing of the system. If some of these tasks are described in other life-cycle documents, reference those documents in this section.

2.3.3.2 Procedures. This section should describe the procedural approach for each major task. Provide as much detail as necessary to describe these procedures.

2.4 Conversion Schedule

This section provides a schedule of activities to be accomplished during the conversion. Preconversion tasks and major tasks for all hardware, software, and data conversions described in Section 2.3, Conversion Tasks, should be described here and should show the beginning and end dates of each task. Charts may be used as appropriate.

2.5 Security

If appropriate for the system to be implemented, provide an overview of the system security features and the security during conversion.

2.5.1 System Security Features

The description of the system security features, if provided, should contain a brief overview and discussion of the security features that will be associated with the system when it is converted. Reference other life-cycle documents as appropriate. Describe the changes in the security features or performance of the system that would result from the conversion.

2.5.2 Security During Conversion

This section addresses security issues specifically related to the conversion effort.

3.0 CONVERSION SUPPORT

This section describes the support necessary to carry out the conversion. If there are additional support requirements not covered by the categories shown here, add other subsections as needed.

3.1 Hardware

This section lists support equipment, including all hardware to be used for the conversion. 3.2 Software

This section lists the software and databases required to support the conversion. It describes all software tools used to support the conversion effort, including the following types of software tools, if used:

- Automated conversion tools, such as software translation tools for translating among different computer languages or within software families (for example, between release versions of compilers and DBMSs)
- Automated data conversion tools for translating among the data storage formats associated with the different implementations (for example, different DBMSs or operating systems)
- Automated testing tools for quality assurance and validation of the data conversion
- Computer-aided software engineering (CASE) tools for reverse engineering of the existing application
- CASE tools for capturing system design information and presenting it graphically
- Documentation tools, such as cross-reference lists and data attribute generators
- Commercial off-the-shelf software and software written specifically for the conversion effort

3.3 Facilities

This section identifies the physical facilities and accommodations required during the conversion period. 3.4 Materials

This section lists support materials. 3.5 Personnel

This section describes personnel requirements and any known or proposed staffing, if appropriate. Also describe the training, if any, to be provided for the conversion staff.

3.5.1 Personnel Requirements and Staffing
This section describes the number of personnel, length of time needed, types of skills, and skill levels for the staff required during the conversion period.

3.5.2 Training of Conversion Staff
This section addresses the training, if any, necessary to prepare the staff for converting the system. It should provide a training curriculum, which lists the courses to be provided, a course sequence, and a proposed schedule. If appropriate, it should identify by job description which courses should be attended by particular types of staff. Training for users in the operation of the system is not presented in this section, but is normally included in the Training Plan.

APPENDIX C-19 SYSTEMS DESIGN DOCUMENT


1.0 INTRODUCTION

1.1 Purpose and Scope

This section provides a brief description of the Systems Design Document's purpose and scope. 1.2 Project Executive Summary

This section provides a description of the project from a management perspective and an overview of the framework within which the conceptual system design was prepared. If appropriate, include the information discussed in the subsequent sections in the summary.

1.2.1 System Overview
This section describes the system in narrative form using non-technical terms. It should provide a high-level system architecture diagram showing a subsystem breakout of the system, if applicable. The high-level system architecture or subsystem diagrams should, if applicable, show interfaces to external systems. Supply a high-level context diagram for the system and subsystems, if applicable. Refer to the requirements traceability matrix (RTM) in the Functional Requirements Document (FRD) to identify the allocation of the functional requirements into this design document.

1.2.2 Design Constraints
This section describes any constraints in the system design (reference any trade-off analyses conducted, such as resource use versus productivity, or conflicts with other systems) and includes any assumptions made by the project team in developing the system design.

1.2.3 Future Contingencies
This section describes any contingencies that might arise in the design of the system that may change the development direction. Possibilities include lack of interface agreements with outside agencies or unstable architectures at the time this document is produced. Address any possible workarounds or alternative plans.

1.3 Document Organization

This section describes the organization of the Systems Design Document.

1.4 Points of Contact

This section provides the organization code and title of the key points of contact (and alternates if appropriate) for the information system development effort. These points of contact should include the Project Manager, System Proponent, User Organization, Quality Assurance (QA) Manager, Security Manager, and Configuration Manager, as appropriate. 1.5 Project References

This section provides a bibliography of key project references and deliverables that have been produced before this point. For example, these references might include the Project Management Plan, Feasibility Study, CBA, Acquisition Plan, QA Plan, CM Plan, FRD, and ICD. 1.6 Glossary

Supply a glossary of all terms and abbreviations used in this document. If the glossary is several pages in length, it may be included as an appendix. 2.0 SYSTEM ARCHITECTURE

In this section, describe the system and/or subsystem(s) architecture for the project. References to external entities should be minimal, as they will be described in detail in Section 6, External Interfaces. 2.1 System Hardware Architecture

In this section, describe the overall system hardware and organization. Include a list of hardware components (with a brief description of each item) and diagrams showing the connectivity between the components. If appropriate, use subsections to address each subsystem. 2.2 System Software Architecture

In this section, describe the overall system software and organization. Include a list of software modules (this could include functions, subroutines, or classes), computer languages, and programming and computer-aided software engineering tools (with a brief description of the function of each item). Use structured organization diagrams/object-oriented diagrams that show the various segmentation levels down to the lowest level. All features on the diagrams should have reference numbers and names. Include a narrative that expands on and enhances the understanding of the functional breakdown. If appropriate, use subsections to address each module. Note: The diagrams should map to the FRD data flow diagrams, providing the physical process and data flow related to the FRD logical process and data flow.

2.3 Internal Communications Architecture

In this section, describe the overall communications within the system; for example, LANs, buses, etc. Include the communications architecture(s) being implemented, such as X.25, Token Ring, etc. Provide a diagram depicting the communications path(s) between the system and subsystem modules. If appropriate, use subsections to address each architecture being employed. Note: The diagrams should map to the FRD context diagrams. 3.0 FILE AND DATABASE DESIGN

Interact with the Database Administrator (DBA) when preparing this section. The section should reveal the final design of all database management system (DBMS) files and the non-DBMS files associated with the system under development. Additional information may be added as required for the particular project. Provide a comprehensive data dictionary showing the data element name, type, length, source, validation rules, maintenance (create, read, update, delete (CRUD) capability), data stores, outputs, aliases, and description; the data dictionary can be included as an appendix.
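As an illustration of the attributes listed above, a single data dictionary entry could be represented as in the minimal sketch below; the element name, values, and structure are hypothetical, not prescribed by this document:

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:
    """One row of the data dictionary, with the attributes named in this section."""
    name: str
    type: str
    length: int
    source: str
    validation: str          # validation rules
    crud: str                # create/read/update/delete capability
    data_stores: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    aliases: list = field(default_factory=list)
    description: str = ""

# Hypothetical example entry
customer_id = DataElement(
    name="CUSTOMER_ID", type="CHAR", length=9,
    source="Order entry screen", validation="9 numeric digits, mandatory",
    crud="CRUD", data_stores=["CUSTOMER"], outputs=["Customer Listing"],
    aliases=["CUST_NO"], description="Unique identifier assigned to each customer.",
)
print(customer_id)
```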

3.1 Database Management System Files

This section reveals the final design of the DBMS files and includes the following information, as appropriate (refer to the data dictionary):

- Refined logical model; provide normalized table layouts, entity relationship diagrams, and other logical design information
- A physical description of the DBMS schemas, sub-schemas, records, sets, tables, storage page sizes, etc.
- Access methods (such as indexed, via set, sequential, random access, sorted pointer array, etc.)
- Estimate of the DBMS file size or volume of data within the file, and data pages, including overhead resulting from access methods and free space
- Definition of the update frequency of the database tables, views, files, areas, records, sets, and data pages; estimate the number of transactions if the database is an online transaction-based system
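The refined logical model and access methods above can be made concrete with a small schema sketch. The following example uses SQLite purely for illustration; the tables and index are hypothetical and not part of any required design:

```python
import sqlite3

# Minimal sketch of a normalized table layout with an indexed access method.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customer (
        customer_id CHAR(9) PRIMARY KEY,     -- indexed (primary key) access
        name        VARCHAR(60) NOT NULL,
        region_code CHAR(2)     NOT NULL
    );
    CREATE TABLE orders (
        order_id    INTEGER PRIMARY KEY,
        customer_id CHAR(9) NOT NULL REFERENCES customer(customer_id),
        order_date  TEXT    NOT NULL
    );
    -- Secondary index supporting the expected query and update frequency
    CREATE INDEX idx_orders_customer ON orders(customer_id);
""")
conn.close()
```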

3.2 Non-Database Management System Files

In this section, provide the detailed description of all non-DBMS files and include a narrative description of the usage of each file--including whether the file is used for input, output, or both; whether this is a temporary file; an indication of which modules read and write the file; and file structures (refer to the data dictionary). As appropriate, the file structure information should:

- Identify record structures, record keys or indexes, and reference data elements within the records
- Define record length (fixed or maximum variable length) and blocking factors
- Define the file access method--for example, index sequential, virtual sequential, random access, etc.
- Estimate the file size or volume of data within the file, including overhead resulting from file access methods
- Define the update frequency of the file; if the file is part of an online transaction-based system, provide the estimated number of transactions per unit time, and the statistical mean, mode, and distribution of those transactions

4.0 HUMAN-MACHINE INTERFACE

This section provides the detailed design of the system and subsystem inputs and outputs relative to the user/operator. Any additional information may be added to this section and may be organized according to whatever structure best presents the operator input and output designs. Depending on the particular nature of the project, it may be appropriate to repeat these sections at both the subsystem and design module levels. Additional information may be added to the subsections if the suggested lists are inadequate to describe the project inputs and outputs. 4.1 Inputs

This section describes the input media used by the operator to provide information to the system; show a mapping to the high-level data flows described in Section 1.2.1, System Overview. Examples include data entry screens, optical character readers, bar code scanners, etc. If appropriate, the input record types, file structures, and database structures provided in Section 3, File and Database Design, may be referenced. Include data element definitions, or refer to the data dictionary. Provide the layout of all input data screens or graphical user interfaces (GUIs) (for example, windows). Provide a graphic representation of each interface. Define all data elements associated with each screen or GUI, or reference the data dictionary. This section should contain edit criteria for the data elements, including specific values, range of values, mandatory/optional status, alphanumeric values, and length. Also address data entry controls to prevent edit bypassing. Discuss the miscellaneous messages associated with operator inputs, including the following:

- Copies of form(s) if the input data are keyed or scanned for data entry from printed forms
- Description of any access restrictions or security considerations
- Each transaction name, code, and definition, if the system is a transaction-based processing system
- Incorporation of the Privacy Act statement into the screen flow, if the system is covered by the Privacy Act
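To make the notion of edit criteria concrete, the sketch below validates one hypothetical field against mandatory/optional, numeric, range, and length rules; the field name and its limits are invented for illustration:

```python
# Hedged sketch of edit criteria for one input screen field.
# The field name, range, and length limit are hypothetical examples.

def validate_quantity(raw: str) -> list[str]:
    """Apply edit criteria: mandatory, numeric, range 1-999, maximum length 3."""
    errors = []
    if not raw:
        errors.append("Quantity is mandatory.")
        return errors
    if len(raw) > 3:
        errors.append("Quantity exceeds maximum length of 3.")
    if not raw.isdigit():
        errors.append("Quantity must be numeric.")
    elif not 1 <= int(raw) <= 999:
        errors.append("Quantity must be in the range 1-999.")
    return errors

print(validate_quantity(""))      # mandatory check
print(validate_quantity("42"))    # passes all edits
print(validate_quantity("abcd"))  # length and numeric checks
```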

4.2 Outputs

This section describes the system output design relative to the user/operator; show a mapping to the high-level data flows described in Section 1.2.1. System outputs include reports, data display screens and GUIs, query results, etc. The output files are described in Section 3 and may be referenced in this section. The following should be provided, if appropriate:

- Identification of codes and names for reports and data display screens
- Description of report and screen contents (provide a graphic representation of each layout and define all data elements associated with the layout, or reference the data dictionary)
- Description of the purpose of the output, including identification of the primary users
- Report distribution requirements, if any (include frequency for periodic reports)
- Description of any access restrictions or security considerations

5.0 DETAILED DESIGN

This section provides the information needed for a system development team to actually build and integrate the hardware components, code and integrate the software modules, and interconnect the hardware and software segments into a functional product. Additionally, this section addresses the detailed procedures for combining separate COTS packages into a single system. Every detailed requirement should map back to the FRD; present the mapping in an update to the RTM, and include the updated RTM as an appendix to this design document.

5.1 Hardware Detailed Design

A hardware component is the lowest level of design granularity in the system. Depending on the design requirements, there may be one or more components per system. This section should provide enough detailed information about individual component requirements to correctly build and/or procure all the hardware for the system (or integrate COTS items). If there are many components or if the component documentation is extensive, place it in an appendix or reference a separate document. Add additional diagrams and information, if necessary, to describe each component and its functions adequately. Industry-standard component specification practices should be followed. For COTS procurements, if a specific vendor has been identified, include appropriate item names. Include the following information in the detailed component designs (as applicable):

- Power input requirements for each component
- Signal impedances and logic states
- Connector specifications (serial/parallel, 11-pin, male/female, etc.)
- Memory and/or storage space requirements
- Processor requirements (speed and functionality)
- Graphical representation depicting the number of hardware items (for example, monitors, printers, servers, I/O devices) and the relative positioning of the components to each other
- Cable type(s) and length(s)
- User interfaces (buttons, toggle switches, etc.)
- Hard drive/floppy drive/CD-ROM requirements
- Monitor resolution

5.2 Software Detailed Design

A software module is the lowest level of design granularity in the system. Depending on the software development approach, there may be one or more modules per system. This section should provide enough detailed information about logic and data necessary to completely write source code for all modules in the system (and/or integrate COTS software programs). If there are many modules or if the module documentation is extensive, place it in an appendix or reference a separate document. Add additional diagrams and information, if necessary, to describe each module, its functionality, and its hierarchy. Industry-standard module specification practices should be followed. Include the following information in the detailed module designs:

- A narrative description of each module, its function(s), the conditions under which it is used (called or scheduled for execution), its overall processing, logic, interfaces to other modules, interfaces to external systems, security requirements, etc.; explain any algorithms used by the module in detail
- For COTS packages, specification of any call routines or bridging programs used to integrate the package with the system and/or other COTS packages (for example, Dynamic Link Libraries)
- Data elements, record structures, and file structures associated with module input and output
- Graphical representation of the module processing, logic, flow of control, and algorithms, using an accepted diagramming approach (for example, structure charts, action diagrams, flowcharts, etc.)
- Data entry and data output graphics; define or reference associated data elements; if the project is large and complex or if the detailed module designs will be incorporated into a separate document, it may be appropriate to repeat the screen information in this section
- Report layout
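One informal way to keep such a narrative module description with the module itself is a structured header comment or docstring, as in the hypothetical sketch below; the module name, algorithm, and fee cap are assumptions for illustration only:

```python
def compute_late_fee(days_overdue: int, daily_rate: float = 0.50) -> float:
    """Module: LATE-FEE-01 (hypothetical example).

    Function:    Computes the late fee for an overdue item.
    Called by:   billing batch run, inquiry screen.
    Interfaces:  none external; input and output are plain values.
    Security:    no sensitive data handled.
    Algorithm:   simple linear fee, capped at an assumed $25.00 maximum.
    """
    fee = max(days_overdue, 0) * daily_rate
    return min(fee, 25.00)

assert compute_late_fee(10) == 5.00
assert compute_late_fee(100) == 25.00   # cap applies
```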

5.3 Internal Communications Detailed Design

If the system includes more than one component, there may be a requirement for internal communications to exchange information, provide commands, or support input/output functions. This section should provide enough detailed information about the communication requirements to correctly build and/or procure the communications components for the system. Include the following information in the detailed designs (as appropriate):

- The number of servers and clients to be included on each area network
- Specifications for bus timing requirements and bus control
- Format(s) for data being exchanged between components
- Graphical representation of the connectivity between components, showing the direction of data flow (if applicable) and approximate distances between components; the information should provide enough detail to support the procurement of hardware to complete the installation at a given location
- LAN topology

6.0 EXTERNAL INTERFACES

External systems are any systems that are not within the scope of the system under development, regardless of whether the other systems are managed by the company or another agency. In this section, describe the electronic interface(s) between this system and each of the other systems and/or subsystem(s), emphasizing the point of view of the system being developed.

6.1 Interface Architecture

In this section, describe the interface(s) between the system being developed and other systems; for example, batch transfers, queries, etc. Include the interface architecture(s) being implemented, such as wide area networks, gateways, etc. Provide a diagram depicting the communications path(s) between this system and each of the other systems, which should map to the context diagrams in Section 1.2.1. If appropriate, use subsections to address each interface being implemented.

6.2 Interface Detailed Design

For each system that provides information exchange with the system under development, there is a requirement for rules governing the interface. This section should provide enough detailed information about the interface requirements to correctly format, transmit, and/or receive data across the interface. Include the following information in the detailed design for each interface (as appropriate):

- The data format requirements; if there is a need to reformat data before they are transmitted or after incoming data are received, tools and/or methods for the reformatting process should be defined
- Specifications for handshaking protocols between the two systems; include the content and format of the information to be included in the handshake messages, the timing for exchanging these messages, and the steps to be taken when errors are identified
- Format(s) for error reports exchanged between the systems, including the disposition of error reports--for example, retained in a file, sent to a printer, flag/alarm sent to the operator, etc.
- Graphical representation of the connectivity between systems, showing the direction of data flow
- Query and response descriptions

If a formal ICD exists for a given interface, the information can be copied, or the ICD can be referenced in this section.
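As a hedged illustration of data format and handshaking rules, the sketch below packs and validates a fixed-format message header; the system code, field layout, and error handling are hypothetical assumptions, not a mandated interface standard:

```python
import struct

# Hypothetical fixed-format interface record: system code, version,
# message type, and payload length, in network byte order (assumed layout).
HEADER = struct.Struct(">4sBHI")

def pack_message(payload: bytes, msg_type: int) -> bytes:
    """Format an outgoing message per the (assumed) interface rules."""
    return HEADER.pack(b"SYSA", 1, msg_type, len(payload)) + payload

def unpack_message(raw: bytes) -> tuple[int, bytes]:
    """Validate and parse an incoming message; raise on format errors."""
    system, version, msg_type, length = HEADER.unpack(raw[:HEADER.size])
    if system != b"SYSA" or version != 1:
        raise ValueError("handshake failure: unknown system or version")
    payload = raw[HEADER.size:]
    if len(payload) != length:
        raise ValueError("length mismatch: possible truncated transmission")
    return msg_type, payload

msg = pack_message(b"ACCOUNT=12345", msg_type=7)
print(unpack_message(msg))
```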

7.0 SYSTEM INTEGRITY CONTROLS

Sensitive ADP systems use information for which the loss, misuse, modification of, or unauthorized access to that information could affect the national interest, the conduct of Federal programs, or the privacy to which individuals are entitled under Section 552a of Title 5, U.S. Code, but that has not been specifically authorized under criteria established by an Executive Order or an act of Congress to be kept classified in the interest of national defense or foreign policy. Developers of sensitive ADP systems are required to develop specifications for the following minimum levels of control:

- Internal security to restrict access of critical data items to only those access types required by users
- Audit procedures to meet control, reporting, and retention period requirements for operational and management reports
- Application audit trails to dynamically audit retrieval access to designated critical data
- Standard tables to be used or requested for validating data fields
- Verification processes for additions, deletions, or updates of critical data
- Ability to identify all audit information by user identification, network terminal identification, date, time, and data accessed or changed
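A minimal sketch of an application audit trail record carrying the identification fields listed above might look like the following; the field names and the JSON encoding are illustrative assumptions:

```python
import getpass
import json
from datetime import datetime, timezone

# Sketch of an audit trail record with the minimum control fields listed above.
def audit_record(terminal_id: str, action: str, data_item: str) -> str:
    record = {
        "user_id": getpass.getuser(),
        "terminal_id": terminal_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,              # e.g., retrieval of critical data
        "data_item": data_item,        # data accessed or changed
    }
    return json.dumps(record)

# In a real system this line would be appended to a protected audit log.
print(audit_record("TTY07", "READ", "CUSTOMER.SSN"))
```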

APPENDIX C-20 IMPLEMENTATION PLAN


The Implementation Plan describes how the information system will be deployed, installed and transitioned into an operational system. The plan contains an overview of the system, a brief description of the major tasks involved in the implementation, the overall resources needed to support the implementation effort (such as hardware, software, facilities, materials, and personnel), and any site-specific implementation requirements. The plan is developed during the Design Phase and is updated during the Development Phase; the final version is provided in the Integration and Test Phase and is used for guidance during the Implementation Phase. The outline shows the structure of the Implementation Plan. 1.0 INTRODUCTION

This section provides an overview of the information system and includes any additional information that may be appropriate. 1.1 Purpose

This section describes the purpose of the Implementation Plan. Reference the system name and identify information about the system to be implemented. 1.2 System Overview

This section provides a brief overview of the system to be implemented, including a description of the system and its organization.

1.2.1 System Description
This section provides an overview of the processes the system is intended to support. If the system is a database or an information system, provide a general description of the type of data maintained and the operational sources and uses of those data.

1.2.2 System Organization
This section provides a brief description of the system structure and the major system components essential to the implementation of the system. It should describe both hardware and software, as appropriate. Charts, diagrams, and graphics may be included as necessary.

1.3 Project References

This section provides a bibliography of key project references and deliverables that have been produced before this point in the project development. For example, these references might include the Project Plan, Acquisition Plan, FRD, Test Plan, Conversion Plan, and System Design Document. 1.4 Glossary

Provide a glossary of all terms and abbreviations used in this document. If it is several pages in length, it may be placed in an appendix.

2.0 MANAGEMENT OVERVIEW

The subsequent sections provide a brief description of the implementation and of the major tasks involved.

2.1 Description of Implementation

This section provides a brief description of the system and the planned deployment, installation, and implementation approach. 2.2 Points of Contact

In this section, identify the System Proponent, the name of the responsible organization(s), and titles and telephone numbers of the staff who serve as points of contact for the system implementation. These points of contact could include the Project Manager, Program Manager, Security Manager, Database Administrator, Configuration Management Manager, or other managers with responsibilities relating to the system implementation. The site implementation representative for each field installation or implementation site should also be included, if appropriate. List all managers and staff with whom the implementation must be coordinated. 2.3 Major Tasks

This section provides a brief description of each major task required for the implementation of the system. Add as many subsections as necessary to this section to describe all the major tasks adequately. The tasks described in this section are not site-specific, but generic or overall project tasks that are required to install hardware and software, prepare data, and verify the system. Include the following information for the description of each major task, if appropriate:

- What the task will accomplish
- Resources required to accomplish the task
- Key person(s) responsible for the task
- Criteria for successful completion of the task

Examples of major tasks are the following:

- Providing overall planning and coordination for the implementation
- Providing appropriate training for personnel
- Ensuring that all manuals applicable to the implementation effort are available when needed
- Providing all needed technical assistance
- Scheduling any special computer processing required for the implementation
- Performing site surveys before implementation
- Ensuring that all prerequisites have been fulfilled before the implementation date
- Providing personnel for the implementation team
- Acquiring special hardware or software
- Performing data conversion before loading data into the system
- Preparing site facilities for implementation

2.4 Implementation Schedule

In this section, provide a schedule of activities to be accomplished during implementation. Show the required tasks (described in Section 2.3, Major Tasks) in chronological order, with the beginning and end dates of each task. 2.5 Security

If appropriate for the system to be implemented, include an overview of the system security features and requirements during the implementation. If the system is covered by the Privacy Act, address Privacy Act concerns.

2.5.1 System Security Features
In this section, provide an overview and discussion of the security features that will be associated with the system when it is implemented. It should include the primary security features associated with the system hardware and software. Security and protection of sensitive bureau data and information should be discussed, if applicable. Reference the sections of previous deliverables that address system security issues, if appropriate.

2.5.2 Security During Implementation
This section addresses security issues specifically related to the implementation effort, if any. For example, if LAN servers or workstations will be installed at a site with sensitive data preloaded on nonremovable hard disk drives, address how security would be provided for the data on these devices during shipping, transport, and installation, because theft of the devices could compromise the sensitive data.

3.0 IMPLEMENTATION SUPPORT

This section describes the support software, materials, equipment, and facilities required for the implementation, as well as the personnel requirements and training necessary for the implementation. The information provided in this section is not site-specific. If there are additional support requirements not covered by the subsequent sections, others may be added as needed.

3.1 Hardware, Software, Facilities, and Materials

In this section, list support software, materials, equipment, and facilities required for the implementation, if any. 3.1.1 Hardware This section provides a list of support equipment and includes all hardware used for testing the implementation. For example, if a client/server database is implemented on a LAN, a network monitor or "sniffer" might be used, along with test programs, to determine the performance of the database and LAN at high-utilization rates. If the equipment is site-specific, list it in Section 4, Implementation Requirements by Site. 3.1.2 Software This section provides a list of software and databases required to support the implementation. Identify the software by name, code, or acronym. Identify which software is commercial off-the-shelf and which is company-specific. Identify any software used to facilitate the implementation process. If the software is site-specific, list it in Section 4. 3.1.3 Facilities In this section, identify the physical facilities and accommodations required during implementation. Examples include physical workspace for assembling and testing hardware components, desk space for software installers, and classroom space for training the implementation staff. Specify the hours per day needed, number of days, and anticipated dates. If the facilities needed are site-specific, provide this information in Section 4, Implementation Requirements by Site. 3.1.4 Material This section provides a list of required support materials, such as magnetic tapes and disk packs. 3.2 Personnel

This section describes personnel requirements and any known or proposed staffing requirements, if appropriate. Also describe the training, if any, to be provided for the implementation staff. 3.2.1 Personnel Requirements and Staffing In this section, describe the number of personnel, length of time needed, types of skills, and skill levels for the staff required during the implementation period. If particular staff members have been selected or proposed for the implementation, identify them and their roles in the implementation. 3.2.2 Training of Implementation Staff

This section addresses the training, if any, necessary to prepare staff for implementing and maintaining the system; it does not address user training, which is the subject of the Training Plan. Describe the type and amount of training required for each of the following areas, if appropriate, for the system:

- System hardware/software installation
- System support
- System maintenance and modification

Present a training curriculum listing the courses that will be provided, a course sequence, and a proposed schedule. If appropriate, identify which courses particular types of staff should attend by job position description. If training will be provided by one or more commercial vendors, identify them, the course name(s), and a brief description of the course content. If the training will be provided by company staff, provide the course name(s) and an outline of the content of each course. Identify the resources, support materials, and proposed instructors required to teach the course(s). 3.3 Performance Monitoring

This section describes the performance monitoring tools and techniques and how they will be used to help decide whether the implementation is successful.

3.4 CM Interface

This section describes the interactions required with the CM representative on CM-related issues, such as when software listings will be distributed and how to confirm that libraries have been moved from the development to the production environment.

4.0 IMPLEMENTATION REQUIREMENTS BY SITE

This section describes specific implementation requirements and procedures. If these requirements and procedures differ by site, repeat these subsections for each site; if they are the same for each site, or if there is only one implementation site, use these subsections only once. The "X" in the subsection number should be replaced with a sequenced number beginning with 1. Each subsection with the same value of "X" is associated with the same implementation site. If a complete set of subsections will be associated with each implementation site, then "X" is assigned a new value for each site. 4.1 Site Name or Identification for Site X

This section provides the name of the specific site or sites to be discussed in the subsequent sections.

4.1.1 Site Requirements
This section defines the requirements that must be met for the orderly implementation of the system and describes the hardware, software, and site-specific facilities requirements for this area. Any site requirements that do not fall into the following categories and were not described in Section 3, Implementation Support, may be described in this section, or other subsections may be added following Facilities Requirements below:

- Hardware Requirements--Describe the site-specific hardware requirements necessary to support the implementation (such as LAN hardware for a client/server database designed to run on a LAN).
- Software Requirements--Describe any software required to implement the system (such as software specifically designed for automating the installation process).
- Data Requirements--Describe specific data preparation requirements and data that must be available for the system implementation. An example would be the assignment of individual IDs associated with data preparation.
- Facilities Requirements--Describe the site-specific physical facilities and accommodations required during the system implementation period. Some examples of this type of information are provided in Section 3.

4.1.2 Site Implementation Details
This section addresses the specifics of the implementation for this site. Include a description of the implementation team, schedule, procedures, and database and data updates. This section should also provide information on the following:

- Team--If an implementation team is required, describe its composition and the tasks to be performed at this site by each team member.
- Schedule--Provide a schedule of activities, including planning and preparation, to be accomplished during implementation at this site. Describe the required tasks in chronological order with the beginning and end dates of each task. If appropriate, charts and graphics may be used to present the schedule.
- Procedures--Provide a sequence of detailed procedures required to accomplish the specific hardware and software implementation at this site. If necessary, other documents may be referenced. If appropriate, include a step-by-step sequence of the detailed procedures. A checklist of the installation events may be provided to record the results of the process. If the site operations startup is an important factor in the implementation, then address startup procedures in some detail. If the system will replace an already operating system, then address the startup and cutover processes in detail. If there is a period of parallel operations with an existing system, address the startup procedures that include technical and operations support during the parallel cycle and the consistency of data within the databases of the two systems (a consistency-check sketch follows this list).
- Database--Describe the database environment where the software system and the database(s), if any, will be installed. Include a description of the different types of database and library environments (such as production, test, and training databases). Include the host computer database operating procedures, database file and library naming conventions, database system generation parameters, and any other information needed to effectively establish the system database environment. Include database administration procedures for testing changes, if any, to the database management system before the system implementation.
- Data Update--If data update procedures are described in another document, such as the operations manual or conversion plan, that document may be referenced here. Examples of information to be included are control inputs, operating instructions, database data sources and inputs, output reports, and restart and recovery procedures.
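For the parallel-operations case, a simple data consistency check between the two systems' databases can be sketched as below; SQLite, the file names, and the table list are hypothetical stand-ins for the actual environment:

```python
import sqlite3

# Illustrative consistency check for a parallel-operations period: compare
# record counts for shared tables in the old and new databases.
def compare_counts(old_db: str, new_db: str, tables: list[str]) -> bool:
    consistent = True
    with sqlite3.connect(old_db) as old, sqlite3.connect(new_db) as new:
        for table in tables:
            query = f"SELECT COUNT(*) FROM {table}"   # table names are trusted here
            old_n = old.execute(query).fetchone()[0]
            new_n = new.execute(query).fetchone()[0]
            if old_n != new_n:
                print(f"MISMATCH {table}: old={old_n} new={new_n}")
                consistent = False
    return consistent

# Example call with hypothetical file and table names:
# compare_counts("legacy.db", "target.db", ["customer", "orders"])
```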

4.1.3 Back-Off Plan

This section specifies when to make the go/no-go decision and the factors to be included in making that decision. The plan then provides a detailed list of the steps and actions required to restore the site to its original, preconversion condition.

4.1.4 Post-implementation Verification
This section describes the process for reviewing the implementation and deciding whether it was successful. It describes how an action item list will be created to rectify any noted discrepancies. It also references the Back-Off Plan for instructions on how to back out the installation if, as a result of the post-implementation verification, a no-go decision is made.

APPENDIX C-21 MAINTENANCE MANUAL


The Maintenance Manual provides maintenance personnel with the information necessary to maintain the system effectively. The manual provides the definition of the software support environment, the roles and responsibilities of maintenance personnel, and the regular activities essential to the support and maintenance of program modules, job streams, and database structures. In addition to the items identified for inclusion in the Maintenance Manual, additional information may be provided to facilitate the maintenance and modification of the system. Appendices to document various maintenance procedures, standards, or other essential information may be added to this document as needed. 1.0 INTRODUCTION

This section provides general reference information regarding the Maintenance Manual. Whenever appropriate, additional information may be added to this section. 1.1 Purpose

In this section, describe the purpose of the manual and reference the system name and identifying information about the system and its programs. 1.2 Points of Contact

This section identifies the organization(s) responsible for system development, maintenance, and use. This section also identifies points of contact (and alternate if appropriate) for the system within each organization. 1.3 Project Reference

This section provides a bibliography of key project references and deliverables produced during the information system development life cycle. If appropriate, reference the FRD, Systems Design Document, Test Plan, Test Analysis Report(s), Operations Manual, User Manual, load module description, source code description, and job control language (JCL) description. 1.4 Glossary

Provide a glossary with definitions of all terms, abbreviations, and acronyms used in the manual. If the glossary is several pages in length, place it in an appendix.

2.0 SYSTEM DESCRIPTION

The subsequent sections provide an overview of the system to be maintained.

2.1 System Application

This section provides a brief description of the purpose of the system, the functions it performs, and the business processes that the system is intended to support. If the system is a database or an information system, include a general description of the type of data maintained, and the operational sources and uses of those data. 2.2 System Organization

In this section, provide a brief description of the system structure, major system components, and the functions of each major system component. Include charts, diagrams, and graphics as necessary. 2.3 Security and the Privacy Act

This section provides an overview of the system's security controls and the need for security and protection of sensitive data. For example, include information regarding procedures to log on/off of the system, provisions for the use of special passwords, access verification, and access statuses as appropriate. If the system handles sensitive or Privacy Act information, include information regarding labeling system outputs as sensitive, or Privacy Act-related. In addition, if the system is covered by the Privacy Act, include a warning of the Privacy Act's civil and criminal penalties concerning the unauthorized use and disclosure of system data. 2.4 System Requirements Cross-Reference

This section contains an exhibit that cross-references the detailed system requirements with the system design document and test plan. This exhibit, also referred to as a traceability matrix in other documents, assists maintenance personnel by tracing how the user requirements developed in the FRD are met in other products of the life cycle. Because this information is provided in the system design document, it may be appropriate to repeat or enhance that information in this section.

3.0 SUPPORT ENVIRONMENT

This section describes the operating and support environment for the system and program(s). Include a discussion of the equipment, support software, database characteristics, and personnel requirements for supporting maintenance of the system and its programs. 3.1 Equipment Environment

This section describes the equipment support environment, including the development, maintenance, and target host computer environments. Describe telecommunications and facility requirements, if any. 3.1.1 Computer Hardware

This section discusses the computer configuration on which the software is hosted and its general characteristics. The section should also identify the specific computer equipment required to support software maintenance if that equipment differs from the host computer. For example, if software development and maintenance are performed on a platform that differs from the target host environment, describe both environments. Describe any miscellaneous computer equipment required in this section, such as hardware probe boards that perform hardware-based monitoring and debugging of software. Include any telecommunications equipment. 3.1.2 Facilities This section describes the special facility requirements, if any, for system and program maintenance and includes any telecommunications facilities required to test the software. 3.2 Support Software

This section lists all support software--such as operating systems, transaction processing systems, and database management systems (DBMSs)--as well as software used for the maintenance and testing of the system. Include the appropriate version or release numbers, along with their documentation references, with the support software lists. 3.3 Database Characteristics

This section contains an overview of the nature and content of each database used by the system. Reference other documents for a detailed description, including the system design document as appropriate. 3.4 Personnel

This section describes the special skills required for the maintenance personnel. These skills may include knowledge of specific versions of operating systems, transaction processing systems, high-level languages, screen and display generators, DBMSs, testing tools, and computer-aided system engineering tools. 4.0 SYSTEM MAINTENANCE PROCEDURES

This section contains information on the procedures necessary for programmers to maintain the software. 4.1 Conventions

This section describes all rules, schemes, and conventions used within the system. Examples of this type of information include the following:

- System-wide labeling, tagging, and naming conventions for programs, units, modules, procedures, routines, records, files, and data element fields
- Procedures and standards for charts and listings
- Standards for including comments in programs to annotate maintenance modifications and changes
- Abbreviations and symbols used in charts, listings, and comments sections of programs

If the conventions follow standard programming practices and a standards document, that document may be referenced, provided that it is available to the maintenance team.
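As a hypothetical example of such a commenting convention for annotating maintenance modifications, a change might be marked in the code as follows; the change-request numbering scheme and the function itself are invented for illustration:

```python
# Sketch of a maintenance-annotation convention of the kind this section asks
# for; the CR-NNNN change-request scheme is a hypothetical example.

def apply_discount(total: float, rate: float) -> float:
    # CR-0042 2024-03-01 jdoe: cap the discount rate at 15% per revised
    # business rule; previously the rate was applied unconditionally.
    rate = min(rate, 0.15)
    return total * (1 - rate)
```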

4.2 Verification Procedures

This section includes the requirements and procedures necessary to check the performance of the system following modification or maintenance of the system's software components. Address the verification of system-wide correctness and performance. Present, in detail, system-wide testing procedures. Reference the original development test plan if the testing replicates development testing. Describe the types and source(s) of test data in detail.

4.3 Error Conditions

This section describes all system-wide error conditions that may be encountered within the system, including an explanation of the source(s) of each error and recommended methods to correct each error. 4.4 Maintenance Software

This section references any special maintenance software and its supporting documentation used to maintain the system. 4.5 Maintenance Procedure

This section describes step-by-step, system-wide maintenance procedures, such as procedures for setting up and sequencing inputs for testing. In addition, present standards for documenting modifications to the system. 5.0 SOFTWARE UNIT MAINTENANCE PROCEDURES

For each software unit within the system, provide the information requested below. If the information is identical for each of the software units, it is not necessary to repeat it for each software unit. If the information in any of the areas discussed below is identical to information provided in Section 4, System Maintenance Procedures, then reference that area. This section should contain the following:

- Unit Name and Identification--Provide the name or identification of each software unit that is a component of the system. Repeat the following information for each unit name.
- Description--Provide a brief narrative description of the software unit. Reference other sections within the life cycle that contain more detailed descriptive material.
- Requirements Cross-Reference--Include the detailed user requirements satisfied by this particular software unit. It may be a matrix that traces the system requirements from the FRD through the system design document and test plan for the specific software units. Other life-cycle documentation may be referenced as appropriate.
- Conventions--Describe all rules, schemes, and conventions used within the program. If this information is program-specific, provide that information here. If the conventions are all system-wide, discuss them in Section 4. If the conventions follow standard programming practices and a standards document, that document may be referenced here.
- Verification Procedures--Include the requirements and procedures necessary to check the performance of the program following modification or maintenance; address the verification of program correctness, performance, and detailed testing procedures. If the testing replicates development testing, it may be appropriate to reference the original development test plan.
- Error Conditions--Describe all program-specific error conditions that may be encountered; provide an explanation of the source(s) of each error, and recommend methods to correct each error. If these error conditions are the same as the system-wide error conditions described in Section 4.3, Error Conditions, that section may be referenced here.
- Listings--Provide a reference to the location of the program listings.

APPENDIX C-22 OPERATIONS MANUAL


The Operations Manual provides computer control personnel and computer operators with a detailed operational description of the information system and its associated environments, such as machine room operations and procedures.

1.0 GENERAL

1.1 Introduction and Purpose

Describe the purpose of the Operations Manual, the name of the system to which it applies, and the type of computer operation.

1.2 Project References

List, at a minimum, the User Manual, Maintenance Manual, and other pertinent documentation. 1.3 Glossary

List any definitions or terms unique to this document or computer operation and subject to interpretation by the user of this document.

2.0 SYSTEM OVERVIEW

2.1 System Application

Provide a brief description of the system, including its purpose and uses. 2.2 System Organization

Describe the operation of the system by the use of a chart depicting operations and interrelationships. 2.3 Software Inventory

List the software units, including name, identification, and security considerations. Identify the software necessary to resume operation of the system in case of emergency.

2.4 Information Inventory

Provide information about data files and databases that are produced or referenced by the system. 2.4.1 Resource Inventory

List all permanent files and databases that are referenced, created, or updated by the system. 2.4.2 Report Inventory List all reports produced by the system. Include report name and the software that generates it. 2.5 Processing Overview

Provide information that is applicable to the processing of the system. Include system restrictions, waivers of operational standards, and interfaces with other systems. 2.6 Communications Overview

Describe the communications functions and process of the system. 2.7 Security

Describe the security considerations associated with the system. 2.8 Privacy Act Warning

Include a Privacy Act warning if the system is covered by the Privacy Act.

3.0 DESCRIPTION OF RUNS

3.1 Run Inventory

If any portion of the system is run in batch mode, list the runs, showing the software components, the job control batch file names, run jobs, and the purpose of each run. For online transaction-based processing, provide an inventory of all software components that must be loaded for the software system to be operational.

3.2 Run Sequence

Provide a schedule of acceptable phasing of the software system into a logical series of operations. If the system is a batch system, provide the execution schedule, which shows, at a minimum, the following:

- Job dependencies
- Day of week/month/date for execution
- Time of day or night (if significant)
- Expected run time in computer units
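Where the run sequence is driven by job dependencies, an acceptable execution order can be derived mechanically. The sketch below uses Python's standard graphlib module with hypothetical job names:

```python
from graphlib import TopologicalSorter

# Hypothetical batch run sequence: derive an acceptable execution order from
# the job dependencies listed in the run inventory. Job names are examples.
dependencies = {
    "post_transactions": {"extract_daily"},
    "update_master":     {"post_transactions"},
    "print_reports":     {"update_master"},
    "backup_master":     {"update_master"},
}

order = list(TopologicalSorter(dependencies).static_order())
print("Run sequence:", " -> ".join(order))
# e.g., extract_daily -> post_transactions -> update_master -> ...
```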

3.3 Diagnostic Procedures

Describe the diagnostic or error-detection features of the system, the purpose of the diagnostic features, and the setup and execution procedures for any software diagnostic procedures.

3.4 Error Messages

List all error codes and messages with operator responses, as appropriate. 3.5 Run Descriptions

Provide detailed information needed to execute system runs. For each run, include the information discussed in the subsequent sections.

3.5.1 Control Inputs
Describe all operator job control inputs--for example, starting the run, selecting run execution options, activating an online or transaction-based system, and running the system through remote devices, if appropriate.

3.5.2 Primary User Contact
Identify the user contact (and alternate if appropriate) for the system, including the person's name, organization, address, and telephone number.

3.5.3 Data Inputs
Describe the following if data input is required at production time:

- Who is responsible for the source data
- Format of the data
- Data validation requirements
- Disposition of input source and created data

3.5.4 Output Reports Identify the report names, distribution requirements, and any identifying numbers expected to be output from the run. Describe reports to be produced from the system run by other than standard means. 3.5.5 Restart/Recovery Procedures Provide instructions by which the operator can initiate restart or recovery procedures for the run. 3.5.6 Backup Procedures

Provide instructions by which the operator can initiate backup procedures. Cross-reference applicable instructions with the procedures in the contingency plan.

3.5.7 Problem Reporting/Escalation Procedure
Provide instructions for reporting problems to a point of contact. Include the person's name and phone numbers (that is, office, home, pager, etc.).

APPENDIX C-23 SYSTEMS ADMINISTRATION MANUAL


A Systems Administration Manual serves the purpose of an Operations Manual in distributed (client/server) applications.

1.0 GENERAL

1.1 Introduction and Purpose

This section introduces and describes the purpose of the Systems Administration Manual, the name of the system to which it applies, and the type of computer operation. 1.2 Project References

This section lists, at a minimum, the User Manual, Maintenance Manual, and other pertinent available systems documentation. 1.3 Glossary

This section lists all definitions or terms unique to this document or computer operation and subject to interpretation by the user of this document.

2.0 SYSTEM OVERVIEW

2.1 System Application

This section provides a brief description of the system, including its purpose and uses. 2.2 System Organization

This section describes the organization of the system by the use of a chart depicting components and their interrelationships. 2.3 Information Inventory

This section provides information about the data files and databases that are produced or referenced by the system.

2.3.1 Resource Inventory

This section lists all permanent files and databases that are referenced, created, or updated by the system. 2.3.2 Report Inventory This section lists all reports produced by the system, including each report name and the software that generates it. 2.4 Processing Overview

This section provides information that is applicable to the processing of the system. It includes system restrictions, waivers of operational standards, and interfaces with other systems. 2.5 Communications Overview

This section describes the communications functions and process of the system. 2.6 Security

This section describes the security considerations associated with the system. 2.7 Privacy Act Warning

If this system is covered by the Privacy Act, then this section provides the appropriate Privacy Act notice and warning. 3.0 SITE PROFILE(S)

This section contains information pertaining to the site(s) where the application is running. That information includes the information contained in the subsequent sections. 3.1 Site Location(s)

This is the official address(es) of the site(s). 3.2 Primary Site

For the site(s) designated as primary, this section lists the names and phone numbers of the essential personnel who serve as automated data processing contacts for the site.

4.0 SYSTEMS ADMINISTRATION

This section introduces the responsibilities of the System Administrator, as discussed in the subsequent sections. 4.1 User and Group Accounts

This section introduces topics related to system users.

4.1.1 Adding/Deleting Users This section describes procedures to create/delete user logins and password accounts. 4.1.2 Setting User Permissions This section describes procedures to give users/restrict access to certain files. 4.1.3 Adding/Deleting User Groups This section contains procedures to create/delete user groups. 4.1.4 Setting User Roles/Responsibilities This section describes the roles that are granted to each group or individual user(s). 4.2 Server Administration

This section describes procedures to set up servers, including naming conventions and standards.

4.2.1 Creating Directories
This section describes procedures to create server directories and provides a complete description of the existing directories.

4.2.2 Building Drive Mappings
This section describes procedures to create server drive mappings and provides a complete description of the existing drive mappings.

4.3 System Backup Procedures

This section describes procedures for regularly scheduled backups of the entire network, including program and data storage, and the creation and storage of backup logs. 4.3.1 Maintenance Schedule (Daily, Weekly) This section describes documented daily and weekly backup schedules and procedures. The procedures should include tape labeling, tracking, and rotation instructions. 4.3.2 Off-Site Storage Procedures This section describes the location, schedule, and procedures for off-site storage. 4.3.3 Maintaining Backup Log This section describes procedures for creating and maintaining backup logs.
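A minimal sketch of a scheduled backup that writes an archive, labels it for rotation, and appends to a backup log appears below; the paths, label scheme, and log format are hypothetical assumptions:

```python
import tarfile
from datetime import date
from pathlib import Path

# Sketch of a scheduled backup with a rotation label and a backup log entry.
def run_backup(source: str, backup_dir: str) -> Path:
    label = f"weekly-{date.today():%Y%m%d}"          # assumed rotation scheme
    archive = Path(backup_dir) / f"{label}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)
    # Append an entry to the backup log for tracking and tape/media rotation.
    with open(Path(backup_dir) / "backup.log", "a") as log:
        log.write(f"{date.today().isoformat()} {label} source={source}\n")
    return archive

# Example call with hypothetical paths:
# run_backup("/srv/data", "/backups")
```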

4.4 Printer Support

This section discusses procedures for installing, operating, and maintaining printers. 4.4.1 Maintenance (Configurations, Toner, Etc.) This section describes maintenance contracts, procedures to include installation and configuration of printer drivers, and equipment information. 4.4.2 Print Jobs (Moving, Deleting, Etc.) This section describes procedures to monitor, delete, and prioritize print jobs. 4.5 System Maintenance

This section discusses procedures for maintaining the file system.

4.5.1 Monitoring Performance and System Activity
This section contains procedures to monitor system usage, performance, and activity. It may include descriptions of system monitoring tools, the hours of peak demand, a list of system maintenance schedules, etc.

4.5.2 Installing Programs and Operating System Updates
This section includes procedures for installing and testing operating system updates. Once the updates are tested, provide instructions for moving or installing them into the operational environment.

4.5.3 Maintaining Audit Records of System Operation
This section describes procedures for the setup and monitoring of the operating system and application audit trails.

4.5.4 Maintenance Reports
This section includes procedures to create and update maintenance reports.
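As an illustration, a monitoring procedure might periodically log a snapshot such as the one sketched below; the metrics chosen and the use of POSIX load averages are assumptions, and actual monitoring tools will differ:

```python
import os
import shutil
from datetime import datetime

# Hedged sketch of a simple system-activity snapshot a monitoring procedure
# might log; the path and the chosen metrics are hypothetical assumptions.
def snapshot(path: str = "/") -> str:
    usage = shutil.disk_usage(path)
    pct_used = 100 * usage.used / usage.total
    load1, load5, load15 = os.getloadavg()   # available on POSIX systems only
    return (f"{datetime.now().isoformat(timespec='seconds')} "
            f"disk={pct_used:.1f}% load={load1:.2f}/{load5:.2f}/{load15:.2f}")

print(snapshot())
```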

4.6 Security Procedures

This section describes the process for obtaining identifications (IDs) and passwords. It includes information concerning network access and confidentiality requirements.

4.6.1 Issuing IDs and Passwords
This section describes procedures for issuing IDs and passwords for operating systems and applications.

4.6.2 License Agreements This section describes licensing agreements and procedures for ensuring that all licenses are current. 4.7 Network Maintenance

This section describes procedures to maintain and monitor the data communications network. 4.7.1 LAN Design This section contains a layout of the network. 4.7.2 Communications Equipment This section contains a layout of the telecommunications equipment. 4.8 Inventory Management

This section contains a complete hardware and software inventory, including make, model, version numbers, and serial numbers.
4.8.1 Maintaining Hardware and Software Configurations
This section describes procedures for maintaining the configuration information for the hardware and software actually installed.
4.8.2 Maintaining Floor Plans
This section describes procedures for maintaining floor plans showing the location of all installed equipment and for adding, deleting, and modifying the plans.
4.8.3 Installing Software/Hardware (New, Upgrades)
This section describes procedures for installing new or upgraded hardware and software.
4.8.4 Maintaining Lists of Serial Numbers
This section describes procedures for maintaining all serial number lists required at the site.
4.8.5 Maintaining Property Inventory
This section describes procedures for maintaining a property inventory at the site.
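A site inventory is easiest to keep consistent when each asset is recorded with the same fields named above. The following is a minimal sketch using a Python dataclass; the field set is an assumption drawn from this section rather than a mandated schema, and the example values are invented.

    from dataclasses import dataclass

    @dataclass
    class Asset:
        # One hardware or software item in the site inventory (Section 4.8).
        make: str
        model: str
        version: str
        serial_number: str
        location: str = "unassigned"  # ties the record to the floor plan

    inventory: list[Asset] = []
    inventory.append(Asset("ExampleCo", "Server X1", "n/a", "SN-0001", "Room 101"))

    # Serial number list required at the site (Section 4.8.4):
    serials = [a.serial_number for a in inventory]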

4.9 Training Backup Administrator

This section describes how to train a backup administrator.

4.10 End-User Support--Procedures for Support and Contract Information

This section provides necessary end-user contract information and the procedures for providing end-user support.
4.10.1 Escalation Procedures
This section describes the formal escalation procedures to be used by System Administrators in response to priority user problem resolution requests.

4.11 Documentation

This section describes the documentation required of System Administrators as they perform system administration.
4.11.1 Troubleshooting Issues
This section describes how to conduct and document troubleshooting activities.

4.12 Database Maintenance

This section introduces the responsibilities that relate to database and software application maintenance.
4.12.1 Database User/Group Access
Describe who provides database access and the procedures for granting access.
4.12.2 Adding/Deleting Users to Database
Identify the person responsible for adding and deleting database users. Include the procedures for adding/deleting users.
4.12.3 Setting User Permissions for Database
Identify the person responsible for setting the permissions for users on the database.
4.12.4 Adding/Deleting Groups for Database
Provide the procedures, and identify the person responsible, for adding/deleting groups of individuals to the database.
4.12.5 Re-indexing Database
Provide the procedures, and identify the person responsible, for re-indexing the database after changes have been made.
4.12.6 Packing/Compressing Database
Provide the procedures, and identify the person responsible, for packing/compressing the database.
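For database products that expose maintenance statements, re-indexing and compaction (Sections 4.12.5 and 4.12.6) can be scripted. A minimal sketch for a SQLite database follows; SQLite's REINDEX and VACUUM statements are standard, but the file name is an illustrative assumption and other database products use different commands.

    import sqlite3

    def maintain(db_path: str = "example.db") -> None:
        # Autocommit mode is required because VACUUM cannot run in a transaction.
        con = sqlite3.connect(db_path, isolation_level=None)
        try:
            con.execute("REINDEX")  # rebuild all indexes after heavy changes
            con.execute("VACUUM")   # pack/compress the database file
        finally:
            con.close()

    maintain()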

4.12.7 Data Entry/Modification/Deletion
Identify the person(s) who can make changes to the database. Include procedures for entering, modifying, and deleting information from the database.
4.12.8 Database Reporting
Identify the person(s) responsible for database reporting. Include what reports are generated, time frames, due dates, and storage of the reports.
4.12.9 Database Backup and Restore
Identify the person(s) responsible for performing database backups. This information should also be included in the Contingency Plan. Include the procedures to follow if the database needs to be restored.
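A scripted backup-and-restore helper can make the procedures in Section 4.12.9 repeatable. The sketch below uses the Connection.backup API in Python's standard sqlite3 module (available since Python 3.7); the file names are illustrative assumptions.

    import sqlite3

    def backup(src_path: str = "example.db", dst_path: str = "example.bak") -> None:
        # Copy the live database into a backup file while it remains usable.
        with sqlite3.connect(src_path) as src, sqlite3.connect(dst_path) as dst:
            src.backup(dst)

    def restore(bak_path: str = "example.bak", db_path: str = "example.db") -> None:
        # Restoring is the same copy in the opposite direction.
        with sqlite3.connect(bak_path) as src, sqlite3.connect(db_path) as dst:
            src.backup(dst)

    backup()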

4.13 Application Maintenance

4.13.1 Application User/Group Access
Describe who provides application access and the procedures for granting access.
4.13.2 Adding/Deleting Application Users
Identify the person responsible for adding and deleting application users. Include the procedures for adding/deleting users.
4.13.3 Setting User Application Permissions
Identify the person responsible for setting the permissions for users of the application.
4.13.4 Adding/Deleting Application Groups
Provide the procedures, and identify the person responsible, for adding/deleting application groups.
4.13.5 Procedures to Start and Stop the Application
Identify who is responsible for starting and stopping the application. Include the rationale for stopping the application and the steps to take to restart it after identified problems are corrected.
4.13.6 Application Flow Chart
Provide a flow chart depicting how information moves from the application to the database.
4.13.7 Description of Major Program or Sub-program Modules
Describe the processes within the application or module. If more than one module is operating for this system, describe each module.

APPENDIX C-24 TRAINING PLAN


The Training Plan outlines the objectives, needs, strategy, and curriculum to be addressed when training users on the new or enhanced information system. The plan presents the activities needed to support the development of training materials, coordination of training schedules, reservation of personnel and facilities, planning for training needs, and other training-related tasks. Training activities are developed to teach user personnel the use of the system as specified in the training criteria. To develop a Training Plan, refer to the outline. The list of training needs should include the target audiences and the topics on which training must be conducted. The training strategy should describe how those topics will be addressed; this includes the format of the training program, the list of topics to be covered, materials, time, space requirements, and proposed schedules. Discuss QA in terms of testing, course evaluation, feedback, and course modification/enhancement.

1.0 INTRODUCTION

This section provides a management summary of the entire plan. It is not required to provide information in this section if the descriptions provided in the subsequent sections are sufficient.

1.1 Background and Scope

This section provides a brief description of the project from a management perspective. It identifies the system, its purpose, and its intended users. This section also provides a high-level summary of the Training Plan and its scope.

1.2 Points of Contact

This section provides the organization name (code) and title of key points of contact for system development. It includes such points of contact as the Project Manager, Program Manager, QA Manager, Security Manager, Training Coordinator, and Training representative, as appropriate.

1.3 Document Organization

The organization of the Training Plan is described in this section.

1.4 Project References

This section provides a bibliography of key project references and deliverables that have been produced before this point. For example, these references might include the Project Plan, FRD, Test Plan, Implementation Plan, Conversion Plan, and Systems Design Documents.

1.5 Security and the Privacy Act

If applicable, this section provides a brief discussion of the system's security controls and the need for security and protection of sensitive data. If the system handles sensitive or Privacy Act information, include information about labeling system outputs as sensitive or Privacy Act-related. In addition, if the system is protected by the Privacy Act, include a notification of the Privacy Act's civil and criminal penalties for unauthorized use and disclosure of system data.

1.6 Glossary

This section is a glossary of all terms and abbreviations used in the plan. If it is several pages in length, it may be placed as an appendix.

2.0 REQUIREMENTS TRACEABILITY (OPTIONAL)

If possible, this section presents a traceability matrix that lists user requirements as documented in the FRD and traces how they are addressed in such documents as the Systems Design Document, Test Plan, and Training Plan. Cross-reference the user requirements and training needs in the appropriate sections of the Training Plan. The requirements matrix may be broken into segments, if appropriate. For example, provide a separate matrix of the Training Plan sections that trace to particular sections in the Systems Design Document, FRD, and the SOW.
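A traceability matrix of this kind is straightforward to maintain as structured data. The sketch below is a minimal Python illustration that maps requirement identifiers to the documents and sections that address them and flags untraced requirements; the identifiers and section numbers are invented for illustration.

    # Map each FRD requirement ID to the documents/sections that address it.
    matrix = {
        "REQ-001": {"Systems Design Document": "4.2", "Test Plan": "3.1",
                    "Training Plan": "6.0"},
        "REQ-002": {"Test Plan": "3.4"},  # not yet covered by training
    }

    def untraced(matrix: dict, document: str) -> list[str]:
        # Requirements with no entry for the given document need follow-up.
        return [req for req, docs in matrix.items() if document not in docs]

    print("Not addressed in Training Plan:", untraced(matrix, "Training Plan"))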

3.0 INSTRUCTIONAL ANALYSIS

3.1 Development Approach

This section discusses the approach used to develop the course curriculum and ensure quality training products. This description includes the methodology used to analyze training requirements in terms of performance objectives and to develop course objectives that ensure appropriate instruction for each target group. The topics or subjects on which the training must be conducted should be listed or identified.

3.2 Issues and Recommendations

Any current and foreseeable issues surrounding training are included in this section. Recommendations for resolving each issue, along with any constraints and limitations, should also be listed.

3.3 Needs and Skills Analysis

This section describes the target audiences for courses to be developed. Target audiences include technical professionals, user professionals, data entry clerks, clerical staff members, ADP and non-ADP managers, and executives. The tasks that must be taught to meet objectives successfully and the skills that must be learned to accomplish those tasks are described in this section. A matrix may be used to provide this information. Also in this section, the training needs for each target audience are discussed. If appropriate, this section should discuss needs and courses in terms of staff location groupings, such as headquarters and field offices.

4.0 INSTRUCTIONAL METHODS

4.1 Training Methodology

This section describes the training methods to be used in the proposed courses; these methods should relate to the needs and skills identified in Section 3.3, Needs and Skills Analysis, and should take into account such factors as course objectives, the target audience for a particular course, media characteristics, training setting criteria, and costs. The materials for the chosen training approach, such as course outlines, audiovisual aids, instructor and student guides, student workbooks, examinations, and reference manuals, should be listed or discussed in this section. Sample formats of materials can be included in an appendix, if desired.

4.2 Training Database

If applicable, this section identifies and discusses the training database and how it will be used during computer systems training. It discusses the simulated production data related to various training scenarios and cases developed for instructional purposes. This section also explains how the training database will be developed. If this section is not applicable to the system involved, indicate "Not applicable."

4.3 Testing and Evaluation

This section describes methods used to establish and maintain QA over the curriculum development process. This description should include methods used to test and evaluate training effectiveness, evaluate student progress and performance, and apply feedback to modify or enhance the course materials and structure. One source of feedback could be a course- or module-specific evaluation form for the course or instructor. This form should gather trainee reactions on the following topics: scope and relevance of the course or module, appropriateness of objectives, usefulness of assignments and materials, effectiveness of course training materials, stronger and weaker features of the course, adequacy of the facilities, timing or length of the course or module, effectiveness of the instructor(s), and participant suggestions and comments.

5.0 TRAINING RESOURCES

5.1 Course Administration

This section describes the methods used to administer the training program, including procedures for class enrollment, student release, reporting of academic progress, course completion and certification, monitoring of the training program, training records management, and security, as required.

5.2 Resources and Facilities

This section describes the resources required by both instructors and students for the training, including classroom, training, and laboratory facilities; equipment such as an overhead projector, projection screen, flipchart or visual aids panel with markers, and computer and printer workstations; and materials such as memo pads and pencils, diskettes, viewgraphs, and slides. Information contained in this section can be generic in nature and can apply to all courses. Specific course information and special needs may be itemized here as well or, if many different courses are involved, in Section 6, Training Curriculum.

5.3 Schedules

This section presents a schedule for implementing the training strategy and indicates responsible parties. Included are key tasks to be completed, such as when to set up training facilities and schedule participants; other activities essential to training; and dates on which those tasks and activities must be finished. This section provides an overview of tasks; deliverables, such as approach and evaluation forms; scheduled versus actual milestones; and estimated efforts, such as the workplan. In the final version of the Training Plan, actual course schedules by location should be included.

5.4 Future Training

This section discusses scheduled training modifications and improvements. This information can include periodic updating of course contents, planned modifications to training environments, retraining of employees, and other predicted changes. Indicate procedures for requesting and developing additional training.

6.0 TRAINING CURRICULUM

This section provides descriptions of the components that make up each course. If a large number of courses or modules is described, place these descriptions in an appendix. Subsections of this section, if any, should be created for each course. Each course may comprise one or more modules. A course description should be developed for each module. At a minimum, each course description should include the course/module name; the length of time the course/module will take; the expected class size (minimum, maximum, optimal); the target audience; course objectives; module content/syllabus; specific training resources required, such as devices, aids, equipment, materials, and media to be used; and any special student prerequisites. The course description could also include information on instructor-to-student ratio, total number of students to be trained, estimated number of classes, location of classes, and testing methods.
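Course descriptions with a fixed minimum field set are easier to keep complete when stored as structured records. The following minimal Python sketch captures the fields this section requires; the example values are invented for illustration and are not part of any mandated curriculum.

    from dataclasses import dataclass, field

    @dataclass
    class CourseModule:
        # Minimum course/module description fields required by Section 6.0.
        name: str
        length_hours: float
        class_size: tuple[int, int, int]  # (minimum, maximum, optimal)
        target_audience: str
        objectives: list[str]
        syllabus: list[str]
        resources: list[str] = field(default_factory=list)
        prerequisites: list[str] = field(default_factory=list)

    intro = CourseModule(
        name="System Overview",
        length_hours=4.0,
        class_size=(5, 20, 12),
        target_audience="data entry clerks",
        objectives=["Log on and navigate the main menu"],
        syllabus=["Logon", "Menus", "Help facilities"],
    )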

APPENDIX C-25 USER MANUAL


1.0 INTRODUCTION

The User Manual contains all essential information for the user to make full use of the information system. This manual includes a description of the system functions and capabilities, contingencies and alternate modes of operation, and step-by-step procedures for system access and use. Use graphics where possible in this manual. The manual format may be altered if another format is more suitable for the particular project.

1.1 Purpose and Scope

This section provides a description of the purpose and scope of the User Manual.

1.2 Organization

This section describes the organization of the User Manual.

1.3 Points of Contact

This section identifies the organization codes and staff (and alternates if appropriate) who may assist the system user. If a help desk facility or telephone assistance organization exists, describe it in this section.

1.4 Project References

This section provides a bibliography of key project references and deliverables that have been produced prior to this point in the system development process. References might include the QA Plan, CM Plan, FRD, or Systems Design Document.

1.5 Primary Business Functions

This section discusses the business perspective of the user's primary responsibilities and tasks as they are supported by the system. Introduce the business functions here so that later sections can focus on the systematic steps that support them.

1.6 Glossary

This section provides a glossary of all terms and abbreviations used in the manual. If the glossary is several pages or more in length, it may be placed as an appendix.

2.0 SYSTEM CAPABILITIES

This section provides a brief overview of the system and its capabilities.

2.1 Purpose

This section describes the purpose of the application system.

2.2 General Description

This section provides an overview of the system's capabilities, functions, and operation, including the specific high-level functions performed by the system. Use graphics and tables, if appropriate.

2.3 Privacy Act Considerations

If the system is protected by the Privacy Act, include this notification of the Privacy Act's "Civil and Criminal Penalties" found in U.S. Code Section 552a, Records Maintained on Individuals, concerning the unauthorized use and disclosure of system data:

Criminal Penalties
(1) Any officer or employee of an agency, who by virtue of employment or official position, has possession of, or access to, agency records which contain individually identifiable information, the disclosure of which is prohibited by U.S. Code Section 552a or by rules or regulations established thereunder, and who, knowing that disclosure of the specific material is so prohibited, willfully discloses the material in any manner to any person or agency not entitled to receive it, shall be guilty of a misdemeanor and fined not more than $5,000.
(2) Any officer or employee of any agency who willfully maintains a system of records without meeting the requirement to publish a notice in the Federal Register regarding the existence and character of the system of records shall be guilty of a misdemeanor and fined not more than $5,000.
(3) Any person who knowingly and willfully requests or obtains any record concerning an individual from an agency under false pretenses shall be guilty of a misdemeanor and fined not more than $5,000.

3.0 DESCRIPTION OF SYSTEM FUNCTIONS

This section describes each specific function of the system. In this high-level section, describe any conventions to be used in the associated subsections. Each of the subsequent sections should be repeated as often as necessary to describe each function within the system. The term "Function X" in the subsection title is replaced with the name of the function.

3.1 Function X Title

This section provides the title of the specific system function.

3.2 Detailed Description of Function

This section provides a description of each function. Include the following, as appropriate:
- Purpose and uses of the function
- Initialization of the function, if applicable
- Execution options associated with this function
- Description of function inputs
- Description of expected outputs and results
- Relationship to other functions
- Summary of function operation

3.3 Preparation of Function Inputs

This section defines required inputs. These inputs should include the basic data required to operate the system. The definition of the inputs includes the following:
- Title of each input
- Description of the inputs, including graphic depictions of display screens
- Purpose and use of the inputs
- Input medium
- Limitations and restrictions
- Format and content of inputs, and a descriptive table of all allowable values for the inputs
- Sequencing of inputs
- Special instructions
- Relationship of inputs to outputs
- Examples

3.4 Results

This section describes expected results of the function. Include the following in the description, as applicable:
- Description of results, using graphics, text, and tables
- Form in which the results will appear
- Output form and content
- Report generation
- Instructions on the use of outputs
- Restrictions on the use of outputs, such as those mandated by Privacy Act and Computer Security Act restrictions
- Relationship of outputs to inputs
- Function-specific error messages
- Function-specific or context-sensitive help messages associated with this function
- Examples

4.0 OPERATING INSTRUCTIONS

This section provides detailed, step-by-step system operating instructions.

4.1 Initiate Operation

This section contains procedures for system logon and system initialization to a known point, such as a system main menu screen. This initialization procedure should describe how to establish the required mode of operation and set any initial parameters required for operation. Software installation procedures should be included if the software is distributed on diskette and must be downloaded before each use.

4.2 Maintain Operation

This section defines procedures to maintain the operation of the software where user intervention is required.

4.3 Terminate and Restart Operations

This section defines procedures for normal and unscheduled termination of the system operations and should define how to restart the system.

5.0 ERROR HANDLING

This section should address error message and help facilities. Additional information and subsections may be added as necessary. Included in this section should be a list of all possible error messages, including the following:
- Any numeric error codes associated with the error message
- A description of the meaning of the error message
- A discussion of how to resolve the error
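One simple way to keep the error list consistent between the manual and the software is a single error catalog. The following minimal Python sketch shows one assumed structure; the codes, messages, and section cross-references are invented for illustration.

    # Hypothetical error catalog: code -> (meaning, resolution).
    ERRORS = {
        1001: ("Login failed: unknown user ID",
               "Verify the ID with the System Administrator."),
        1002: ("Session terminated unexpectedly",
               "Restart the application and retry."),
    }

    def describe(code: int) -> str:
        meaning, resolution = ERRORS.get(
            code, ("Unknown error", "Contact the help desk."))
        return f"Error {code}: {meaning}. Resolution: {resolution}"

    print(describe(1001))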

6.0 HELP FACILITIES

This section describes any resident help software or any Service or contractor help desk facility that the user can contact for error resolution. Help desk telephone numbers should be included.

APPENDIX C-26 CONTINGENCY PLAN


OMB A-130, "Management of Federal Information Resources," Appendix III, "Security of Federal Automated Resources," requires the preparation of plans for general support systems and major applications to ensure continuity of operations. The purpose of preparing for contingencies and disasters is to provide for the continuation of critical missions and business functions in the event of disruptions. The preparation for handling contingencies and disasters is generally called contingency planning, although it has many names (e.g., disaster recovery, business continuity, continuity of operations, or business resumption planning). A contingency plan, which consists of an emergency response plan, a backup operations plan, and a post-disaster recovery plan, must be prepared for all general support systems. A contingency plan consisting of a backup operations plan and a post-disaster recovery plan must be prepared for all major applications. Typically, the major application contingency plan identifies the critical business functions needed to ensure the availability of essential services and programs, while the general support system contingency plan ensures the continuity of operations. Organizations whose major applications process at a general support facility should work with the facility management to develop a plan for post-disaster recovery (i.e., which applications should be restored first). A contingency plan for general support systems describes the appropriate response to any situation that jeopardizes the continuity of information processing and/or telecommunications services. The plan is a series of written action items that document the process to be followed to support critical applications in the event that they are interrupted or destroyed. It provides an alternative means of automated processing or manual support during a disruption. Post-disaster recovery plans are detailed plans that provide for the orderly restoration of the general support system and telecommunications processing that are the primary means of performing business functions. One of the most important aspects of successful contingency planning is the continual testing and evaluation of the plan itself. Developing test plans that adequately and reliably exercise the contingency plan requires considerable skill and great care, so that the tests are realistic while still economically feasible. Care must be taken to see that the tests involve the most important systems to be supported in the contingency environment. A contingency plan should consist of three parts, which address distinct, mutually exclusive sets of activities:

1.0 PRELIMINARY PLANNING

This part of the plan describes the purpose, scope, assumptions, responsibilities, and overall strategy relative to the plan. Misconceptions concerning these concepts are quite common and must be clearly addressed to ensure they are communicated to those who must effectively respond to a contingency by implementing the plan. This part should conclude with a section that provides for recording changes to the plan. Recommended contents for each section of Preliminary Planning are presented below.

1.1 Purpose
This section should describe the reason and objective for having a contingency plan.
1.2 Scope
This section should describe in concise terms the extent of the coverage of the plan.
1.3 Assumptions
A contingency plan is based on several categories of assumptions. Most can be established only after the completion of a risk assessment. See Security Risk Assessment in Appendix C-16. The entire list of assumptions for inclusion in the document cannot be completed until well along in the planning cycle. Included in the set of assumptions should be the following:
- Nature of the Problem
- Priorities
- Commitments to or Assumptions of Support
1.4 Responsibilities
This section should document the specific responsibilities as assigned by management to all activities and personnel associated with the plan.
1.5 Strategy
The selection of appropriate strategies should follow the risk assessment. Until the risk assessment is completed, it is difficult to know the critical systems that must be maintained and the demands for resources that will be made to support those critical systems. Information for use in developing strategy is categorized by area as follows:
1.5.1 Emergency Response
The strategies selected must provide a sufficient base upon which procedures can be devised that afford all personnel the immediate capability to respond effectively to emergency situations where life and property have been, or may be, threatened or harmed.
1.5.2 Backup Operations
Most backup sites will not have sufficient equipment, personnel, supplies, etc., to sustain the complete operational requirements of another facility. In this case, a more detailed backup strategy must be developed.
1.5.3 Post-Disaster Recovery Actions

The strategy for recovery must be linked closely with that of Backup Operations, as initiation of recovery actions may overlap. At the very least, the post-disaster recovery plan should be the next step after backup operations in restoring the IT processing capability after partial or complete destruction of the facility or other resources.
1.6 Record of Changes
An essential element of any volatile document, such as a contingency plan, is a method of preparing, posting, and recording changes to the document. Entries in this section include change number; date; pages changed, deleted, or inserted; name of the person posting the change; when posted; plan destruction; and other information as local conditions warrant.
1.7 Security of the Plan
Once documented, the plan provides a significant amount of information about the organization which, if misused, could result in considerable damage or embarrassment. Consequently, the plan should be made available only to those personnel affected by the plan.

2.0 PREPARATORY ACTIONS

This section of the contingency plan is a key part of the document. Preparatory Actions are critical to the emergency response, backup, and recovery from all but the most routine problems.
2.1 People
No other functional element is so critical to the recovery from damaging losses. This section should provide names, addresses, and telephone numbers of all people who may be required in any backup or recovery scenario. Alternates for persons with specialized skills or with skills in very short supply must be designated.
2.2 Data
Care must be taken to make sure that multiple generations of backup files are kept, so that the period spanned is short enough to satisfy the needs of currency and long enough to span the period needed for recovery. It is essential that all data on which backup and recovery depend be adequately recorded, maintained in a current condition, and that backup copies be adequately secured.
2.3 Software
This section should contain the relationships of programs to jobs, to data, to functional areas of the supported organization, and to people, and more, as may be needed.

2.4 Hardware
Contingency plans should minimize, to the greatest feasible extent, dependence on rapid replacement of hardware. This section should contain a list of the hardware and where replacements are available.
2.5 Communications
A plan, including a schedule, should be in place and agreed upon by all parties who will have a role in establishing communications at an alternate site, to ensure recovery of communications within a reasonable period.
2.6 Supplies
This section should describe any special supplies that are needed to recover critical operations.
2.7 Transportation
This section should describe the location of the backup capability. When choosing a backup site, consideration should be given to accessibility; the site should also be free of whatever external problems are hampering the supported facility.
2.8 Space
Describe the physical location where the recovery operations will take place. When selecting the space, consider both space that can be used temporarily and space into which the operation can relocate with relative permanence.
2.9 Power and Environmental Controls
Describe the power and environmental controls that are required for the recovery of IT processing.
2.10 Documentation
This section of the plan should describe all backup documentation that is kept in the off-site facility.

3.0 ACTION PLAN

This part of the plan consists of the "what to do" actions to be accomplished by those personnel or activities identified in Section 1.4, Responsibilities. This part of the contingency plan includes those things which are to be done in response to a set of problem scenarios. Problem scenarios should be developed based on the outcome of the risk assessment. This part should consist only of concise, short instructions on the specific actions to take in response to each of the previously developed problem scenarios, for each of the three categories listed below.

3.1 Emergency Response
This section should include the immediate actions to be taken to protect life and property and to minimize the impact of the emergency.
3.2 Backup Operations
Describe what must be done to initiate and effect backup operations. Any "how to" instructions for each area should be included in Section 2.0, Preparatory Actions.
3.3 Recovery Actions
These instructions should be limited to describing what to do in effecting recovery from disasters.

Note: See Federal Information Processing Standards (FIPS) Publication 87, "Guidelines for ADP Contingency Planning," for more detailed guidance.

APPENDIX C-27 SOFTWARE DEVELOPMENT DOCUMENT


1.0 INTRODUCTION

The software development document contains all preparations pertaining to the development of each unit or module, including the software, test cases, test results, approvals, and any other items that will help to explain the functionality of the software. The document is dynamic, is maintained by the system development team, and should be constantly updated as the system's development progresses. The software development folder should include the following information for each unit:
- Description of the unit's functionality in narrative format
- Description of development methodologies used
- Requirements in the functional requirements document allocated to this unit or module
- Completed traceability matrix displaying the unit's test cases satisfying the functional requirements in the test plan
- Source code listing
- Controlled libraries/directories/tables
- All data necessary to conduct unit testing
- Unit test results and analysis
- System Technical Lead sign-off for design walk-through, approval of code, and completion of each unit
- Completed Software Development Document Check-Off sheet (attached)

2.0 ROLES AND RESPONSIBILITIES

The team members have the following roles and responsibilities:
- The application developer assigned the primary responsibility for the module or unit creates a file folder for the unit, labels it according to the name of the unit, and places it in the appropriate place in the project team file cabinet.

- The application developer(s) add copies of the indicated documentation to the folder as they are created.
- The project QA representative reviews the contents of the folder for completeness and points out discrepancies to the developer assigned primary responsibility for the module or unit.
- The developer assigned primary responsibility for the module or unit completes the Software Development Document Check-Off sheet and arranges for System Technical Lead review and approval when needed.
- The folder is available to all project team members for review, but if removed from the file cabinet, it must be replaced with a check-out card indicating who checked it out, when, and where it will be located.

2.1 PROCESS

Fill out the following sections of the Check-Off sheet:
Requirements - Place a checkmark to the left of each question when it is determined that the answer is "Yes." This indicates that there is a match between the requirements traceability matrix and the requirements addressed by this module.
Functionality - Place a checkmark to the left of each question when it is determined that the answer is "Yes." This indicates that a complete narrative description of the module's functionality is available, that a walk-through of the module's design was conducted before the start of programming, and that System Technical Lead approval was granted to begin the programming work.
Source Code - Place a checkmark to the left of the question when it is determined that a current copy of the program source listing has been placed in the folder.
Libraries, Directories, and Tables - Place a checkmark to the left of the question when it is determined that the program source code and copybook libraries and associated electronic tables are identified and copies, as needed, are in the folder.

Development Methodologies - Place a checkmark to the left of the question when it is determined that the programming methodology descriptions are all included in the folder.
Test Data - Place a checkmark to the left of each question when it is determined that the location and identity of all needed unit test data are included in the folder.
Test Analysis - Place a checkmark to the left of the question when it is determined that the unit has been thoroughly tested.
Sign-Off - Date and sign the certification for completion of coding and unit testing for the module.

SOFTWARE DEVELOPMENT FOLDER CHECK-OFF SHEET

REQUIREMENTS
Has each requirement in the functional requirements document allocated to this unit been identified using the traceability matrix?
Have derived requirements found during the development of this unit been identified, justified, and put in the functional requirements document?

FUNCTIONALITY
Is the functionality of this unit fully described? Is the description in narrative form?
Was a design walk-through conducted?
Was permission granted to begin programming?

SOURCE CODE
Is the source code listing of the unit included in this folder?

LIBRARIES, DIRECTORIES, AND TABLES
Are all coded entities included in the folder?

DEVELOPMENT METHODOLOGIES
Are all development methodologies for the development effort described in the folder?

TEST DATA
Are all data necessary to conduct testing referenced in this folder?

TEST ANALYSIS
Was the unit thoroughly tested and were all logical paths verified?

SYSTEM DEVELOPER
I certify that this software development document is complete, that the unit ________ defined in this folder has successfully completed development and unit testing, and that the unit is ready to be baselined and integrated into the system.

Date ________

System Developer: _________________

System Technical Lead Initials: ________

APPENDIX C-28 INTEGRATION DOCUMENT


The integration document defines the activities necessary to integrate the software units and software components into the software item. The integration document contains an overview of the system, a brief description of the major tasks involved in the integration, and the overall resources needed to support the integration effort. The plan is developed during the Development Phase and is updated during the Integration and Test Phase; the final version is provided in the Implementation Phase.

1.0 INTRODUCTION

This section provides an overview of the information system and includes any additional information that may be appropriate.

1.1 Purpose and Scope

This section describes the purpose and scope of the Integration Document. Reference the system name and identify information about the system to be integrated.

1.2 System Overview

This section provides a brief overview of the system to be integrated, including a description of the system and its organization. Describe the environment/infrastructure and how this unit or system will integrate into it. Include any risk involved and the mitigating procedures to reduce or eliminate that risk.
1.2.1 System Description
This section provides an overview of the processes the system is intended to support. If the system is a database or an information system, provide a general description of the type of data maintained and the operational sources and uses of those data. Also include all interfaces to other units or systems.
1.2.2 Unit Description
This section provides an overview of the processes the unit (or module) is intended to support. If more than one unit is being integrated, provide descriptions of each unit in this section.

1.3 Project References

This section provides key project references and deliverables that have been produced before this point in the project development. Provide policies or laws that give rise to the need for this plan. For example, these references might include the Project Management Plan, Acquisition Plan, FRD, Test Plan, Conversion Plan, and Systems Design Document.

1.4 Glossary

Provide a glossary of all terms and abbreviations used in the document. If it is several pages in length, it may be placed in an appendix.

2.0 MANAGEMENT OVERVIEW

This section and its subsections provide a brief description of the integration and the major tasks it involves.

2.1 Description of Integration

This section provides a brief description of the system units and the integration approach.

2.2 Responsibilities

In this section, identify the System Proponent, the name of the responsible or issuing organization, and titles and telephone numbers of the staff who serve as points of contact for the system integration. It should also include who has approval authority for each unit of the system. If this activity is contracted out, list the names and phone numbers of the contractor responsible for the development and integration.

2.3 Activities and Tasks

This section provides a brief description of each major task required for the integration of the system. Also include a schedule for when these tasks are expected to be completed. Add as many subsections as necessary to this section to describe all the major tasks adequately. Include the following information for the description of each major task, if appropriate:
a) What the task will accomplish
b) Resources required to accomplish the task
c) Key person(s) responsible for the task
d) Criteria for successful completion of the task
Examples of major tasks are the following:
a) Providing overall planning and coordination for the integration
b) Providing appropriate training for personnel
c) Providing appropriate documentation on each unit for integration
d) Providing audit or review reports
e) Documenting the software units and database
f) Establishing software requirements
g) Establishing test procedures
h) Conducting unit testing
i) Conducting qualification testing
j) Integrating units into the system

3.0 INTEGRATION SUPPORT

This section describes the support software, materials, equipment, and facilities required for the integration, as well as the personnel requirements and training necessary for the integration.

3.1 Resources and their Allocation

In this section, list all support software, materials, equipment, and facilities required for the integration. Describe the test environment and any resources needed. Describe the number of personnel needed and an estimate of the costs for them.

3.2 Training

This section addresses the training, if any, necessary to prepare for the integration and maintenance of the system; it does not address user training, which is the subject of the Training Plan. If contractors are performing the integration functions and activities, this may not be necessary. If, however, company staff are performing these activities, some training might be needed. List the course(s) needed by title, instructor, and cost.

3.3 Testing

In this section, list all the test requirements for each unit. If more than one unit is being tested, include a description for each unit. Include descriptions of the data included, the procedures for testing, who is responsible for the testing, and a schedule. This could be accomplished in one plan or several, depending on the complexity of the unit being tested.

3.4 Change Procedures and History

Include all changes made during the unit testing. This information should be included in the Configuration Management Plan and updated during the Development Phase.

APPENDIX C-29 TEST ANALYSIS REPORT


The Test Analysis Report documents software testing - unit/module, subsystem integration, system, user acceptance, and security - as defined in the test plan. The Test Analysis Report records results of the tests, presents the capabilities and deficiencies for review, and provides a means of assessing software progression to the next stage of development or testing. Results of each type of test are added to the software development document for the module or system being tested. Reports are created as required in the remaining phases. The set of Test Analysis Reports provides a basis for assigning responsibility for deficiency correction and follow-up, and for preparation of a statement of project completion. Test Problem Report forms are generated as required and are attached to the Test Analysis Reports during testing at the integration level and higher. The disposition of problems found, starting with integration testing, will be traced and reported under configuration control.

1.0 PURPOSE

This section should present a clear, concise statement of the purpose for the Test Analysis Report.

2.0 SCOPE

This section identifies the software application system tested and the test(s) conducted covered by this Test Analysis Report. The report summarizes the results of tests already conducted and identifies testing that remains to be conducted. Provide a brief summary of the project objectives, and identify the System Proponent and users.

3.0 REFERENCE DOCUMENTS

This section provides a bibliography of key project references and deliverables applicable to system software testing. These references might include the FRD, User Manual, Operations Manual, Maintenance Manual, Test Plan, and prior Test Analysis Reports.

3.1 Security

This section describes any security considerations associated with the system or module being tested, the test analysis, and the data being handled - such as confidentiality requirements, audit trails, access control, and recoverability. If this Test Analysis Report is not documenting the formal security test, also summarize the security capabilities included in the system or module test and itemize the specific security deficiencies detected while conducting the test. The results of specific tests, findings, deficiency analysis, and recommendations will be discussed in the subsequent sections. Reference those portions of this document

that specifically address system security issues. If no deficiencies were detected during the system or module test, state this fact.

3.2 Glossary

This section defines all terms and provides a list of abbreviations used in the Test Analysis Report. If the list is several pages in length, it may be placed as an appendix.

4.0 TEST ANALYSIS

This section describes the results of each test performed. Tests at each level should include verification of access control and system standards, functionality, and error processes. Repeat the subsections of this section for each test performed.

4.1 Test Name

The test performed for the specified unit, module, subsystem, or system is discussed in this section. For each test, provide the subsequent sections.
4.1.1 System Function
A high-level description of the function tested and a description of system capabilities designed to satisfy these functions are contained in this section. Each system function should be described separately.
4.1.2 Functional Capability
This section evaluates the performance of each function demonstrated in the test. This section also assesses the manner in which the test environment may be different from the operational environment and the effect of this difference on functional capabilities.
4.1.3 Performance Capability
This section quantitatively compares the software performance characteristics with the criteria stated in the test plan. The comparison should identify deficiencies, limitations, and constraints detected for each function during testing. If appropriate, a test history or log can be included as an appendix.

5.0 SOFTWARE AND HARDWARE REQUIREMENTS FINDINGS

This section summarizes the test results, organized according to the numbered requirements listed in the Traceability section of the test plan. Each numbered requirement should be described in a separate section. Repeat the subsections of this section for each numbered requirement covered by the test plan.

5.1 Requirement Number and Name

The requirement number provided in the title to this section is the number from the requirements traceability matrix in the test plan, and the name provided is the requirement's short name.
5.1.1 Findings
This subsection briefly describes the requirement, including the software and hardware capabilities, and states the findings from one or more tests.
5.1.2 Limitations
This subsection describes the range of data values tested, including dynamic and static data, for this requirement and identifies deficiencies, limitations, and constraints detected in the software and hardware during the testing.

6.0 SUMMARY AND CONCLUSIONS

6.1 Demonstrated Capabilities

This section provides an overview and summary analysis of the testing program. Describe the overall capabilities and deficiencies of the tested software module or system. Where tests were intended to demonstrate one or more specific performance requirements, findings should be presented that compare the test results with the performance requirements. Include an assessment of any differences in the test environment versus the operational environment that may have had an effect on the demonstrated capabilities. Provide a statement, based on the results of the system or module test, concerning the adequacy of the system or module to meet overall security requirements.

6.2 System Deficiencies

This section describes test results showing software deficiencies. Identify all problems by name and number when placed under configuration control. Describe the cumulative or overall effect of all detected deficiencies on the system or module.

6.3 System Refinements

This section itemizes any indicated improvements in system design or operation based on the results of the test period. Accompanying each improvement or enhancement suggested should be a discussion of the added capability it provides and the effect on the system design. The improvements should be indicated by name and requirement number when placed under configuration control.

6.4 Recommendations and Estimates

This section provides a statement describing the overall readiness for system implementation. For each deficiency, address the effect on system performance and

design. Include any estimates of time and effort required for correction of each deficiency and any recommendations on the following:
- The urgency of each correction
- Parties responsible for corrections
- Recommended solution or approach to correcting deficiencies

6.5 Test Problem Report

This section contains copies of the Test Problem Reports related to the deficiencies found in the test results. The Test Problem Report will vary according to the information system development project, its scope and complexity, etc. Test Problem Report forms are generated as required and are attached to the Test Analysis Reports during testing at the integration level and higher. The disposition of problems found, starting with integration testing, should be tracked and reported under configuration control.
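Because problem dispositions must be tracked under configuration control, a small structured record per Test Problem Report helps. A minimal Python sketch follows; the field values and status names are invented assumptions, not a mandated scheme.

    from dataclasses import dataclass

    @dataclass
    class TestProblemReport:
        # One problem found at the integration level or higher (Appendix C-31).
        number: str
        program: str
        expected: str
        actual: str
        status: str = "OPEN"  # assumed states: OPEN, CORRECTED, DEFERRED

    reports = [TestProblemReport("TPR-001", "Payroll batch", "Run completes",
                                 "Abends on empty input file")]

    open_items = [r.number for r in reports if r.status == "OPEN"]
    print("Unresolved problems:", open_items)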

6.6 Test Analysis Approval Determination Form

This section contains one copy of the Test Analysis Approval Determination form. This form briefly summarizes the perceived readiness for migration of the software. In the case of a User Acceptance Test, it serves as the user's recommendation for migration to production.

APPENDIX C-30 TEST ANALYSIS APPROVAL DETERMINATION


The Test Analysis Approval Determination (TAAD) form is completed immediately following the completion of testing (for all testing levels above the integration test) for software to be delivered to the company. This form briefly summarizes the readiness, as perceived by the test engineer, for delivery of the software to the next test phase. In the case of the User Acceptance Test, it serves as the user's recommendation for fielding the software release or migration to production. The TAAD form for non-mainframe applications and mainframe migration is attached. The TAAD is to be initiated by the T&E organization and addressed to the company Manager. The form is signed by the responsible test engineer and supervisor. The company Manager signs the form, signifying receipt from the test organization; the form is then attached to the Test Analysis Report (see Appendix C-29, Test Analysis Report).

Test Analysis Approval Determination Outline: Non-Mainframes

DATE: _________________
FROM: _________________
TO: __________________

We have reviewed the test results for the following application release:
TITLE:

We recommend:
( ) a. Full acceptance. The Test Analysis Report describes any problems encountered, which are now corrected.
( ) b. Full implementation with modifications implemented in a future release. The Test Analysis Report describes the outstanding discrepancies and the potential impact of these items to the End User.
( ) c. Partial implementation. The Test Analysis Report details the recommended implementation limitations, and describes the impact and expected results of this alternative.
( ) d. Rejection. The Test Analysis Report describes the reasons.

SIGNATURE: ___________________    ___________________
           Test Engineer          Date

SIGNATURE: ___________________    ____________________
           T&E Leader             Date

Test Analysis Approval Determination Outline: Mainframes

DATE: _______________
FROM: _______________
TO: __________________

We have reviewed the test results for the following application migration:
TITLE:

We recommend:
( ) a. Migration to Production. The Test Analysis Report describes any problems encountered, which are now corrected.
( ) b. Migration to Production with modifications migrated in a future release. The Test Analysis Report describes the outstanding discrepancies and the potential impact of these items to the End User.
( ) c. Partial migration. The Test Analysis Report details the recommended migration limitations, and describes the impact and expected results of this alternative.
( ) d. Rejection. The Test Analysis Report describes the reasons.

SIGNATURE: ___________________    ___________________
           Test Engineer          Date

SIGNATURE: ___________________    ___________________
           T&E Leader             Date

APPENDIX C-31 TEST PROBLEM REPORT


The Test Problem Report form (see attached) is generated as required and is attached to the Test Analysis Report (Appendix C-29) during testing at the integration level and higher. The disposition of problems found, starting with integration testing, will be tracked and reported under configuration control. Generate multiple copies of Test Problem Reports related to the deficiencies found in the test results, and track problem(s) until they are resolved. The Test Problem Report will vary according to the information system development project, its scope, and its complexity.

Test Problem Report Outline

TO: __________________
FROM: _________________
PREPARER/CONTACT: __________________
PHONE: ______________
PROGRAM BEING TESTED: _________________

DESCRIPTION OF TEST PROBLEM

A. Expected Results

B. Actual Results

DISPOSITION OF PROBLEM

Action Taken and Date Corrected

Risk Impact if Problem Not Corrected

Changes Required for Existing Documentation

SIGNATURES

____________________    ______________________
Project Manager         Date

_____________________    ___________________
System Developer         Date

APPENDIX C-32 CHANGE IMPLEMENTATION NOTICE


For the _________________ System

This notice is to request a change for the:

Application Name: ___________________________
Located at: ___________________________

The change requested is as follows:
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________
_____________________________________________________________________

This office rates the urgency of this request as:
____ Critical (Address ASAP, possibly with a patch)
____ Important (Address in the next version release)
____ Nice to have (Not necessary to operation, but would improve the system)

Change implementation notice (CIN) number: ________________________

Signed: ________________________          Date: ____________
        Local System Administrator

Signed: ________________________          Date: ____________
        Suggesting End User

APPENDIX C-33 VERSION DESCRIPTION DOCUMENT


1.0 INTRODUCTION

The Version Description Document (VDD) is the primary configuration control document used to track and control versions of software to be released to the operational environment. It is a summary of the features and contents for the software build. It identifies and describes the version of the software CI being delivered to the company, including all changes to the software CI since the last VDD was issued. Every unique release of the software (including the initial release) shall be described by a VDD. If multiple forms of the software CI are released at approximately the same time (such as to different sites), each must have a unique version number and a VDD. The VDD is part of the software CI product baseline. The VDD, when distributed, should be sent with a cover memo that summarizes, on a single page, the significant changes that are included in the release. This will serve as an executive summary for the details found in the attached VDD. The summary should be titled, on the cover memo, as Summary of Changes.

1.1 Roles and Responsibilities

The following roles and responsibilities apply to the VDD:
- The CM representative will prepare the VDD with the help of the project team.
- Members of the Project Manager's organization will normally prepare Sections 4.5, Adaptation Data, through 4.9, Glossary, the appendices, and the Summary of Changes cover memo.

1.2 Process

A VDD is prepared according to the outline at the end of this document, and specific instructions are provided in the subsequent sections.

2.0 SCOPE

2.1 Identification

Provide full identification number(s), title(s), and abbreviation(s); and, if applicable, provide the version number(s) and release number(s). Identify the intended recipients of the VDD.

2.2 Applicability

Identify the intended recipients of the software release and the operating system to be used.

2.3 System Overview

Provide a brief statement of the purpose of the system and the environments to which this document applies. Describe the general nature of the system and software; summarize the history of system development, operation, and maintenance; identify current and planned operating sites; and list other relevant documents.

2.4 Documentation Overview

Summarize the purpose and contents of this document and describe any security or privacy considerations associated with its use.

2.5 Points of Contact

Provide a list of points of contact, both INS and performance contractor(s), involved in this effort.

3.0 REFERENCE DOCUMENTS

List the number, title, revision, and date of all documents referenced in or used in the preparation of this VDD. If this VDD is an update to an existing system, list the VDD that this version is replacing as a reference document.

4.0 VERSION DESCRIPTION

Summarize briefly the contents of the ensuing subparagraphs (to include materials contained in the release, software components of the subsystem software CI, documents used to establish the configuration of the software CI, and any known problems).

4.1 Inventory of Materials Released

List by CM numbers, titles, abbreviations, dates, version numbers, and release numbers (as applicable), all physical media (for example, listings, tapes, disks) and associated documentation that make up the software version being released. Include applicable security and privacy considerations for these items, safeguards for handling them (such as concerns for static and magnetic fields), and instructions and restrictions regarding duplication and license provisions.

4.2 Inventory of Software Contents

List by identifying numbers, titles, abbreviations, dates, version numbers, and release numbers (as applicable), all computer files that make up the software version being released. Any applicable security and privacy considerations should be included.
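For illustration only, the following Python sketch generates the kind of file inventory described above by walking a release directory and recording each file's relative path, size, date, and a SHA-256 digest. The directory name is an assumption for this example; this SDLC does not prescribe a particular tool or format.

# Illustrative sketch: build a release file inventory with checksums.
import hashlib, os, time

def inventory(release_dir):
    items = []
    for root, _, files in os.walk(release_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            date = time.strftime("%Y-%m-%d", time.localtime(os.path.getmtime(path)))
            items.append((os.path.relpath(path, release_dir),
                          os.path.getsize(path), date, digest))
    return items

# Hypothetical usage; "./release" is an assumed directory name.
for rel, size, date, sha in inventory("./release"):
    print(rel, size, date, sha[:12])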

4.3 Changes Installed

List all changes incorporated into the software version since the previous version. Identify, as applicable, the TPRs, SCRs, and migration forms associated with each change and the effects, if any, of each change on system operation and on interfaces with other hardware and software. (This section does not apply to the initial software version.)

4.4 Interface Compatibility

List and describe any other systems or CIs affected by the change(s) incorporated in the current version, if applicable.

4.5 Adaptation Data

Identify and reference all unique-to-site data contained in the software version. For software versions after the first, describe changes made to the adaptation data.

4.6 Bibliography of Reference Documents

List by identifying numbers, titles, abbreviations, dates, version numbers, and release numbers (as applicable), all documents that establish the current version of the software CI.

4.7 Installation Instructions

Provide or reference the following information, as applicable:

- Instructions for installing the software version, including instructions for deletion of old versions
- Identification of other changes that have to be installed for this version to be used, including site-unique adaptation data not included in the software version
- Security, privacy, or safety precautions relevant to the installation
- Procedures for determining whether the version has been installed properly
- A point of contact to be consulted if there are problems or questions with the installation
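For illustration only, the following Python sketch shows one way to support the "determine whether the version has been installed properly" item above: comparing the installed files against the checksums recorded in the release inventory (see the sketch in Section 4.2). All names and the expected-checksum format are assumptions for this example.

# Illustrative sketch: verify an installation against expected checksums.
import hashlib, os

def verify_install(install_dir, expected):
    # expected: dict mapping relative path -> SHA-256 hex digest
    problems = []
    for rel, sha in expected.items():
        path = os.path.join(install_dir, rel)
        if not os.path.exists(path):
            problems.append("missing file: " + rel)
        else:
            with open(path, "rb") as f:
                actual = hashlib.sha256(f.read()).hexdigest()
            if actual != sha:
                problems.append("checksum mismatch: " + rel)
    return problems   # an empty list indicates a clean installation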

4.8 Possible Problems and Known Errors

Identify any possible problems or known errors with the software version at the time of release, any steps being taken to resolve the problems or errors, and instructions (either directly or by reference) for recognizing, avoiding, correcting, or otherwise handling each one. The information presented will be appropriate to the intended recipient of the VDD (for example, a user agency may need advice on avoiding errors, a support agency on correcting them).

4.9 Glossary

Include an alphabetical listing of all acronyms, abbreviations, and their meanings as used in this document. Also provide a list of any terms and definitions needed to understand this document.

5.0 APPENDICES

Appendices may be used to provide information published separately for convenience in document maintenance (for example, charts, classified data, etc.). As applicable, each appendix will be referenced in the main body of the document where the data would normally have been provided. Appendices will be lettered alphabetically (A, B, etc.), and the pages will be numbered A-1, A-2, etc.

APPENDIX C-34 POST-IMPLEMENTATION REVIEW


The Post-Implementation Review is used to evaluate the effectiveness of the system development after the system has been in production for a period of time (normally 6 months). The objectives are to determine whether the system does what it is designed to do: Does it support the user as required in an effective and efficient manner? The review should assess how successful the system is in terms of functionality, performance, and cost versus benefits, as well as assess the effectiveness of the life-cycle development activities that produced the system. The review results can be used to strengthen the system as well as system development procedures.

The review is scheduled to follow the release of a system or system revision by an appropriate amount of time to allow determination of the effectiveness of the system. A representative from the functional development group or other member of the major user organization participates in the review. The System Proponent ensures that all documentation and all personnel needed to participate in the review are accessible.

The reviewer and an assigned team collect the information needed for the Post-Implementation Review by interviewing end users and their managers, system administrators, and computer operations personnel. The report is then prepared and provided to the user organization that requested it and to the information systems organization, which may jointly use the findings to initiate other actions. The Post-Implementation Review is a free-form report, and not all sections are relevant or necessary to the final product. A description of the Post-Implementation Review Report is attached.

1.0 INTRODUCTION

1.1 Project Identification

Provide the identifying information associated with the project, including the applicable project control code, system acronym, and system title.

1.2 System Proponent

Provide the name of the System Proponent.

1.3 History of the System

Briefly describe the system's history and predecessor, if any. State the mission needs and information requirements, including how the system is expected to help users.

1.4 Functional System Description and Data Usage

Briefly describe what the system does functionally and how the data are used by the system.

2.0 EVALUATION SUMMARY

The purpose of this section is to provide a summary of the overall adequacy and acceptance of the system.

2.1 General Satisfaction With the System

Describe the users' experience with the implemented system. Comments should address the following:

- The level of user satisfaction
- The strengths of the system, including specific areas of success
- Any problems
- Frequently used features
- Infrequently used features
- Features not used at all
- Suggested improvements

2.2 Current Cost-Benefit Justification

Assess whether the system is paying for itself. Base the assessment on the anticipated benefits and costs projected during the System Concept Development phase and revised during the subsequent phases of the systems development life cycle. This section is intended merely to summarize the costs and benefits; details of the costs and benefits are provided in other sections. Comments should address the following:

- The extent of the benefits and whether they are reported to be less or greater than those projected in the development analysis and functional requirements report
- Whether any difference is permanent or will change over time
- Whether the system is or will be cost-justifiable

2.3 Needed Changes or Enhancements

Gauge the magnitude of effort needed to change or improve the system. Describe the nature and priority of the suggested changes; more detail will be provided in other sections. Comments should address the following:

- The suggested changes
- The scope of the changes
- The resource requirements to effect the changes

3.0 ANALYSIS AND IMPLEMENTATION

The purpose of this section is to gauge the completeness of the functional requirements and implementation according to the study.

3.1 Purpose and Objectives

Evaluate the adequacy of the original definition of purpose and objectives presented in the functional requirements document and whether the objectives were achieved during implementation. Evaluate whether any objectives have changed or should have changed. Comments should address the following:

- The extent to which goals were met
- The level of the objective definition
- The extent to which objectives were met
- Possible changes to the objectives

3.2 Scope

Analyze whether proper limits were established in the feasibility study and whether they were maintained during implementation. Comments should address the following:

- Variations from the scope definition as agreed to in the concept development
- The extent to which the scope was followed
- Any possible future changes to the scope

3.3 Benefits

Analyze whether the benefits anticipated in the concept development and requirements definition analyses were realized. Detail all benefits, quantifiable or nonquantifiable, and any quantifiable resources associated with each. Comments should address the following:

- The adequacy of the benefit definition
- The level of benefits realized
- The anticipated benefits that can be realized
- The reason for the variance between planned and realized benefits

3.4 Development Cost

Determine the adequacy of the development cost estimates and any deviation between the estimated and actual development costs. Comments should address the following:

- The adequacy of the original and subsequent cost estimates
- The actual costs, by type
- The reasons for any difference between estimated and actual costs

3.5 Operating Cost

Analyze the adequacy of the operating cost estimates and any deviation between the estimate and the actual operating costs. Summarize the resources required to operate the system. Comments should address the following:

- The adequacy of the operating estimates
- The actual operating costs
- The difference

3.6 Training

Evaluate whether all levels of user training were adequate and timely. Comments should address the following:

- The timeliness of the training provided
- The adequacy of the training
- The appropriateness of the training
- Identification of additional training needs by job category
- The ability of the personnel to use the training provided

4.0 OUTPUTS

The purpose of this section is to evaluate the adequacy and usefulness of the outputs from the system. Care must be taken to ensure that all reports are evaluated.

4.1 Usefulness

Measure the extent to which the users need the output of the system. Comments should address identification of the level of need, such as the following:

- Utility
  - Absolutely essential
  - Important and highly desirable
  - Interesting--proves what is already known
  - Incomplete--does not provide all the necessary information
  - Unnecessary
- Identification of information/reports needed but not currently generated by the system or unable to be obtained
- Demonstration of the ability to do without the reports
- Alternatives for obtaining the information where improvements can be achieved

4.2 Timeliness

Determine whether output production performance meets user needs. Comments should address the frequency with which output arrives on time, early, and late, and the amount of follow-up needed to obtain the output.

4.3 Data Quality

Assess the need to provide for effective use of shared data to enhance performance and system interoperability. Comments should address data accuracy and data reliability.

5.0 SECURITY

The purpose of this section is to determine whether the system provides adequate security of data and programs. In addition to access security, procedures for backup, recovery, and restart should be reviewed.

5.1 Data Protection

Determine whether the security, backup, recovery, and restart capabilities adequately safeguard data, including master, transaction, and source data. Online systems typically require special techniques (such as transaction logging). Comments should address the following:

- The adequacy of the security, backup, recovery, and restart procedures
- The suggested changes
- The effort required to make the changes

5.2 Disaster Recovery

Determine whether appropriate files, programs, and procedures are established to enable recovery from a disaster resulting in the loss of data. Comments should address the following:

- The adequacy and currency of off-site storage procedures
- The extent to which procedures cover the following:
  - Master data
  - Transaction data
  - Source programs
  - Object programs
  - Documentation (such as systems, operations, and user manuals)
- The results of any adequacy-of-recovery test

5.3 Controls

Evaluate the adequacy of the controls on the database, source documents, transactions, and outputs of the system. Review each area thoroughly for financial controls and file control counts. Comments should address the following:

- The level of controls present in the entire system and on each component (such as transaction, batch, and file)
- The adequacy of the controls, including the strengths and possible areas for improvement
- The amount of resources required, if any, to obtain improvements

5.4 Audit Trails

Review the ability to trace transactions through the system and the tie-in of the system to itself. Comments should address the following:

- The thoroughness of the audit trails
- The level of improvements necessary, if any
- The requirements of audit trails as outlined in the trusted criteria (such as C2 requirements), if any
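For illustration only, the following Python sketch shows the kind of transaction tracing this review calls for: following a single transaction identifier through an audit-trail log to confirm the trail is complete. The log format (one timestamp|transaction|component|event record per line) and all values are assumptions for this example.

# Illustrative sketch: trace one transaction through an audit-trail log.
def trace_transaction(log_lines, txn_id):
    steps = []
    for line in log_lines:
        ts, tid, component, event = line.rstrip("\n").split("|")
        if tid == txn_id:
            steps.append((ts, component, event))
    return sorted(steps)   # chronological path of the transaction

# Hypothetical log records for transaction T100:
log = ["2024-01-02T10:00:01|T100|intake|received",
       "2024-01-02T10:00:03|T100|validator|accepted",
       "2024-01-02T10:00:05|T100|poster|posted"]
for ts, comp, event in trace_transaction(log, "T100"):
    print(ts, comp, event)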

5.5 Allowed Access

Evaluate the adherence to restrictions on access to data. State the desired privacy criteria for the system and then evaluate how the criteria have been followed to this point. Comments should address the following:

- Established privacy criteria
- Recommended privacy criteria
- Adherence to and violations of privacy
- The cost of providing this level of privacy
- The potential effect on individuals if the privacy criteria are not followed

6.0 COMPUTER OPERATIONS

The purpose of this section is to ascertain the current level of operational activities. Although the user point of view is primary to the Post-Implementation Review Report, the computer operations view is also important to investigate.

6.1 Control of Work Flow

Evaluate the user interface with the data processing organization. Investigate the submittal of source material, the receipt of outputs, and any problems getting work in, through, and out of computer operations. Comments should address the following:

- Any problems in accomplishing the work
- The frequency and extent of the problems
- Suggested changes
- The effort required to make the changes

6.2 Scheduling

Determine the ability of computer operations to schedule according to user needs and to complete scheduled tasks. Comments should address the following:

- Any problems in accomplishing the work
- The frequency and extent of the problems
- Suggested changes
- The effort required to make changes

6.3 User Interface

Analyze the usability of the system. The transaction throughput and error rate are included in this analysis. Comments should address the following:

- Volume of data processed (number of transactions)
- Number of errors made
- Frequency of problems with the interface
- Suggested changes
- Effort required to make the changes

6.4 Computer Processing

Analyze computer processing issues and problems. Some areas to review are as follows:

- The correct or incorrect use of forms and offline files
- The adequacy of instructions (such as forms lineup and proper responses on the console)
- The extent of reruns, if any

6.5 Peak Loads

Assess the ability of the system to handle peak loads and to resolve backlogs when they occur. Any offloading that could be helpful should be investigated. Comments should address the following:

- The level of user satisfaction
- The adequacy of the response time (for online systems)
- The effect of delays on online and/or batch systems
- Suggested changes
- The effort required to make the changes

7.0 MAINTENANCE ACTIVITIES

The purpose of this section is to evaluate maintenance activity involving the system.

7.1 Activity Summary

Provide a summary of maintenance activity to date, including the type, number of actions, and scope of changes required. Estimate a projected maintenance workload based on the findings of the review. Discuss the adequacy of maintenance efforts and whether major enhancement or revision is required.

7.2 Maintenance Review

Review completed and pending changes to the system. Provide conclusions regarding the benefits to be achieved by completing recommended changes. Provide conclusions about the amount of maintenance required based on activity that has occurred to date.

7.3 System Maintenance

Discuss system maintenance based on the design, the types of changes required, the documentation, and the knowledge about the system held by both user and technical personnel.

APPENDIX C-35 IN-PROCESS REVIEW REPORT


The purpose of the In-Process Review is to assess the system's performance and user satisfaction. This review process occurs repeatedly to ensure that the system is performing cost-effectively and that it continues to meet the functional needs of the user. The report provides a description of the review process, its focus, and results. The report also may be used to document management approvals regarding further enhancements or development of the system under review. Depending on the timing and focus of the review, it may involve investigation of system response time, database capacity, newer technologies available, business functions, and continued user satisfaction with the system.

1.0 INTRODUCTION

This section provides introductory material for the report. Whenever appropriate, other information may be added.

1.1 Purpose

Describe the purpose of the In-Process Review in this section. Provide the name and identifying information about the system reviewed. Provide the timing of the review to differentiate the In-Process Review Reports created in the life of a system.

1.2 Scope

This section defines the boundaries of the system review. Because this review may address initial production performance and/or continued user satisfaction with the system, describe the specific aspects of the review conducted.

1.3 Project References

This section provides a bibliography of key project references produced for this system.

1.4 Points of Contact

Identify the System Proponent in this section. Provide the name of the responsible organization(s) and titles of the staff that conducted the system review.

1.5 Glossary

Provide a glossary of all terms and abbreviations used in the report that may be unfamiliar to the reader. If it is several pages in length, it may be placed as an appendix.

2.0 REVIEW PROCESS

This section provides an overview of the review process and its approach. This information may differ, depending on whether the system review focused on performance, user satisfaction, or both.

2.1 System Overview

In this section, provide a brief general overview of the system reviewed. Examples of information that would be relevant to this section include the following:

- System name
- Date of initial implementation
- Date of latest modification
- Type of system (such as administrative, financial)
- Type of processing (batch, online, transaction processing)
- Functional requirements traceability matrix
- System diagram and narrative description
- Number of computer programs within the system
- Programming language(s) and database management systems (DBMSs) used
- Processing frequency
- Total monthly processing hours
- System origination (commercial off-the-shelf or Company-developed)
- Testing methodology (test data, live data) for initial system tests
- Testing methodology (test data, live data) for latest modification
- Availability of test results
- Date of last system review, if any
- List of users
- List of issues identified in last system review

Expand or contract this list as necessary to include all important aspects of the system that are relevant to the system review. It is not necessary to provide information on all the items in the list above if they are not relevant to the review.

2.2 Functional System Description and Data Usage

This section briefly describes what the system does functionally and how the data are used by the system.

2.3 Performance Review

This section should address the review of system response, capacity, correctness, and other pertinent performance factors.

2.3.1 System Response. To evaluate the responsiveness of the system, it may be appropriate to use a system monitor on mainframe-based systems. For example, for a transaction processing system, data on the number of times each of the system's programs has been executed during a workday, week, or month should be collected as appropriate. The monitor may also provide data on the average and worst-case delay experienced by the programs and the average and worst-case queue lengths. To evaluate the responsiveness of LAN-based systems, it may be appropriate to place a monitor or protocol analyzer on the LAN.

2.3.2 System Capacity. This section examines the system being reviewed to determine whether any performance limitations result from operating the system near the limits of its capacity. For example, for mainframe computer applications using a DBMS, lack of main memory or selection of inappropriate buffer sizing during system generation could result in excessive disk reads and writes that would slow the applications' response. Similarly, a lack of adequate excess hard disk storage could result in large queues at disk controllers, substantially slowing the actual, observed average disk access time. On LAN-based systems, hosting all applications on a server with only one large disk drive and controller could lead to bottlenecks in performance for LAN-based applications. In addition, there may be simple system capacity considerations, such as an application hosted on a system that has only enough hard disk space available for a limited number of data records.

2.3.3 System Correctness. Depending on the purpose of the review, it may be appropriate to examine the correctness of the system calculations, output, and reports. Presumably, this was done during unit testing and system testing. The intent of examining correctness during the Periodic System Review is to determine whether the system is operating correctly with actual operational data inputs, because the operational data may differ somewhat from the test data. Examples of items to be evaluated include the following (a sketch of a few such checks follows the list):

- Values used for case codes
- Correctness of field definitions
- Values within data fields
- Combinations of data fields
- Calculations
- Missing data
- Extraneous data
- Amounts
- Units
- Logic paths and decisions
- Limits or reasonableness checks
- Signs
- Cross-footing of quantitative data
- Control totals
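For illustration only, the following Python sketch applies a few of the checks listed above (missing data, limits or reasonableness checks, and control totals) to operational records. The field name and limit value are assumptions for this example; an actual review would use the system's real record layouts and tolerances.

# Illustrative sketch: simple correctness checks over operational records.
def check_records(records, amount_limit=100000):
    findings = []
    for i, rec in enumerate(records):
        if rec.get("amount") is None:
            findings.append((i, "missing data: amount"))
        elif not (0 <= rec["amount"] <= amount_limit):
            findings.append((i, "limit check failed: amount"))
    # Control total across all records with usable amounts.
    control_total = sum(r["amount"] for r in records if r.get("amount") is not None)
    return findings, control_total

# Hypothetical records; the third fails the limit check.
recs = [{"amount": 250.0}, {"amount": None}, {"amount": 500000.0}]
print(check_records(recs))
# -> ([(1, 'missing data: amount'), (2, 'limit check failed: amount')], 500250.0)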

If the system maintains an audit trail log of hardware and software failures, examine this log to determine the failure modes of the system.

2.3.4 Other. This section discusses the approach to any performance issues that are not easily categorized under the topics listed in the previous sections.

2.4 User Satisfaction Review

A User Satisfaction Review records the effectiveness, correctness, and ease of use of the system from the users' perspective. If appropriate, this review can be used at any point during the information systems life cycle. Summarize the results of the review.

3.0 FINDINGS

This section describes the major findings, results, or conclusions of the review. The intent is to provide management with information for decision making about the system under review. Rank or prioritize the findings by importance, if applicable. Otherwise, group them logically, as appropriate. The ranking, prioritizing, or grouping facilitates a logical linkage to Section 4.0, Recommendations, which provides recommendations regarding the findings. Provide as much detail as necessary to describe the findings clearly and to support the recommendations. The following list provides some examples of information that might be included in this section:

- What and where short-term problem areas exist (such as missing tapes or misrouted material)
- What and where long-term problem areas exist (such as machine capacity problems)
- References to meetings, interviews, and surveys conducted, with a description of their results or outcomes
- References to supporting statistics or reports

4.0 RECOMMENDATIONS

This section presents the recommendations derived from the findings of the system review. These recommendations should be phrased as proposals for management consideration and approval. Depending on the purpose and scope of the specific system review as defined by company management, it may be appropriate to provide multiple alternative recommendations for the findings. If alternative recommendations are provided, then describe the advantages, disadvantages, costs, tradeoffs, etc. associated with each alternative. Rank, prioritize, or group the recommendations logically, as appropriate. Relate the ranking, prioritization, or grouping of the recommendations to that of the findings in Section 3.0, Findings.

5.0 APPROVALS AND APPENDICES

Reference any management approvals and include any appendices needed to support the In-Process Review Report in this section.

5.1 Approval

Reference or describe the final approval of the In-Process Review Report, which may come from different levels of authority within the organization, depending on the size and importance of the items being reviewed. Thus, complete this section after the initial In-Process Review Report has been presented to management. After management approval of the report, update this section. Also update this section to provide an annotation of the recommendations or course of action selected by management, if appropriate.

5.2 Appendices

In this section, reference any additional items necessary to support the system review from other documents, or add to the appendices, as appropriate.

APPENDIX C-36 USER SATISFACTION REVIEW


1.0 INTRODUCTION

The User Satisfaction Review Survey is used to gather the data needed to analyze current user satisfaction with the performance capabilities of an existing application. The survey is administered annually, or as needed. The User Satisfaction Review outline (attached) illustrates this form.

2.0 ROLES AND RESPONSIBILITIES

The following are the roles and responsibilities of team members in administering the User Satisfaction Review:

- The Project Manager has primary responsibility for planning, scheduling, and conducting the user satisfaction review.
- The Quality Assurance organization provides major assistance in planning the review and in evaluating the results.
- The IRM Manager and the System Proponent are responsible for reviewing the results of the survey.
- Users are responsible for completing and returning the survey forms accurately and on time.

3.0 PROCESS

Comply with the following process to distribute and complete the forms:

The Project Manager or designated assistant completes the following items:

- Name of system--The standard full name of the application, including version and release numbers
- Data processing identification number--The Configuration Management, Configuration Item Identification number for the application
- Type of system--The business purpose or function served by the application, and whether mainframe or client/server

- Part of system to be evaluated--The standard full name of the component or subsystem being evaluated
- Name--The full name of the user who is responsible for completing the evaluation
- Date--The date the form is due back to IRM
- Title--The title of the user who is responsible for completing the evaluation
- Organization--The name of the organization of the user who is responsible for completing the evaluation
- Phone number/address--The phone number and address of the user who is responsible for completing the evaluation

The user identified in the Name field completes the following items:

- Extent of your knowledge about the system--The percentage of the entire system you have a) studied documentation for; b) used; and c) written supplemental documentation about (including detailed problem reports)
- The purposes for which you use the system--Check "Yes" to all that apply, and "No" to those that do not; list the "Other" uses in the space available
- The importance of the system in your office environment--A number from 1 to 10, where 1 means not important at all, and 10 means very important
- The ease of understanding of the system--A number from 1 to 10, where 1 means the system is difficult to use (labels, toolbar icons, and helptext instructions are confusing, misleading, unclear, and not intuitive, and/or you are frequently required to repeat your work) and 10 means the system is very easy to understand (labels, toolbar icons, and helptext instructions are clear, and the use of the system is nearly intuitive)
- Can the system be used as is?--YES or NO field; checking NO means that there needs to be some correction, and/or further identification and/or analysis of problems, before you are willing to resume use of the system
- In your judgment, is the system--YES or NO field for each attribute listed; and further details and examples for the NO answers
- In your opinion, should the system--YES or NO field for each attribute listed; and further details and examples for the YES answers
- If you maintain manual records--Brief explanation, in the space available, of why it is necessary to maintain manual records to supplement the computer-processed information

- Does the system duplicate other information--YES or NO field; a brief explanation of a YES answer, indicating what information is duplicated and where it resides
- Can you readily obtain the information from other sources--YES or NO field; if YES, a list of the information items and the sources
- Do you supply the input data--YES or NO field
- When you receive output, do you check it for quality--YES or NO field; if NO, an identification of the person or group that performs the quality check
- Is the system ever rerun--YES or NO field; if YES, a description of the monthly frequency, the reason for the rerun, and the procedure used to validate the correctness of the rerun output
- If you have had/were to have problems with the system--A description of with whom you did, or would, discuss these problems, particularly representatives of the Immigration and Naturalization Service (INS) OIRM
- Do you maintain correspondence with INS OIRM--YES or NO field; if YES, an attachment of copies of recent correspondence
- Did anyone in your organization help design the system--YES or NO field
- Could you effectively perform your duties--YES or NO field
- Does the system save any clerical effort--YES or NO field; and an explanation of your answer either way
- Can the system and its outputs be improved--YES or NO field; and an explanation of your answer either way
- How often do you use the system--YES or NO field for each choice; an explanation of your answer if you select YES for Other; for each YES answer needing more explanation, add an explanation in the space provided

User Satisfaction Review Outline

This review is designed to obtain user feedback on information systems. Feedback gathered in this review can help to determine whether information systems are accurate and reliable.

System Identification

1. Name of system
2. Data processing identification number, if any
3. Type of system

4. Part of system to be evaluated

User Identification

6. Name
7. Date
8. Title
9. Organization
10. Phone number/address

11. What is the extent of your knowledge about the system?

12. For what purpose do you use the system?          YES    NO
    - Authorize changes to the system                 __     __
    - Operate computer terminal                       __     __
    - Maintain data controls                          __     __
    - Design/program applications                     __     __
    - Other (explain)                                 __     __

13. In relation to the work of your office environment, estimate the importance of the system on a scale from 1 (not important) to 10 (very important).

14. State the ease of understanding the system on a scale from 1 (difficult) to 10 (very easy to understand).

15. Can the system be used as is, without correction, further identification, or analysis?    YES __    NO __

16. In your judgment, is the system:                 YES    NO
    - Accurate and reliable?                          __     __
    - Available when needed?                          __     __
    - Current and up-to-date?                         __     __
    - Useful?                                         __     __

For each "No" answer, please explain below, and provide examples. 17. In your opinion, should the YES system: - Provide more data? __ - Provide less data? __ - Be combined with other output __ products? - Be considered obsolete? __ - Be improved to make your job__ NO __ __ __ __ __

easier? For each "yes" answer, please explain below. 18. If you maintain manual records to supplement computer-processed information, briefly explain why. 19. Does the system duplicate any other information youYES receive? If "yes," briefly explain. 20. Can you readily obtain, from other sources, theYES information in the system? If "yes," list the sources. 21. Do you supply the input YES data for this system? 22. When you receive output, do YES you check it for quality? NO NO NO NO

If "no," please identify the person or group performing this function.. 23. Is the system ever rerun? YES - How frequently? __ - Why were the reruns necessary? __ - How do you make sure that the __ rerun material is correct? NO __ __ __

24. If you have had/were to have problems with this system, with whom did/would you discuss them?

25. Do you maintain correspondence with INS OIRM or other user organizations concerning the system?    YES __    NO __
    If "yes," attach copies of recent correspondence.

26. Did anyone in your organization help design the system?    YES __    NO __

27. Could you effectively perform your duties:       YES    NO
    - Without this system?                            __     __
    - If the system output were produced less often?  __     __

28. Does the system save any clerical effort?    YES __    NO __
    Explain.

29. Can this system and its outputs be improved to make your job easier?    YES __    NO __
    Explain.

30. How often do you use this system?                YES    NO
    - Daily?                                          __     __
    - Weekly?                                         __     __
    - Monthly?                                        __     __
    - Annually?                                       __     __
    - Never?                                          __     __
    - Other?                                          __     __

For each "yes" answer, please explain below.
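For illustration only, the following Python sketch tallies completed survey forms into the kind of summary figures the Project Manager and Quality Assurance organization might review. The response keys mirror items 13 through 15 of the outline above and are assumptions for this example; this SDLC does not prescribe an analysis tool.

# Illustrative sketch: summarize User Satisfaction Review responses.
def tally(responses):
    n = len(responses)
    avg_importance = sum(r["importance"] for r in responses) / n   # item 13
    avg_ease = sum(r["ease"] for r in responses) / n               # item 14
    usable = sum(1 for r in responses if r["usable_as_is"]) / n    # item 15
    return {"responses": n,
            "avg_importance": avg_importance,
            "avg_ease": avg_ease,
            "pct_usable_as_is": 100 * usable}

# Hypothetical completed forms:
forms = [{"importance": 9, "ease": 6, "usable_as_is": True},
         {"importance": 7, "ease": 4, "usable_as_is": False}]
print(tally(forms))
# -> {'responses': 2, 'avg_importance': 8.0, 'avg_ease': 5.0, 'pct_usable_as_is': 50.0}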

APPENDIX C-37 DISPOSITION PLAN


The Disposition Plan is the most significant deliverable in the disposition of the information system, and the plan will vary according to system and Service requirements. The objectives of the plan are to end the operation of the system in a planned, orderly manner and to ensure that system components and data are properly archived or incorporated into other systems. At the end of this task, the system will no longer exist as an independent entity. The completion of the system's life cycle is carefully planned and documented to avoid disruption of the organizations using the system or of the operation of other systems that will use the data and/or software of the present system.

The Disposition Plan needs to be an extension of the Records Management function. Records Management--what is kept, what is a legal "record," retention periods, etc.--is a topic beyond the scope of this SDLC. The software, hardware, and data of the current system are disposed of in accordance with organization needs and pertinent laws and regulations. Software or data of the system may be transferred to other existing systems, migrated to an entirely new system, or archived for future use. Hardware is made available for future use, added to surplus, or discarded. In conducting the disposition task, the following items should be considered:

- All known users should be informed of the decision to terminate operation of the system before the actual termination date.
- Although the current system may be terminated, in many cases the data will continue to be used through other systems. The specific processing logic used to transfer the data to another system is developed as part of the data conversion planning for that system.
- In some instances, software may be transferred to a replacement system. For example, a component of the current system may become a component of the replacement system without significant rewriting of programs.
- Effective reactivation of the system in the future will depend heavily on having complete documentation. It is generally advisable to archive all documentation, including the life-cycle products generated during the earliest tasks of the life cycle as well as the documentation for users and for operation and maintenance personnel.

The Disposition Plan addresses how the various components of the system are handled at the completion of operations, including software, data, hardware, communications, and documentation. The plan also notes any future access to the system. The plan is led/performed by the Project Manager; supported by the records management staff, the project team, and the functional staff; and reviewed by the QA manager. Other tasks include the following:

- Notify all known users of the system of the planned date after which the system will no longer be available.
- Work with the FOIA/PA representative to process any Federal Register notice regarding system-of-records notification.
- Copy data to be archived onto permanent storage media, and store the media in a location designated by the Disposition Plan (see the sketch following this list).
- Work with the project management team for other systems to effect a smooth transfer of data from the current system to these systems.
- Copy software onto permanent storage media, and store the media in a location designated in the Disposition Plan. (Software to be stored may include communications and systems software as well as application software.)
- Work with the project team for other systems to ensure effective migration of the current system software to be used by these systems.
- Store other life-cycle products, including system documentation, in archive locations designated by the Disposition Plan.
- Dispose of equipment used exclusively by this system in accordance with the Disposition Plan (refer to excess procedures).
- Complete and update the Disposition Plan to reflect the actual disposition of data, software, and hardware.
- Plan for the shutdown of the project, including the reassignment of project staff, the storage of project records, and the release of project facilities.
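For illustration only, the following Python sketch shows one way to carry out the archiving tasks above: copying files to the designated archive location and writing a checksum manifest so that later retrieval (for example, a future reactivation) can be verified. The paths and manifest format are assumptions for this example; the actual media and locations are designated by the Disposition Plan.

# Illustrative sketch: archive files with a verification manifest.
import hashlib, os, shutil

def archive(source_dir, archive_dir):
    os.makedirs(archive_dir, exist_ok=True)
    manifest = []
    for name in sorted(os.listdir(source_dir)):   # source_dir is assumed to exist
        src = os.path.join(source_dir, name)
        if os.path.isfile(src):
            shutil.copy2(src, os.path.join(archive_dir, name))  # preserves dates
            with open(src, "rb") as f:
                sha = hashlib.sha256(f.read()).hexdigest()
            manifest.append("%s %s" % (name, sha))
    # Record checksums alongside the archived files for later verification.
    with open(os.path.join(archive_dir, "MANIFEST.sha256"), "w") as f:
        f.write("\n".join(manifest) + "\n")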

1.0 INTRODUCTION

This section provides a brief description of introductory material.

1.1 Purpose and Scope

This section describes the purpose and scope of the Disposition Plan. Reference the information system name and provide identifying information about the system undergoing disposition.

1.2 Points of Contact

This section identifies the System Proponent. Provide the name of the responsible organization and staff (and alternates, if appropriate) who serve as points of contact for the system disposition. Include telephone numbers of key staff and organizations.

1.3 Project References

This section provides a bibliography of key project references and deliverables that have been produced before this point in the project development. These documents may have been produced in a previous development life cycle that resulted in the initial version of the system now undergoing disposition or may have been produced in subsequent enhancement efforts, as appropriate.

1.4 Glossary

This section contains a glossary of all terms and abbreviations used in the plan. If it is several pages in length, it may be placed in an appendix.

2.0 SYSTEM DISPOSITION

2.1 Notifications

This section describes the plan for notifying known users of the system being shut down, as well as other affected parties, such as those responsible for other, interfacing systems and operations staff members involved in running the system.

2.2 Data Disposition

This section describes the plan for archiving, deleting, or transferring to other systems the data files and related documentation in the system being shut down.

2.3 Software Disposition

This section describes the plan for archiving, deleting, or transferring to other systems the software library files and related documentation in the system being shut down.

2.4 System Documentation Disposition

This section describes the plan for archiving, deleting, or transferring to other systems the hardcopy and softcopy systems and user documentation for the system being shut down.

2.5 Equipment Disposition

This section describes the plan for archiving, deleting, or transferring to other systems the hardware and other equipment used by the system being shut down.

3.0 PROJECT CLOSEDOWN

3.1 Project Staff

This section describes the plan for notifying project team members of the shutdown of the system and for the transfer of these team members to other projects.

3.2 Project Records

This section describes the plan for archiving, deleting, or transferring to other projects the records of project activity for the project that has been maintaining the system being shut down.

3.3 Facilities

This section describes the plan for transferring or disposing of facilities used by the project staff for the system being shut down.

APPENDIX C-38 POST-TERMINATION REVIEW REPORT


The Post-Termination Review shall be performed after the end of the Disposition Phase. This phase-end review shall be conducted within 6 months after disposition of the system. The Post-Termination Review Report documents the lessons learned from the shutdown and archiving of the terminated system.

The Post-Termination Review Report details the findings of the Disposition Phase review. It can be used to document and ensure that all functions have been performed to dispose of the system. This report can provide a checklist of activities completed to dispose of the system. It should include details of where to find all products and documentation that have been archived.

1.0 INTRODUCTION

This section provides a brief description of introductory material.

1.1 Purpose and Scope

This section describes the purpose and scope of the Post-Termination Review Report. Reference the information system name and provide identifying information about the system that underwent disposition.

1.2 Points of Contact

This section identifies the System Proponent. Provide the name of the responsible organization and staff (and alternates, if appropriate) who serve as points of contact for the system disposition. Include telephone numbers of key staff and organizations.

1.3 Project References

This section provides a bibliography of key project references and deliverables that have been produced before this point in the project development. These documents may have been produced in a previous development life cycle that resulted in the initial version of the system now undergoing disposition or may have been produced in subsequent enhancement efforts, as appropriate.

1.4 Glossary

This section contains a glossary of all terms and abbreviations used in the report. If it is several pages in length, it may be placed in an appendix.

2.0 LESSONS LEARNED

2.1 Data Disposition

This section describes what happened to the data from the old system. Explain any problems or mishaps that occurred during this phase.

2.2 Software Disposition

This section describes what happened to the software from the old system. Explain any lessons learned from performing this task during the Disposition Phase.

2.3 Equipment Disposition

This section describes what happened to the equipment from the old system. Explain where it is located or, if it was excessed, the date it was excessed.

3.0 ARCHIVING

This section explains what happened to the old system. It could be a check-off sheet or in report format.

3.1 Data

This section explains where the old data are stored. If the old data were incorporated into a new system, state that here.

3.2 Software

This section explains where the old software is located.

3.3 Hardware

This section explains where the old hardware is located. If the equipment has been excessed, provide the date it was excessed.
