
1. Distinguish between a software product and a software process. Define software engineering. What are the characteristics of a software product?

The process that deals with the technical and management issues of software development
is called a software process. Clearly, many different types of activities need to be performed
to develop software. If the process is weak, the end product will undoubtedly suffer, but an
obsessive overreliance on process is also dangerous. We have embraced structured
programming languages (product) followed by structured analysis methods (process)
followed by data encapsulation (product) followed by the current emphasis on the Software
Engineering Institute's Software Development Capability Maturity Model (process). The
observations we can make on the artifacts of software and its development demonstrate a
fundamental duality between product and process. You can never derive or understand the
full artifact, its context, use, meaning, and worth if you view it as only a process or only a
product.

Def: A definition proposed by Fritz Bauer [NAU69] is

[Software engineering is] the establishment and use of sound engineering principles in order
to obtain economically software that is reliable and works efficiently on real machines.

The IEEE [IEE93] offers a more comprehensive definition:

Software Engineering: (1) The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is, the application of engineering to software. (2) The study of approaches as in (1).

Software Characteristics:

Software is (1) instructions (computer programs) that when executed provide desired
function and performance, (2) data structures that enable the programs to adequately
manipulate information, and (3) documents that describe the operation and use of the
programs.

Software is a logical rather than a physical system element. Therefore, software has characteristics that are considerably different from those of hardware:

1. Software is developed or engineered; it is not manufactured in the classical sense.

Software costs are concentrated in engineering. This means that software projects cannot be managed as if they were manufacturing projects.

2. Software doesn't "wear out."

3. Although the industry is moving toward component-based assembly, most software continues to be custom built.

2. What is the role of software architecture in a software system? What are the different
views of architecture, describe each of them in detail? How are the disciplines of
classical architecture and the software architecture similar? How do they differ?

Before discussing the role of software architecture in software systems, we first need to be clear about what software architecture is. At the top level, an architecture is a design of a system that gives a very high-level view of the parts of the system and how they are related to form the whole. That is, the architecture partitions the system into logical parts such that each part can be comprehended independently, and then describes the system in terms of these parts and the relationships between them. Any complex system can be partitioned in this way into simpler logical parts. The formal definition is: "The software architecture of a system is the structure or structures of the system, which comprise software elements, the externally visible properties of those elements, and the relationships among them."

Some of the important roles of software architecture are described in the subsequent paragraphs.

1. Understanding and communication:

An architecture description primarily serves to communicate the architecture to its various stakeholders: the users who will use the system, the clients who commissioned it, the builders who will build it, and, of course, the architects. Through the architecture description, the stakeholders gain knowledge of the macro properties of the system, and the description sometimes serves as the basis for agreement and negotiation when conflicts arise among stakeholders. Software architecture thus facilitates communication: by partitioning the system into parts and presenting it at a higher level of abstraction, as subsystems and their interactions, detailed complexity is hidden. This not only simplifies the design of new systems but also helps in understanding existing systems by providing high-level views of their structure.

2. Reuse: Architecture descriptions can help software reuse. Reuse is considered one of
the main techniques by which productivity can be improved, thereby reducing the
cost of software. The architecture has to be chosen in a manner such that the
components which have to be reused can fit properly and together with other
components that may be developed, they provide the features that are needed.
Architecture also facilitates reuse among products that are similar and building
product families such that the common parts of these different but similar products
can be reused.

3. Construction and Evolution: As architecture partitions the system into parts, the partitioning it provides can naturally be used for constructing the system, which also requires that the system be broken into parts so that different teams (or individuals) can work on different parts separately. By definition, the parts specified in an architecture are relatively independent (dependence comes only through the relationships). Not only does the architecture guide development, it also establishes constraints: the system should be constructed in a manner that preserves the structures chosen during architecture creation. After delivery, a software system also evolves with time. During evolution, new features are often added to the system. The architecture can help in deciding where to add new features with minimum complexity and effort, and what the impact on the rest of the system might be.

4. Analysis: It is highly desirable if some important properties of the behaviour of the system can be determined before the system is actually built. This allows the designers to consider alternatives and select the one that best suits the needs. It is possible to predict features or analyse properties of the system being built from its architecture. Such analysis also helps in meeting the quality and reliability goals for the software.
Different Views of Software Architecture:

A view describes a structure of the system. Many types of views have been proposed, but most views belong to one of the following three types:

a) Module

b) Component and connector

c) Allocation

Module: In a module view, the system is viewed as a collection of code units, each implementing some part of the system functionality. That is, the main elements in this view are modules. These views are code-based and do not explicitly represent any runtime structure of the system. Examples of modules are a package, a class, a procedure, a method, a collection of functions, and a collection of classes. The relationships between these modules are also code-based and depend on how the code of one module interacts with another. Examples of relationships in this view are "is a part of" (module B is a part of module A), "uses or depends on" (module A uses services of module B to perform its own functions, and the correctness of module A depends on the correctness of module B), and "generalization or specialization" (module B is a generalization of module A).
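The "uses or depends on" relation can be made concrete with a small sketch: modules and their dependencies form a directed graph, from which the full set of modules a given module's correctness depends on can be computed. The module names here are invented for illustration:

```python
# "uses/depends on" relationships between modules, as adjacency lists.
# Module names are hypothetical, chosen only for the example.
uses = {
    "ui": ["billing", "auth"],
    "billing": ["db"],
    "auth": ["db"],
    "db": [],
}

def transitive_deps(module, graph):
    """All modules whose correctness the given module depends on."""
    seen = set()
    stack = list(graph[module])
    while stack:
        m = stack.pop()
        if m not in seen:
            seen.add(m)
            stack.extend(graph[m])
    return seen

print(sorted(transitive_deps("ui", uses)))   # ['auth', 'billing', 'db']
```

A traversal like this is exactly what makes the module view useful for impact analysis: a change to `db` potentially affects every module that transitively depends on it.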

Component and Connector:

In a C&C view, the system is viewed as a collection of runtime entities called components. That is, a component is a unit that has an identity in the executing system. Objects (not classes), collections of objects, and processes are examples of components. While executing, components need to interact with one another to provide the system's services; connectors provide the means for this interaction. Examples of connectors are pipes and sockets; shared data can also act as a connector.
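As a minimal illustration of the C&C view, the sketch below connects two runtime components (threads) through a pipe-like connector, modeled here with a thread-safe queue. The component roles and data are invented for the example:

```python
import queue
import threading

def producer(q):
    # Component 1: a runtime entity that emits data into the connector.
    for i in range(3):
        q.put(i * i)
    q.put(None)  # sentinel marking the end of the stream

def consumer(q, results):
    # Component 2: consumes data from the connector until the sentinel.
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item)

connector = queue.Queue()   # the connector: a shared, synchronized channel
results = []
t1 = threading.Thread(target=producer, args=(connector,))
t2 = threading.Thread(target=consumer, args=(connector, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)   # [0, 1, 4]
```

Note that neither component knows about the other; each only knows the connector, which is what gives C&C structures their flexibility.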

Allocation View:

An allocation view focuses on how the different software units are allocated to resources like hardware, file systems, and people. That is, an allocation view specifies the relationship between software elements and the elements of the environment in which the software system is executed. These views expose structural properties such as which process runs on which processor and how the system's files are organized in a file system.

Classical Architecture vs. Software Architecture:

The two disciplines are similar in intent: both design the overall structure of a complex artifact before it is built, both partition the whole into parts and define the relationships among those parts, both must balance the concerns of multiple stakeholders (clients, builders, users), and both communicate the design through multiple views or blueprints rather than a single description. They differ in the medium. A building is physical, so its design is constrained by materials and the laws of physics, and once built it changes little over its life. Software is intangible and malleable: its constraints are logical rather than physical, it is comparatively cheap to modify, and it is expected to evolve continuously after delivery, so a software architecture must be designed with change and evolution in mind.

3. Is it reasonable to assume that if software is easy to test, it will be easy to maintain?


Suppose that putting in extra effort in design and coding you increase the cost of these
phases by 15% but you reduce the cost of maintenance by 5%. Will you decide to put
in the extra effort and why?

No, it is not reasonable to assume that if software is easy to test, it will be easy to maintain.

Generally, the software produced is not easily maintainable because the development
process used for developing software does not have maintainability as a clear goal.

One possible reason for this is that the developers frequently develop the software, install it,
and then hand it over to a different set of people called maintainers.

Usually the maintainers don’t belong to the organization that developed the software.

In such a situation, clearly there is no incentive for the developers to develop maintainable
software, as they don’t have to put in the effort for maintenance.

This situation can be alleviated if the developers are made responsible for maintenance, at
least for a couple of years after the delivery of software.

Suppose that by putting extra effort in design and coding you increase the cost of these
phases by 15%, but you reduce the cost of maintenance by 5%. Will you decide to put the
extra effort? Why?

A software process consists of phases; a development process generally includes requirements, design, coding, and testing phases. Looking at how development effort is typically distributed across these phases, we can make two observations.

First, design and coding consume only a small percentage of the development effort. This goes against the common naive notion that developing software is largely a matter of writing programs and that programming is the major activity.

Second, testing consumes the most resources during development. Underestimating the testing effort often causes planners to allocate insufficient resources for testing, which in turn results in unreliable software or schedule slippage.

In the life of software, the maintenance costs generally exceed the development costs.
Clearly, if we want to reduce the overall cost of software or achieve "global" optimality in
terms of cost rather than "local" optimality in terms of development cost only, the goal of
development should be to reduce the maintenance effort. That is, one of the important
objectives of the development project should be to produce software that is easy to
maintain and the process used should ensure this maintainability.

Both testing and maintenance depend heavily on the quality of design and code, and
these costs can be considerably reduced if the software is designed and coded to make
testing and maintenance easier. Hence, during the early phases of the development
process the prime issues should be "can it be easily tested" and "can it be easily modified".

So it is reasonable to put extra effort into design and coding: a 15% increase applies only to the small design-and-coding fraction of the total cost, while a 5% reduction applies to maintenance, the largest cost component, so the savings will typically outweigh the extra effort.
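The 15%-versus-5% trade-off can be checked with a rough calculation. The specific figures below (design and coding at 35% of development effort, and maintenance at 1.5 times development cost) are illustrative assumptions, not data from the text, but under any distribution where maintenance dominates, the conclusion is the same:

```python
# Illustrative cost model; all figures are assumptions for the example.
development = 100.0          # total development cost, in arbitrary units
design_and_coding = 35.0     # assumed share of development on design + coding
maintenance = 150.0          # maintenance often exceeds development cost

extra_effort = 0.15 * design_and_coding   # 15% increase in design/coding cost
savings = 0.05 * maintenance              # 5% reduction in maintenance cost

net_benefit = savings - extra_effort
print(extra_effort, savings, net_benefit)   # 5.25 7.5 2.25
```

Under these assumptions, the extra effort costs 5.25 units while the maintenance savings are 7.5 units, so the investment pays off.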

4. Suppose a program for solving a problem costs X, and industrial-level software for solving that problem costs 10X. Where do you think this extra 9X cost is spent? Suggest a possible breakdown of this extra cost with your reasons and justifications.

A program for solving a problem costs X, and industrial-level software for solving that problem costs 10X. To find out where this extra 9X cost is spent, we first need to understand the difference between a "problem-solving program" and "industrial-level software".

Industrial-level software is very different from a program in terms of quality (including usability, reliability, robustness, portability, etc.). High quality requires heavy testing, which consumes 30-50% of the total development effort. The company makes a large investment in resources, both manpower and money. The developer is a team of many people, for whom bugs are not tolerable, the user interface is important, and documentation must be prepared. In a simple program written to solve a problem, by contrast, quality (usability, reliability, robustness, portability, etc.) is not important: the developer is a single person who is often the only user, bugs are tolerable, the UI is unimportant, no documentation is prepared, and no significant investment is required.

Therefore, industrial-strength software is very expensive primarily because software development is extremely labor-intensive.

What is the cost of software?

• Software costs often dominate computer system costs.

• Software costs more to maintain than to develop from scratch. The maintenance costs for long-life systems may be several times their development costs.

What are the costs of software engineering?

• Roughly 60% of costs are development costs and 40% are testing costs. Evolution costs often exceed development costs in custom software.

• Costs vary depending on the type and requirements of the system under development.

• The distribution of costs depends on the development model used.

Example cost distributions under different development models (each totals 10X):

1. Waterfall model:

specification – 1.5X
design – 2.5X
development – 2X
integration and testing – 4X

2. Iterative development model:

specification – 1X
iterative development – 6X
system testing – 3X

3. Component-based software engineering:

specification – 2X
development – 3X
integration and testing – 5X

4. Development and evolution costs for long-lifetime systems:

system development – 2.5X
system evolution – 7.5X
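Since each of these breakdowns sums to the same total of 10X, they can be read directly as percentage distributions. A small sketch that checks the totals and converts each phase cost into a share:

```python
# Cost breakdowns from the examples above, in units of X.
models = {
    "waterfall": {"specification": 1.5, "design": 2.5,
                  "development": 2.0, "integration and testing": 4.0},
    "iterative": {"specification": 1.0, "iterative development": 6.0,
                  "system testing": 3.0},
    "component-based": {"specification": 2.0, "development": 3.0,
                        "integration and testing": 5.0},
    "long-lifetime": {"system development": 2.5, "system evolution": 7.5},
}

for name, phases in models.items():
    total = sum(phases.values())          # 10.0 for every model
    shares = {p: f"{100 * c / total:.0f}%" for p, c in phases.items()}
    print(name, shares)
```

For example, integration and testing is 40% of the waterfall total, and system evolution is 75% of the long-lifetime total, consistent with the observation that testing and evolution dominate costs.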

Additional background on where software costs come from:

To get an idea of the costs involved, let us consider the current state of practice in the
industry. Lines of code (LOC) or thousands of lines of code (KLOC) delivered is by far the
most commonly used measure of software size in the industry. As the main cost of producing
software is the manpower employed, the cost of developing software is generally measured
in terms of person-months of effort spent in development. And productivity is frequently
measured in the industry in terms of LOC (or KLOC) per person-month.

Let us look at the costs involved:

• Productivity = 500 LOC per person-month (PM)

• Cost to the company = $10K per PM

• Cost per LOC = $20, i.e., each line of delivered code costs about $20

• A simple application for a business may have 20 KLOC to 50 KLOC

• Cost = $400K to $1 million

• Such an application can easily run on $10K-$20K worth of hardware

• So hardware costs in an IT solution are small compared to software costs.
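The arithmetic behind these figures is simple enough to script; the productivity, cost-per-PM, and 20-50 KLOC application sizes are taken from the bullets above:

```python
# Back-of-the-envelope software cost model using the figures above.
productivity = 500        # delivered LOC per person-month (PM)
cost_per_pm = 10_000      # dollars per person-month

cost_per_loc = cost_per_pm / productivity
print(f"cost per LOC: ${cost_per_loc:.0f}")          # $20

for kloc in (20, 50):
    app_cost = kloc * 1000 * cost_per_loc
    print(f"{kloc} KLOC application: ${app_cost:,.0f}")
```

At $20 per line, a 20 KLOC application costs $400,000 and a 50 KLOC application costs $1,000,000, dwarfing the $10K-$20K hardware it runs on.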


The HW/SW cost ratio for a computer system has reversed since the early years:

• In the 1950s, HW:SW :: 80:20

• In the 1980s, HW:SW :: 20:80

• So software is the expensive component

• Optimizing hardware is no longer very important

• It is more important to optimize software

As these ratios show, in the early days the cost of hardware used to dominate the system cost. As the cost of hardware has lessened over the years and continues to decline, and as the power of hardware doubles every two years or so (Moore's law), enabling larger software systems to be run on it, the cost of software has become the dominant factor in systems.

5. What is the relationship between a process model, process specification, and process
for a project? What are the key outputs in a development project that follows the
prototyping model? Write an ETVX specification for this process model?

6. Give a brief description of software prototyping and briefly discuss the various
prototyping techniques? Write an example; illustrate the use of prototyping as a
method for problem analysis? Discuss its advantages and disadvantages?

The basic idea of the prototyping model is that instead of freezing the requirements before any design or coding can proceed, a prototype, i.e., an incomplete version of the software, is built to help understand the requirements. This prototype is developed based on the currently known requirements. Development of the prototype obviously undergoes design, coding, and testing, but each of these phases is not done very formally or thoroughly.

By using the prototype, the client can get an actual feel of the system, because interacting with the prototype enables the client to better understand the requirements of the desired system. The prototyping model thereby overcomes the requirements-freezing limitation of the waterfall model.

The process of prototyping involves the following steps:

1. Identify basic requirements:

Determine the basic requirements, including the input and output information desired. Details, such as security, can typically be ignored.

2. Develop the initial prototype:

An initial prototype is developed that often includes only the user interfaces.

3. Review:

The customers, including end users, examine the prototype and provide feedback on additions or changes.

4. Revise and enhance the prototype:

Using the feedback, both the specifications and the prototype can be improved. Negotiation about what is within the scope of the contract/product may be necessary. If changes are introduced, a repeat of steps 3 and 4 may be needed.
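Steps 3 and 4 above form a feedback loop that repeats until reviewers request no further changes. A toy sketch of that loop (the screen names and the merge rule are invented for illustration; in reality the "revision" is human design work, not a dictionary merge):

```python
def refine_prototype(prototype, rounds_of_feedback):
    """Repeat review (step 3) and revision (step 4) until feedback is empty."""
    for feedback in rounds_of_feedback:
        if not feedback:          # reviewers request no changes: loop ends
            break
        prototype = {**prototype, **feedback}   # revise: merge change requests
    return prototype

# Each dict stands for one round of review comments on UI screens.
initial = {"login screen": "basic"}
rounds = [
    {"login screen": "adds password reset", "report page": "tabular"},
    {"report page": "tabular with export"},
    {},   # third review produces no change requests
]
final = refine_prototype(initial, rounds)
print(final)
```

The loop's exit condition mirrors the real process: iteration stops when the cost of further feedback rounds outweighs the improvement they yield.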

Different prototyping techniques:

1. Throwaway prototyping

Also called closed-ended prototyping. Throwaway or rapid prototyping refers to the creation of a model that will eventually be discarded rather than becoming part of the final delivered software. After preliminary requirements gathering is accomplished, a simple working model of the system is constructed to visually show the users what their requirements may look like when implemented in a finished system.

Throwaway prototyping involves creating a working model of various parts of the system at a very early stage, after a relatively short investigation. The method used in building it is usually quite informal, the most important factor being the speed with which the model is provided. The model then becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype model is thrown away, and the system is formally developed based on the identified requirements.

The steps in this approach are:

1. Write preliminary requirements

2. Design the prototype

3. User experiences/uses the prototype, specifies new requirements

4. Repeat if necessary

5. Write the final requirements

6. Develop the real product

2. Evolutionary prototyping

Evolutionary prototyping (also known as breadboard prototyping) is quite different from throwaway prototyping. The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason is that the evolutionary prototype, when built, forms the heart of the new system, on which the improvements and further requirements will be built. With evolutionary prototyping, developers can focus on developing the parts of the system that they understand well instead of working on the whole system at once.

To minimize risk, the developer does not implement poorly understood features. The partial
system is sent to customer sites. As users work with the system, they detect opportunities for
new features and give requests for these features to developers. Developers then take these
enhancement requests along with their own and use sound configuration-management
practices to change the software-requirements specification, update the design, recode
and retest.

3. Incremental prototyping

The final product is built as separate prototypes. At the end, the separate prototypes are merged into an overall design.

4. Extreme prototyping

Extreme prototyping as a development process is used especially for developing web applications. Basically, it breaks down web development into three phases, each one based on the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In the second phase, the screens are programmed and fully functional using a simulated services layer. In the third phase, the services are implemented. The process is called extreme prototyping to draw attention to the second phase, where a fully functional UI is developed with very little regard to the services other than their contract.

Illustration of prototyping as a method of problem analysis:

The development of the prototype typically starts when the preliminary version of the
requirements specification document has been developed. At this stage, there is a
reasonable understanding of the system and its needs and which needs are unclear or likely
to change. After the prototype has been developed, the end users and clients are given an
opportunity to use the prototype and play with it. Based on their experience, they provide
feedback to the developers regarding the prototype: what is correct, what needs to be
modified, what is missing, what is not needed, etc. Based on the feedback, the prototype is
modified to incorporate some of the suggested changes that can be done easily, and then
the users and the clients are again allowed to use the system. This cycle repeats until, in the
judgment of the prototypers and analysts, the benefit from further changing the system and
obtaining feedback is outweighed by the cost and time involved in making the changes
and obtaining the feedback. Based on the feedback, the initial requirements are modified
to produce the final requirements specification, which is then used to develop the
production quality system.

Advantages of prototyping

Reduced time and costs: Prototyping can improve the quality of requirements and
specifications provided to developers. Because changes cost exponentially more to
implement as they are detected later in development, the early determination of what the
user really wants can result in faster and less expensive software.

Improved and increased user involvement: Prototyping requires user involvement and allows users to see and interact with a prototype, enabling them to provide better and more complete feedback and specifications. The presence of a prototype being examined by the user prevents many of the misunderstandings and miscommunications that occur when each side believes the other understands what they said.

Experience of developing the prototype helps in the main development.

Disadvantages of prototyping

Insufficient analysis: The focus on a limited prototype can distract developers from properly
analyzing the complete project. This can lead to overlooking better solutions, preparation of
incomplete specifications or the conversion of limited prototypes into poorly engineered final
projects that are hard to maintain.
User confusion of prototype and finished system: Users can begin to think that a prototype,
intended to be thrown away, is actually a final system that merely needs to be finished or
polished. This can lead them to expect the prototype to accurately model the performance
of the final system when this is not the intent of the developers.

Developer misunderstanding of user objectives: Developers may assume that users share
their objectives (e.g. to deliver core functionality on time and within budget), without
understanding wider commercial issues.

Developer attachment to prototype: Developers can also become attached to prototypes


they have spent a great deal of effort producing; this can lead to problems like attempting
to convert a limited prototype into a final system when it does not have an appropriate
underlying architecture.

Excessive development time of the prototype: A key property to prototyping is the fact that it
is supposed to be done quickly. If the developers lose sight of this fact, they very well may try
to develop a prototype that is too complex. When the prototype is thrown away the
precisely developed requirements that it provides may not yield a sufficient increase in
productivity to make up for the time spent developing the prototype.

Expense of implementing prototyping: The start-up costs for building a development team focused on prototyping may be high.

7. Explain different process models along with their relative merits and demerits. Explain
four significant attributes that every software product should possess.

In the software development process we focus on the activities directly related to the production of the software, for example, design, coding, and testing. As the development process specifies the major development and quality control activities that need to be performed in the project, it really forms the core of the software process. The management process is decided based on the development process. Due to the importance of the development process, various models have been proposed:

• Waterfall – the oldest and most widely used
• Prototyping
• Iterative – currently used widely
• Time-boxing

Waterfall Model
• Linear sequence of stages/phases
• Requirements – High-Level Design (HLD) – Detailed Design (DD) – Coding – Testing – Deployment
• A phase starts only when the previous has completed; no feedback
• The phases partition the project, each addressing a separate concern
• Linear ordering implies each phase should have some output
• The output must be validated/ certified
• Outputs of earlier phases: work products
• Common outputs of a waterfall: SRS, project plan, design docs, test plan and reports,
final code, supporting docs

Waterfall Advantages
• Conceptually simple, cleanly divides the problem into distinct phases that can be
performed independently
• Natural approach for problem solving
• Easy to administer in a contractual setup – each phase is a milestone

Waterfall disadvantages
• Assumes that requirements can be specified and frozen early
• May fix hardware and other technologies too early
• Follows the "big bang" approach – all-or-nothing delivery; too risky
• Very document oriented, requiring docs at the end of each phase

Waterfall usage
• Has been used widely
• Well suited for projects where requirements can be understood easily and technology decisions are easy
• i.e., for familiar types of projects it may still be the most optimal choice
Prototyping
• Prototyping addresses the requirement specification limitation of waterfall
• Instead of freezing requirements only by discussions, a prototype is built to
understand the requirements
• Helps alleviate the requirements risk
• A small waterfall model replaces the requirements stage
• Development of prototype
– Starts with initial requirements
– Only key features which need better understanding are included in prototype
– No point in including those features that are well understood
– Feedback from users taken to improve the understanding of the requirements
• Cost can be kept low
– Build only features needing clarification
– "Quick and dirty" – quality is not important; scripting etc. can be used
– Things like exception handling, recovery, and standards are omitted
– Cost can be a few percent of the total
– Experience gained in building the prototype helps the main development, besides improving the requirements

Advantages: requirements will be more stable; requirements are frozen later; experience helps in the main development

Disadvantages: potential hit on cost and schedule

Applicability: when requirements are hard to elicit and confidence in the requirements is low, i.e., where requirements are not well understood

Iterative Development
• Counters the "all or nothing" drawback of the waterfall model
• Combines the benefits of prototyping and waterfall
• Develop and deliver software in increments
• Each increment is complete in itself
• Can be viewed as a sequence of waterfalls
• Feedback from one iteration is used in future iterations
• Products almost always follow it
• Used commonly in customized development also
– Businesses want a quick response for software
– Cannot afford the risk of all-or-nothing
• Newer approaches like XP and Agile all rely on iterative development

Advantages: get-as-you-pay delivery; feedback for improvement

Disadvantages: architecture/design may not be optimal; rework may increase; total cost may be higher

Applicability: where response time is important, the risk of long projects cannot be taken, or all requirements are not known up front

Timeboxing:
• Iterative is linear sequence of iterations
• Each iteration is a mini waterfall – decide the specifications, then plan the iteration
• Time boxing – fix iteration duration, then determine the specifications
• Divide iteration in a few equal stages
• Use pipelining concepts to execute iterations in parallel
• General iterative development – fixes the functionality for each iteration, then plan
and executes it
• In time boxed iterations – fix the duration of iteration and adjust the functionality to fit
it
• Completion time is fixed; the functionality to be delivered is flexible
• This by itself is very useful in many situations
• Has predictable delivery times
• Overall product release and marketing can be better planned
• Makes time a non-negotiable parameter and helps focus attention on schedule
• Prevents requirements bloating
• Overall dev time is still unchanged
• What if we have multiple iterations executing in parallel
• Can reduce the average completion time by exploiting parallelism
• For parallel execution, can borrow pipelining concepts from hardware
• This leads to Time-boxing Process Model
• Development is done iteratively in fixed duration time boxes
• Each time box divided in fixed stages
• Each stage performs a clearly defined task that can be done independently
• Each stage approximately equal in duration
• There is a dedicated team for each stage
• When one stage team finishes, it hands over the project to the next team
• With this type of time boxes, can use pipelining to reduce cycle time
• Like hardware pipelining – view each iteration as an instruction
• As stages have dedicated teams, simultaneous execution of different iterations is
possible
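The cycle-time benefit of pipelining can be quantified with a small calculation. With n iterations of k equal stages, each of duration d, serial execution takes n·k·d, while pipelined execution with one dedicated team per stage takes (k + n − 1)·d, since a new iteration enters the pipeline every d time units after the first:

```python
def serial_time(n_iterations, n_stages, stage_duration):
    # Each iteration runs all its stages before the next iteration starts.
    return n_iterations * n_stages * stage_duration

def pipelined_time(n_iterations, n_stages, stage_duration):
    # With one dedicated team per stage, the first iteration takes
    # n_stages * stage_duration; each later one finishes stage_duration after.
    return (n_stages + n_iterations - 1) * stage_duration

# Example: 3 time boxes of 3 stages each, every stage lasting 3 weeks.
print(serial_time(3, 3, 3))     # 27 weeks, iterations one after another
print(pipelined_time(3, 3, 3))  # 15 weeks with pipelined, dedicated teams
```

The 3-iteration, 3-stage example is illustrative: total work is unchanged, but deliveries arrive every 3 weeks instead of every 9, which is exactly the "shortened delivery times" advantage claimed below.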

Advantages: shortened delivery times; the other advantages of iterative development; distributed execution

Disadvantages: larger teams; project management is harder; a high degree of synchronization is needed; configuration management is harder

Applicability: when short delivery times are very important, the architecture is stable, and there is flexibility in grouping features

Summary of waterfall
Strengths weakness Type of projects
Simple All or nothing – too Well understood
Easy to execute risky problems, short
Intuitive and Requirement frozen duration projects,
logical early automation of
Easy contractually May chose outdated existing manual
hardware/technology systems
Disallows changes
No feedback from
users
Encourages
requirement bloating

Summary of prototype
Strengths weakness Types of projects
Helps requirement Front heavy Systems with
elicitation Possibly higher cost novice users; or
Reduces risk and schedule areas with
Better and more Encourages requirement
stable final system requirement uncertainty.
bloating Heavy reporting
Disallows later based systems can
change benefit from UI
prototyping

Summary of iterative:

Strengths:
• Regular deliveries, leading to business benefit
• Can accommodate changes naturally
• Allows user feedback
• Avoids requirement bloating
• Naturally prioritizes requirements
• Allows reasonable exit points
• Reduces risks

Weaknesses:
• Overhead of planning each iteration
• Total cost may increase
• System architecture and design may suffer
• Rework may increase

Types of projects: For businesses where time is important; where the risk of long projects cannot be taken; where requirements are not known and evolve with time.

Summary of time-boxing:

Strengths:
• All benefits of iterative
• Planning for iterations is somewhat easier
• Very short delivery times

Weaknesses:
• Project management becomes more complex
• Team size is larger
• Complicated – lapses can lead to losses

Types of projects: Where very short delivery times are very important; where there is flexibility in grouping features; where the architecture is stable.

At the top level, for a software product, these attributes can be defined as follows:

• Functionality. The capability to provide functions which meet stated and implied needs
when the software is used

• Reliability. The capability to maintain a specified level of performance

• Usability. The capability to be understood, learned, and used

• Efficiency. The capability to provide appropriate performance relative to the amount of
resources used

• Maintainability. The capability to be modified for purposes of making corrections,
improvements, or adaptation

• Portability. The capability to be adapted for different specified environments without
applying actions or means other than those provided for this purpose in the product

8. What is the need for validating the requirements? Explain any requirement validation
techniques. Mention the six specific design process activities. Give explanation for two
of them.

The development of software starts with a requirements document, which is also used
to determine eventually whether or not the delivered software system is acceptable. It is
therefore important that the requirements specification contains no errors and specifies the
client's requirements correctly. Furthermore the longer an error remains undetected, the
greater the cost of correcting it. Hence, it is extremely desirable to detect errors in the
requirements before the design and development of the software begin. Due to the nature
of the requirement specification phase, there is a lot of room for misunderstanding and
committing errors, and it is quite possible that the requirements specification does not
accurately represent the client's needs. The basic objective of the requirements validation
activity is to ensure that the SRS reflects the actual requirements accurately and clearly. A
related objective is to check that the SRS document is itself of "good quality".

As requirements are generally textual documents that cannot be executed,
inspections are eminently suitable for requirements validation. Consequently, inspections of
the SRS, frequently called requirements review, are the most common method of validation.
Because requirements specification formally specifies something that originally existed
informally in people's minds, requirements validation must involve the clients and the users.
Due to this, the requirements review team generally consists of client as well as user
representatives. Although the primary goal of the review process is to reveal any errors in the
requirements, such as those discussed earlier, the review process is also used to consider
factors affecting quality, such as testability and readability. During the review, one of the
jobs of the reviewers is to uncover the requirements that are too subjective and too difficult
to define criteria for testing that requirement. Checklists are frequently used in reviews to
focus the review effort and ensure that no major source of errors is overlooked by the
reviewers.
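Parts of such a checklist can even be applied mechanically. The following is a hypothetical helper (the term list is illustrative, not from any standard checklist) that flags subjective, hard-to-test wording of the kind reviewers are asked to uncover:

```python
# Sketch of a checklist-driven scan that flags subjective wording in
# requirement statements. The term list is an illustrative assumption.

SUBJECTIVE_TERMS = {"fast", "user-friendly", "easy", "flexible", "robust",
                    "efficient", "as appropriate"}

def flag_subjective(requirements):
    """Return (requirement id, offending terms) pairs for review."""
    findings = []
    for req_id, text in requirements.items():
        lowered = text.lower()
        hits = sorted(t for t in SUBJECTIVE_TERMS if t in lowered)
        if hits:
            findings.append((req_id, hits))
    return findings

reqs = {
    "R1": "The system shall respond to a query within 2 seconds.",
    "R2": "The user interface shall be user-friendly and fast.",
}
print(flag_subjective(reqs))  # [('R2', ['fast', 'user-friendly'])]
```

R1 is verifiable as written; R2 would be returned to its author for testable acceptance criteria.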

Sorry! couldn’t get answers for the last two parts of this question.
9. Differentiate between the following terms: Milestone and deliverable. Requirements
Definition and Specification, a software product and a software process.
(a) Milestone and Deliverable:
A milestone is a point some way through a project plan that indicates how far
the project has progressed. It is an important point in time such as "contract signed" or
"project approved", and usually has zero duration. A deliverable refers to a
tangible product that is produced signifying the reaching of the milestone. It is
something that people work on to finish the project – such as a completed
requirements document, or a test plan. A milestone has a symbolic purpose and is not
a physical creation (and therefore can represent things that are not tangible, such as
hitting the 3 month mark of the project). A deliverable, on the other hand, defines the
class of tangible (i.e. physical) products that the project produces on its path towards
achieving its ultimate goal. As a result, a project will have significantly fewer milestones
than deliverables.

(b) Requirement Definition and Specification:


A Requirement is defined as "(1) A condition or capability needed by a user to
solve a problem or achieve an objective; (2) A condition or a capability that must be
met or possessed by a system ... to satisfy a contract, standard, specification, or other
formally imposed document." [91]. Note that in software requirements we are dealing
with the requirements of the proposed system, that is, the capabilities that the system,
which is yet to be developed, should have. The goal of the requirements activity is to
produce the Software Requirements Specification (SRS) that describes what the
proposed software should do without describing how the software will do it. Somehow
the requirements for the system that will satisfy the needs of the clients and the
concerns of the users have to be communicated to the developer. The problem is that
the client usually does not understand software or the software development process,
and the developer often does not understand the client's problem and application
area. This causes a communication gap between the parties involved in the
development project. A basic purpose of software requirements specification is to
bridge this communication gap. SRS is the medium through which the client and user
needs are accurately specified to the developer.
(c) A Software product and a Software process:

The product is something tangible that you get after going through a process. After doing
some systems analysis work, the analyst will write a report. Doing the analysis is a process and
the report is a product of that phase. Each product can be used as part of carrying out the
next process. The analyst's report could be used as part of the design process. The resulting
design, which is a product, is then used in writing the programs, which is another process.
Thus there is no product that is not formed through a process.

10. Explain the spiral model. Discuss the features of a software project for which the spiral
model could be a preferred model? Justify your answer?
11. Describe the role of management on software development. Describe the major phases
in software development. Discuss the error distribution and cost of correcting the errors
during development.

Effective software management focuses on the four P’s: people, product, process, and
project. The order is not arbitrary. The manager who forgets that software engineering work is
an intensely human endeavor will never have success in project management. A manager
who fails to encourage comprehensive customer communication early in the evolution of a
project risks building an elegant solution for the wrong problem. The manager who pays little
attention to the process runs the risk of inserting competent technical methods and tools into
a vacuum. The manager who embarks without a solid project plan jeopardizes the success
of the product. The cultivation of motivated, highly skilled software people has been
discussed since the 1960s. In fact, the "people factor" is so important that the Software
Engineering Institute has developed a People Management Capability Maturity Model (PM-
CMM), "to enhance the readiness of software organizations to undertake increasingly
complex applications by helping to attract, grow, motivate, deploy, and retain the talent
needed to improve their software development capability". Management of people
includes recruiting, selection, performance management, training, compensation, career
development, organization and work design, and team/culture development. Organizations
that achieve high levels of maturity in the people management area have a higher
likelihood of implementing effective software engineering practices.

Before a project can be planned, product objectives and scope should be established,
alternative solutions should be considered, and technical and management constraints
should be identified. Without this information, it is impossible to define reasonable (and
accurate) estimates of the cost, an effective assessment of risk, a realistic breakdown of
project tasks, or a manageable project schedule that provides a meaningful indication of
progress.

As the process of software engineering proceeds, there are many reasons that software projects get into trouble.
The scale of many development efforts is large, leading to complexity, confusion, and
significant difficulties in coordinating team members. Uncertainty is common, resulting in a
continuing stream of changes that ratchets the project team. Interoperability has become a
key characteristic of many systems. To deal with these issues an effective management is
necessary.

Phases of Development Process:-

It is a set of phases, each phase being a sequence of steps. The sequence of steps for a
phase defines the methodology for that phase. We divide the development process into
phases because this helps:

 to employ divide and conquer
 each phase to handle a different part of the problem
 in continuous validation
Commonly, development has these activities: Requirements analysis, architecture, design,
coding, testing, delivery. Different models perform them in different manner.

 Requirements analysis:
o Because software is always part of a larger system (or business), work begins by
establishing requirements for all system elements and then allocating some
subset of these requirements to software. This system view is essential when
software must interact with other elements such as hardware, people, and
databases. The requirements gathering process is intensified and focused
specifically on software. To understand the nature of the program(s) to be built,
the software engineer ("analyst") must understand the information domain for
the software, as well as required function, behavior, performance, and
interface. Requirements for both the system and the software are documented
and reviewed with the customer.
 Design:
o Software design is actually a multistep process that focuses on four distinct
attributes of a program: data structure, software architecture, interface
representations, and procedural (algorithmic) detail. The design process
translates requirements into a representation of the software that can be
assessed for quality before coding begins. Like requirements, the design is
documented and becomes part of the software configuration
 Coding:
o The design must be translated into a machine-readable form. The code
generation step performs this task. If design is performed in a detailed
manner, code generation can be accomplished mechanistically.
 Testing:
o Once code has been generated, program testing begins. The testing process
focuses on the logical internals of the software, ensuring that all statements have
been tested, and on the functional externals; that is, conducting tests to
uncover errors and ensure that defined input will produce actual results that
agree with required results.
 Support
o Software will undoubtedly undergo change after it is delivered to the customer
(a possible exception is embedded software). Change will occur because errors
have been encountered, because the software must be adapted to
accommodate changes in its external environment (e.g., a change required
because of a new operating system or peripheral device), or because the
customer requires functional or performance enhancements. Software
support/maintenance reapplies each of the preceding phases to an existing
program rather than a new one.

The notion that programming is the central activity during software development is largely
due to programming being considered a difficult task and sometimes an "art." Another
consequence of this kind of thinking is the belief that errors largely occur during
programming, as it is the hardest activity in software development and offers many
opportunities for committing errors. It is now clear that errors can occur at any stage during
development. An example distribution of error occurrences by phase is:

 Requirements 20%
 Design 30%
 Coding 50%

As we can see, errors occur throughout the development process. However, the cost of
correcting errors of different phases is not the same and depends on when the error is
detected and corrected. The relative cost of correcting requirement errors rises steeply as a
function of how late in development they are detected.
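This effect can be illustrated with a small calculation. The multipliers below are illustrative assumptions (published studies vary), but they convey the common finding that a requirement error found in operation costs orders of magnitude more than one found during requirements:

```python
# Sketch with illustrative cost multipliers (assumed values; actual
# figures vary by study): cost of fixing a requirement error, relative
# to fixing it during the requirements phase itself.

COST_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "acceptance testing": 20,
    "operation": 100,
}

def fix_cost(base_cost, detected_in):
    # Cost grows with how late in development the error is detected.
    return base_cost * COST_MULTIPLIER[detected_in]

print(fix_cost(1, "requirements"))  # 1
print(fix_cost(1, "operation"))     # 100
```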

12. What is software engineering? Describe the following process models with their relative
merits and demerits: Waterfall model, Evolutionary development model. What are the main
phases in software development following the waterfall model? Explain each of them
briefly.

Software Engineering is an engineering discipline that applies theories, methods and tools to
solve problems related to software production and maintenance.

Water Fall Model:

Description:
The simplest process model is the waterfall model, which states that the phases are
organized in a linear order. The model was originally proposed by Royce though variations of
the model have evolved depending on the nature of activities and the flow of control
between them. In this model, a project begins with feasibility analysis. Upon successfully
demonstrating the feasibility of a project, the requirements analysis and project planning
begins. The design starts after the requirements analysis is complete, and coding begins after
the design is complete. Once the programming is completed, the code is integrated and
testing is done. Upon successful completion of testing, the system is installed. After this, the
regular operation and maintenance of the system takes place.

Limitations:

1. It assumes that the requirements of a system can be frozen (i.e., baselined) before the
design begins. This is possible for systems designed to automate an existing manual
system. But for new systems, determining the requirements is difficult as the user does
not even know the requirements. Hence, having unchanging requirements is
unrealistic for such projects.

2. Freezing the requirements usually requires choosing the hardware (because it forms a
part of the requirements specification). A large project might take a few years to
complete. If the hardware is selected early, then due to the speed at which hardware
technology is changing, it is likely that the final software will use a hardware
technology on the verge of becoming obsolete. This is clearly not desirable for such
expensive software systems.

3. It follows the "big bang" approach—the entire software is delivered in one shot at the
end. This entails heavy risks, as the user does not know until the very end what they are
getting. Furthermore, if the project runs out of money in the middle, then there will be
no software. That is, it has the "all or nothing" value proposition.

4. It is a document-driven process that requires formal documents at the end of each
phase.

Advantages:

1. Conceptually it is very simple and it can easily divide the problem into distinct phases
that can be performed independently.

2. Natural approach for problem solving.

3. Easy to administer in a contractual setup – each phase acts as a milestone.
Evolutionary Development Model:

For software products that have their feature sets redefined during development because of
user feedback and other factors, the traditional waterfall model is no longer appropriate.
Evolutionary Development Model (EVO) uses small, incremental product releases, frequent
delivery to users, and dynamic plans and processes. Although EVO is relatively simple in
concept, its implementation at HP has included both significant challenges and notable
benefits.

The EVO development model divides the development cycle into smaller, incremental
waterfall models in which users are able to get access to the product at the end of each
cycle. The users provide feedback on the product for the planning stage of the next cycle
and the development team responds, often by changing the product, plans, or process.
These incremental cycles are typically two to four weeks in duration and continue until the
product is shipped.

Benefits:

Successful use of EVO can benefit not only business results but marketing and internal
operations as well. From a business perspective, the biggest benefit of EVO is a significant
reduction in risk for software projects. This risk might be associated with any of the many ways
a software project can go awry, including missing scheduled deadlines, unusable products,
wrong feature sets, or poor quality. By breaking the project into smaller, more manageable
pieces and by increasing the visibility of the management team in the project, these risks
can be addressed and managed. Because some design issues are cheaper to resolve
through experimentation than through analysis, EVO can reduce costs by providing a
structured, disciplined avenue for experimentation. Finally, the inevitable change in
expectations when users begin using the software system is addressed by EVO’s early and
ongoing involvement of the user in the development process. This can result in a product
that better fits user needs and market requirements.

EVO allows the marketing department access to early deliveries, facilitating development of
documentation and demonstrations. Although this access must be given judiciously, in some
markets it is absolutely necessary to start the sales cycle well before product release. The
ability of developers to respond to market changes is increased in EVO because the
software is continuously evolving and the development team is thus better positioned to
change a feature set or release it earlier.

Short, frequent EVO cycles have some distinct advantages for internal processes and people
considerations. First, continuous process improvement becomes a more realistic possibility
with one-to-four-week cycles. Second, the opportunity to show their work to customers and
hear customer responses tends to increase the motivation of software developers and
consequently encourages a more customer-focused orientation. In traditional software
projects, that customer-response payoff may only come every few years and may be so
filtered by marketing and management that it is meaningless.

Main Phases of Water Fall Model Method:

1. Requirement Specification: A Software Requirements Specification (SRS) is a complete
description of the behavior of the system to be developed. It includes a set of use
cases that describe all the interactions the users will have with the software. Use cases
are also known as functional requirements. In addition to use cases, the SRS also
contains non-functional (or supplementary) requirements. Non-functional requirements
are requirements which impose constraints on the design or implementation (such as
performance engineering requirements, quality standards, or design constraints).

2. Design: Design activity begins with a set of requirements. Design is done before the
system is implemented. It is the intermediate language between requirements and
coding. Goal of design phase is to create a plan to satisfy the requirements and
perhaps it is the most critical activity during system development. Design also
determines the major characteristics of a system.
3. Implementation or Coding: The goal of the coding or programming activity is to
implement the design in the best possible manner. The coding activity affects both
testing and maintenance profoundly. The time spent in coding is a small percentage
of the total software cost, while testing and maintenance consume the major
percentage. Thus, it should be clear that the goal during coding should not be to
reduce the implementation cost, but the goal should be to reduce the cost of later
phases, even if it means that the cost of this phase has to increase. In other words, the
goal during this phase is not to simplify the job of the programmer. Rather, the goal
should be to simplify the job of the tester and the maintainer.

4. Testing or Debugging: Software testing is an investigation conducted to provide
stakeholders with information about the quality of the product or service under test.
Software testing also provides an objective, independent view of the software to allow
the business to appreciate and understand the risks at implementation of the software.
Test techniques include, but are not limited to, the process of executing a program or
application with the intent of finding software bugs.

5. Maintenance: A common perception of maintenance is that it is merely fixing bugs.
However, studies and surveys over the years have indicated that the majority, over 80%, of
the maintenance effort is used for non-corrective actions (Pigosky 1997). This perception is
perpetuated by users submitting problem reports that in reality are functionality
enhancements to the system. Software maintenance and evolution of systems was first
addressed by Meir M. Lehman in 1969. Over a period of twenty years, his research led to the
formulation of eight Laws of Evolution (Lehman 1997). Key findings of his research include
that maintenance is really evolutionary developments and that maintenance decisions are
aided by understanding what happens to systems (and software) over time. Lehman
demonstrated that systems continue to evolve over time. As they evolve, they grow more
complex unless some action such as code refactoring is taken to reduce the complexity.

13. What are the objectives of software engineering? What is SRS? What are functional and
non-functional requirements in software engineering? The basic goal of the requirement
activity is to get a SRS that has some desirable properties, explain these desirable properties?

Objectives of Software Engineering

Develop methods and procedures for software development that can scale up for large
systems and that can be used to consistently produce high-quality software at low cost and
with a small cycle time.

Therefore the key objectives are:

1. Consistency
2. Low cost

3. High Quality

4. Small cycle time

5. Scalability

The basic approach that software engineering takes is to separate the development process
from the developed product (i.e., the software). Software engineering focuses on the process
with the belief that the quality of products developed using a process is influenced mainly
by the process.

Design of proper software process and their control is the primary goal of software
engineering.

It is the focus on process for producing the products that distinguishes it from most other
computing disciplines.

SRS – Software Requirements Specification

SRS is a document that completely describes what the proposed software should do without
describing how the software will do it.

Basic goal of the requirements phase is to produce the SRS, which describes the complete
behavior of the proposed software.

Need for SRS:

1. An SRS establishes the basis for agreement between the client and the supplier on
what the software product will do.

2. An SRS provides a reference for validation of the final product.

3. A high-quality SRS is a prerequisite to high-quality software.

4. A high-quality SRS reduces the development cost.

Functional and Non-Functional Requirements in software engineering


These are guidelines about different things an SRS should specify for completely specifying
the requirements.

Functional Requirements

Functional requirements specify which outputs should be produced from the given inputs.
They describe the relationship between the input and output of the system. For each
functional requirement, a detailed description of all the data inputs and their source, the
units of measure, and the range of valid inputs must be specified.

All the operations to be performed on the input data to obtain the output should be
specified. This includes specifying the validity checks on the input and output data,
parameters affected by the operation, and equations or other logical operations that must
be used to transform the inputs into corresponding outputs.

An important part of the specification is the system behavior in abnormal situations, like
invalid input or error during computation. The functional requirement must clearly state what
the system should do if such situations occur.

Behavior for situations where the input is valid but the normal operation cannot be
performed should also be specified. For example: an airline reservation system, where a
reservation cannot be made even for valid passengers if the airplane is fully booked.

Therefore, the system behavior for all foreseen inputs and all foreseen system states should
be specified.
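A functional requirement of this kind, including behavior for invalid input and for valid input where the normal operation cannot be performed, can be sketched as executable input-to-output behavior. The function and its messages below are hypothetical, built around the airline example above:

```python
# Sketch of a functional requirement as input -> output behavior,
# covering the normal case, invalid input, and the valid-input case
# where the normal operation cannot be performed (flight fully booked).
# Function name and messages are illustrative assumptions.

def reserve_seat(booked, capacity, passenger_id):
    if not isinstance(passenger_id, str) or not passenger_id:
        return "error: invalid passenger id"     # abnormal situation: invalid input
    if booked >= capacity:
        return "rejected: flight fully booked"   # valid input, operation not possible
    return "confirmed"                           # normal operation

print(reserve_seat(99, 100, "P-42"))   # confirmed
print(reserve_seat(100, 100, "P-42"))  # rejected: flight fully booked
print(reserve_seat(10, 100, ""))       # error: invalid passenger id
```

Writing the requirement this way forces the specifier to state the output for every foreseen input and system state, not just the happy path.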

Non-functional requirements

1. Performance Requirements

This part of an SRS specifies the performance constraints on the software system.

All the requirements relating to the performance characteristics of the system must be clearly
specified. There are two types of performance requirements: static and dynamic.
Static requirements are those that do not impose constraints on the execution characteristics
of the system. These include requirements like the number of simultaneous users to be
supported, and the number of files that the system has to process and their sizes. These are
also called capacity requirements of the system.

Dynamic requirements specify constraints on the execution behavior of the system. These
typically include response time and throughput constraints on the system. Response time is
the expected time for the completion of an operation under specified circumstances.
Throughput is the expected number of operations that can be performed in a unit time.
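A dynamic performance requirement only has value if it can be checked against measurements. The thresholds below (2 s response time, 50 operations/second throughput) are illustrative assumptions, not values from the notes:

```python
# Sketch: checking measurements against a dynamic performance
# requirement - worst-case response time and minimum throughput.
# Threshold values are illustrative assumptions.

def meets_dynamic_requirements(response_times_s, ops_done, elapsed_s,
                               max_response_s=2.0, min_throughput=50):
    worst_response = max(response_times_s)       # response-time constraint
    throughput = ops_done / elapsed_s            # operations per unit time
    return worst_response <= max_response_s and throughput >= min_throughput

# 500 operations in 8 s -> 62.5 ops/s; worst response 1.4 s: requirement met.
print(meets_dynamic_requirements([0.3, 1.4, 0.9], 500, 8))  # True
```

A static (capacity) requirement, by contrast, would be checked against configuration rather than execution, e.g. the configured maximum number of simultaneous users.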

2. Design Constraints

There are a number of factors in the client's environment that may restrict the choices of a
designer. Such factors include standards that must be followed, resource limits, operating
environment, reliability and security requirements, and policies that may have an impact on
the design of the system.

An SRS should identify and specify all such constraints.

Standards Compliance: This specifies the requirements for the standards the system must
follow.

Hardware Limitations: The software may have to operate on some existing or predetermined
hardware, thus imposing restrictions on the design.

Reliability and Fault Tolerance: Fault tolerance requirements can place a major constraint on
how the system is to be designed. Recovery requirements are often an integral part here,
detailing what the system should do if some failure occurs to ensure certain properties.
Reliability requirements are very important for critical applications.
Security: Security requirements are particularly significant in defense systems and many
database systems.

3. External Interface Requirements

All the interactions of the software with people, hardware, and other software should be
clearly specified. For the user interface, the characteristics of each user interface of the
software product should be specified. A preliminary user manual should be created with all
user commands, screen formats, an explanation of how the system will appear to the user,
and feedback and error messages.

For hardware interface requirements, the SRS should specify the logical characteristics of
each interface between the software product and the hardware components. If the
software is to execute on existing hardware or on predetermined hardware, all the
characteristics of the hardware, including memory restrictions, should be specified. In
addition, the current use and load characteristics of the hardware should be given.

Desirable properties of an SRS

A good SRS is

1. Correct

2. Complete

3. Unambiguous

4. Verifiable

5. Consistent

6. Ranked for importance and/or stability

7. Modifiable

8. Traceable
1. An SRS is correct if every requirement included in the SRS represents something required in
the final system. Correctness ensures that which is specified is done correctly.

2. An SRS is complete if everything the software is supposed to do and the responses of the
software to all classes of input data are specified in the SRS. Completeness ensures that
everything is indeed specified.

3. An SRS is unambiguous if and only if every requirement stated has one and only one
interpretation. Requirements are often written in natural language, which are inherently
ambiguous.

4. An SRS is verifiable if and only if every stated requirement is verifiable. A requirement is
verifiable if there exists some cost-effective process that can check whether the final
software meets that requirement.

5. An SRS is consistent if there is no requirement that conflicts with another. Terminology can
cause inconsistencies; for example, different requirements may use different terms to refer
to the same object.

6. An SRS is ranked for importance and/or stability if for each requirement the importance
and the stability of the requirement are indicated. Stability of a requirement reflects the
chances of it changing in future.

7. An SRS is modifiable if its structure and style are such that any necessary change can be
made easily while preserving completeness and consistency.

8. An SRS is traceable if the origin of each of its requirements is clear and if it facilitates the
referencing of each requirement in future development.
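Some of these properties admit simple mechanical checks alongside human review. The helpers below are hypothetical illustrations: unique requirement IDs support traceability, and a synonym table catches the terminology conflicts mentioned under consistency:

```python
# Sketch (hypothetical helpers): two mechanical checks inspired by the
# properties above - unique IDs (supports traceability) and consistent
# terminology (different terms for the same object cause inconsistency).

def check_unique_ids(requirements):
    # Every requirement must carry a distinct identifier.
    ids = [r["id"] for r in requirements]
    return len(ids) == len(set(ids))

def check_consistent_terms(requirements, synonyms):
    # synonyms maps an alias to the canonical term; flag requirements
    # that use a non-canonical alias for the same object.
    flagged = []
    for r in requirements:
        for alias, canonical in synonyms.items():
            if alias != canonical and alias in r["text"].lower():
                flagged.append((r["id"], alias, canonical))
    return flagged

reqs = [
    {"id": "R1", "text": "The operator shall approve each order."},
    {"id": "R2", "text": "The user shall be notified of approval."},
]
print(check_unique_ids(reqs))                            # True
print(check_consistent_terms(reqs, {"user": "operator"}))
# [('R2', 'user', 'operator')]
```

Such checks cannot establish correctness or completeness, which still require review against the client's actual needs.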

14. What is SRS? Explain the DFD? What is structured analysis? Write a SRS for the following: a)
Student registration system. b) Library automation system.

SRS

IEEE defines a requirement as "(1) A condition or capability needed by a user to solve a
problem or achieve an objective; (2) A condition or a capability that must be met or
possessed by a system ... to satisfy a contract, standard, specification, or other formally
imposed document." Note that in software requirements we are dealing with the
requirements of the proposed system, that is, the capabilities that the system, which is yet to
be developed, should have. It is because we are dealing with specifying a system that does
not exist that the problem of requirements becomes complicated. The goal of the
requirements activity is to produce the Software Requirements Specification (SRS), that
describes what the proposed software should do without describing how the software will do
it. Producing the SRS is easier said than done. A basic limitation for this is that the user needs
keep changing as the environment in which the system is to function changes with time.
Even while accepting that some requirement change requests are inevitable, there are still
pressing reasons why a thorough job should be done in the requirements phase to produce
a high-quality and relatively stable SRS. Let us first look at some of these reasons.

Need for SRS

SRS establishes basis of agreement between the user and the supplier.

 Users' needs have to be satisfied, but the user may not understand software

 Developers will develop the system, but may not know about problem domain

 SRS is the medium to bridge the communication gap and specify user needs in a
manner both can understand

Helps user understand his needs.

 users do not always know their needs

 must analyze and understand the potential

 the goal is not just to automate a manual system, but also to add value through IT

 The requirement process helps clarify needs

SRS provides a reference for validation of the final product

 Clear understanding about what is expected.

 Validation – "the SW satisfies the SRS"

High quality SRS essential for high Quality SW

 Requirement errors get manifested in final SW

 to satisfy the quality objective, must begin with high quality SRS

 Requirements defects are not few


For example, in the A-7 project, the following distribution was found for the requirement
errors that remained after the requirements phase was over:

 65% are detected during design

 2% are detected during coding

 30% are detected during testing, and

 3% are detected during operation and Maintenance

Good SRS reduces the development cost

 SRS errors are expensive to fix later

 Requirement changes can cost a lot (up to 40%)

 Good SRS can minimize changes and errors

 Substantial savings; extra effort spent during requirement saves multiple times that
effort

Characteristics of an SRS

To properly satisfy the basic goals, an SRS should have certain properties and should contain
different types of requirements. In this section, we discuss some of the desirable
characteristics of an SRS and components of an SRS. A good SRS is:

1. Correct

2. Complete

3. Unambiguous

4. Verifiable

5. Consistent

6. Ranked for importance and/or stability

7. Modifiable

8. Traceable

Components of an SRS
Completeness of specifications is difficult to achieve and even more difficult to verify. Having
guidelines about what different things an SRS should specify will help in completely
specifying the requirements. Here we describe some of the system properties that an SRS
should specify. The basic issues an SRS must address are:

• Functionality

• Performance

• Design constraints imposed on an implementation

• External interfaces

Conceptually, any SRS should have these components. If the traditional approach to
requirement analysis is being followed, then the SRS might even have portions corresponding
to these. However, functional requirements might be specified indirectly by specifying the
services on the objects or by

specifying the use cases.

DFD

Data-flow based modeling, often referred to as the structured analysis technique, uses
function-based decomposition while modeling the problem. It focuses on the functions
performed in the problem domain and the data consumed and produced by these
functions. It is a top-down refinement approach, which was originally called structured
analysis and specification, and was proposed for producing the specifications. However, we
will limit our attention to the analysis aspect of the approach. Before we describe the
approach, let us first describe the data flow diagram and data dictionary on which the
technique relies heavily.

Data Flow Diagrams and Data Dictionary

Data flow diagrams (also called data flow graphs) are commonly used during problem
analysis. Data flow diagrams (DFDs) are quite general and are not limited to problem
analysis for software requirements specification. They were in use long before the software
engineering discipline began. DFDs are very useful in understanding a system and can be
effectively used during analysis. A DFD shows the flow of data through a system. It views a
system as a function that transforms the inputs into desired outputs. Any complex system will
not perform this transformation in a "single step," and the data will typically undergo a series of
transformations before it becomes the output. The DFD aims to capture the transformations
that take place within a system to the input data so that eventually the output data is
produced. The agent that performs the transformation of data from one state to another is
called a process (or a bubble). So, a DFD shows the movement of data through the different
transformations or processes in the system. The processes are shown by named circles and
data flows are represented by named arrows entering or leaving the bubbles. A rectangle
represents a source or sink and is a net originator or consumer of data. A source or a sink is
typically outside the main system of study.

Data Flow Modelling

 Widely used; focuses on functions performed in the system

 Views a system as a network of data transforms through which the data flows

 Uses data flow diagrams (also called data flow graphs or DFDs) and functional
decomposition in modelling

 The structured system analysis and design (SSAD) methodology uses DFD to organize
information, and guide analysis

Data flow diagrams

 A DFD shows flow of data through the system

 Views system as transforming inputs to outputs

 Transformation done through transforms

 DFD captures how transformation occurs from input to output as data moves through
the transforms

 Not limited to software

 Transforms represented by named circles/bubbles (process)

 Bubbles connected by arrows on which named data travels

 A rectangle represents a source or sink and is an originator/consumer of data (often
outside the system)

 Focus on what transforms happen, how they are done is not important

 Usually major inputs/outputs shown, minor are ignored in this modelling

 No loops, conditional thinking, …


 DFD is NOT a control chart, no algorithmic design/thinking

 Sink/Source, external files

DFD Conventions

 External files shown as labelled straight lines

 Need for multiple data flows by a process represented by * (means and)

 OR relationship represented by +

 All processes and arrows should be named

 Processes should represent transforms, arrows should represent some data

Drawing a DFD for a system

 Identify inputs, outputs, sources, sinks for the system

 Work your way consistently from inputs to outputs, and identify a few high-level
transforms to capture full transformation

 If get stuck, reverse direction

 When high-level transforms defined, then refine each transform with more detailed
transformations

 Never show control logic; if thinking in terms of loops/decisions, stop & restart

 Label each arrow and bubble; carefully identify inputs and outputs of each
transform

 Make use of + & *

 Try drawing alternate DFDs

Leveled DFDs

 DFD of a system may be very large

 Can organize it hierarchically

 Start with a top level DFD with a few bubbles

 then draw DFD for each bubble


 Preserve I/O when "exploding" a bubble so consistency is preserved

 Makes drawing the leveled DFD a top-down refinement process, and allows modeling
of large and complex systems

Data Dictionary

 In a DFD arrows are labeled with data items

 Data dictionary defines data flows in a DFD

 Shows structure of data; structure becomes more visible when exploding

 Can use regular expressions to express the structure of data
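As a sketch of the "regular expressions" point above, a data dictionary can record each data flow as a composition of lower-level items. The entries below are hypothetical (a timesheet-style system), not taken from a specific DFD; "+" denotes composition and "{...}*" denotes repetition.

```python
# Hypothetical data dictionary for a timesheet-style DFD. "+" means composition,
# "{...}*" means repetition, mirroring the regular-expression-like notation above.
data_dictionary = {
    "weekly_timesheet": "employee_name + employee_id + {regular_hours + overtime_hours}*",
    "employee_name": "last_name + first_name + middle_initial",
    "employee_id": "digit + digit + digit + digit",
}

def defined_terms(entry):
    """Split a dictionary entry into the lower-level data items it is composed of."""
    return [token.strip(" {}*[]") for token in entry.split("+")]

print(defined_terms(data_dictionary["employee_name"]))
# → ['last_name', 'first_name', 'middle_initial']
```

A structure like this also makes it easy to check, while leveling a DFD, that every labeled arrow has a dictionary entry.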

DFD drawing – common errors

 Unlabeled data flows

 Missing data flows

 Extraneous data flows

 Consistency not maintained during refinement

 Missing processes

 Too detailed or too abstract

 Contains some control information

FOR THE FOLLOWING PLEASE REFER TO THE GIVEN WEBSITES

SRS for Student Registration System -

http://www.scribd.com/doc/9321885/Online-University-Admission-System
SRS for Library Automation System -

http://www.scribd.com/doc/17337071/Srs-Library-Management-System

It must be noted that the above two are detailed examples. In the exam you need not write this
much explanation; the structure (point-wise) should remain the same.

15. We strive for the lowest possible coupling and high cohesion, while designing the
software, why? Explain why maximizing cohesion and minimizing coupling leads to more
maintainable systems? Which are the design attributes influence system more maintainable
and why?

16. Briefly bring out the difference between verification and validation. What is COCOMO
model? Describe its approach to estimate person months. Explain in detail at least one
software cost estimation technique other than COCOMO.

Difference between Verification & Validation

Verification is the process of determining whether or not the products of a given phase
of software development fulfill the specifications established during the previous phase.

Validation is the process of evaluating software at the end of software development to
ensure compliance with the software requirements.

Software Verification:
 Confirm that you “built it the right way”.
 Provides objective evidence that the design outputs for a phase of the software
development lifecycle meet all of the specified requirements for that phase.
 Looks for consistency, completeness, and correctness of the software and supporting
documentation as it is being developed.

Software Validation:
 Confirm that you “built the right thing”.
 Provides objective evidence that the software is appropriate for its intended use and
will be reliable and safe.
 Ensures that all software requirements have been implemented correctly and
completely and are traceable to system requirements.

COCOMO: COCOMO or the Constructive Cost Model is an algorithmic software cost
estimation model developed by Barry Boehm. The model uses a basic regression formula,
with parameters that are derived from historical project data and current project
characteristics.

In COCOMO, projects are categorized into three types:


1. Organic projects - "small" teams with "good" experience working with "less than rigid"
requirements
2. Semi-detached projects - "medium" teams with mixed experience working with a mix
of rigid and less than rigid requirements
3. Embedded projects - developed within a set of "tight" constraints (hardware,
software, operational, ...)

Estimation of person-months:

This model estimates the total effort in terms of person-months. The basic steps in this
model are:

1. COCOMO computes software development effort as a function of program size and
a set of "cost drivers" that include subjective assessment of product, hardware, personnel
and project attributes.

There are 15 different attributes, called cost driver attributes that determine the
multiplying factors. These factors depend on product, computer, personnel, and technology
attributes (called project attributes). These factors are:

• Product attributes
– Required software reliability
– Size of application database
– Complexity of the product
• Hardware attributes
– Run-time performance constraints
– Memory constraints
– Volatility of the virtual machine environment
– Required turnabout time
• Personnel attributes
– Analyst capability
– Software engineering capability
– Applications experience
– Virtual machine experience
– Programming language experience
• Project attributes
– Use of software tools
– Application of software engineering methods
– Required development schedule

Each of the 15 attributes receives a rating on a six-point scale that ranges from "very
low" to "extra high" (in importance or value).

The multiplying factors for all 15 cost drivers are multiplied to get the effort adjustment
factor (EAF).

Equation for estimating person-months in COCOMO is given by:

E = ai × (KLOC)^bi × EAF

where E is the effort applied in person-months, KLOC is the estimated number of
thousands of delivered lines of code for the project, and EAF is the effort adjustment
factor calculated above.

The coefficients ai and exponent bi depend on the project type and are given
in the following table.

Software project     a      b
Organic              3.2    1.05
Semi-detached        3.0    1.12
Embedded             2.8    1.20
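The effort equation and the coefficients above can be combined into a small calculator. This is a sketch of the intermediate COCOMO formula; the 32 KLOC organic project is a hypothetical example, and EAF defaults to a neutral 1.0 rather than being derived from the 15 cost drivers.

```python
# Intermediate COCOMO effort sketch: E = a * (KLOC ** b) * EAF, with the (a, b)
# coefficients from the table above. A real estimate would multiply in the
# 15 cost-driver factors to obtain EAF.
COEFFICIENTS = {
    "organic":       (3.2, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (2.8, 1.20),
}

def cocomo_effort(kloc, project_type, eaf=1.0):
    """Return the estimated effort in person-months."""
    a, b = COEFFICIENTS[project_type]
    return a * (kloc ** b) * eaf

# A hypothetical 32 KLOC organic project with a neutral EAF:
print(round(cocomo_effort(32, "organic"), 1))  # → 121.8
```

Note how, for the same size, the embedded coefficients produce a larger estimate than the organic ones, reflecting the tighter constraints.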

Function Points Model

The FP metric was originally developed as an alternative to SLOC to measure
productivity in the later stages of software development. However, Albrecht argued that the
FP model could also be a powerful tool to estimate software cost in the early stages of the
software development lifecycle. A detailed description of the software requirements is all
that is needed to conduct a complete FP analysis. This enables almost any member of a
software project team to conduct the FP analysis and not necessarily a team member who is
familiar with the details of software development.
Another important advantage of not making use of SLOC is that the estimate is
independent of the language and other implementation variables that are often difficult to
take into consideration. To accurately estimate SLOC, the programming language must be
considered because some languages are more concise than others. For example, an
estimate of the SLOC for a software project written in Java would undoubtedly differ from an
estimate of the same software in Assembly Language.

To properly compare the FP model to SLOC, it is important to completely understand
how functions are counted, how the final FP count is calculated, and how to interpret the FP
count.

Counting Functions and Calculating the Unadjusted Function Points
Even with the software requirements formally specified, it can be a challenge to get
started counting the functions of a software system. To simplify this process, Albrecht provides
five categories of functions to count: external inputs, external outputs, external inquiries,
external interfaces and internal files.

External inputs consist of all the data entering the system from external sources and
triggering the processing of data. Fields of a form are not usually counted individually but a
data entry form would be counted as one external input.

External outputs consist of all the data processed by the system and sent outside the
system. Data that is printed on a screen or sent to a printer including a report, an error
message, and a data file is counted as an external output.

External inquiries are input and output requests that require an immediate response
and that do not change the internal data of the system. The process of looking up a
telephone number would be counted as one external inquiry.

External interfaces consist of all the data that is shared with other software systems
outside the system. Examples include shared files, shared databases, and software libraries.

Internal files include the logical data and control files internal to the system. An internal
file could be a data file containing addresses. A data file containing addresses and
accounting information could be counted as two internal files.

When a function is identified for a given category, the function’s complexity must also
be rated as low, average, or high as shown in Table.

                     Low   Average   High
External Input        3       4        6
External Output       4       5        7
Internal File         7      10       15
External Interface    5       7       10
External Inquiry      3       4        6

Table: Function Count Weighting Factors

Each function count is multiplied by the weight associated with its complexity and all
of the function counts are summed to obtain the count for the entire system, known as the
unadjusted function points (UFP). This calculation is summarized by the following equation:

UFP = Σi Σj (wij × xij)

where wij is the weight for row i, column j, and xij is the function count in cell i, j.

Calculating the Adjusted Function Points


Although UFP can give us a good idea of the number of functions in a system, it doesn't
take into account the environment variables for determining the effort required to program the
system. For example, a software system that requires very high performance would require
additional effort to ensure that the software is written as efficiently as possible. Albrecht
recognized this when developing the FP model and created a list of fourteen "general
system characteristics that are rated on a scale from 0 to 5 in terms of their likely effect for
the system being counted." These characteristics are as follows:

1. Data communications
2. Distributed functions
3. Performance
4. Heavily used configuration
5. Transaction rate
6. Online data entry
7. End user efficiency
8. Online update
9. Complex processing
10. Reusability
11. Installation ease
12. Operational ease
13. Multiple sites
14. Facilitates change

The ratings ci given to each of the characteristics above are then entered into the
following formula to get the Value Adjustment Factor (VAF):

VAF = 0.65 + 0.01 × (c1 + c2 + ... + c14)

where ci is the value of general system characteristic i, for 0 <= ci <= 5.

Finally, the UFP and VAF values are multiplied to produce the adjusted FP (AFP) count:

AFP = UFP × VAF
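The adjustment step (VAF = 0.65 + 0.01 times the sum of the fourteen ratings, then multiplying by the UFP) can be sketched as follows; the UFP value and ratings below are illustrative only.

```python
# Sketch of the FP adjustment: VAF = 0.65 + 0.01 * sum(ratings), AFP = UFP * VAF.
def value_adjustment_factor(ratings):
    """ratings: the 14 general system characteristic values, each 0..5."""
    return 0.65 + 0.01 * sum(ratings)

def adjusted_fp(ufp, ratings):
    """Adjusted function point count for a given unadjusted count and ratings."""
    return ufp * value_adjustment_factor(ratings)

# All 14 characteristics rated 3 gives VAF = 0.65 + 0.42 = 1.07:
print(round(adjusted_fp(100, [3] * 14), 2))  # → 107.0
```

Since each rating lies in 0..5, VAF ranges from 0.65 to 1.35, so the adjustment can move the count by at most ±35%.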

Interpreting Adjusted Function Points


In practice, the final AFP number of the proposed system is compared against the AFP
count and cost of systems that have been measured in the past. The more historical data
that can be compared the better the chances of accurately estimating the cost of the
proposed software system.
To continuously refine estimation accuracy, it is essential that the actual cost is
measured and recorded once a system has been completed. It is this actual cost that
enables the evaluation of the initial estimate.

17. Explain in details all the activities under risk management paradigm? Explain the
importance of project staffing and different staff structures along with their advantages.
Explain in detail the various management activities in a software engineering project.

i) Activities under risk management paradigm are:


a) Risk identification: Risk identification is a systematic attempt to specify threats to
the project plan (estimates, schedule, resource loading, etc.). By identifying
known and predictable risks, the project manager takes a first step toward
avoiding them when possible and controlling them when necessary.
One method for identifying risks is to create a risk item checklist. The
checklist can be used for risk identification and focuses on some subset of
known and predictable risks in the following generic subcategories:
• Product size—risks associated with the overall size of the software to be
built or modified.
• Business impact—risks associated with constraints imposed by
management or the marketplace.
• Customer characteristics—risks associated with the sophistication of the
customer and the developer's ability to communicate with the customer in a
timely manner.
• Process definition—risks associated with the degree to which the
software process has been defined and is followed by the development
organization.
• Development environment—risks associated with the availability and
quality of the tools to be used to build the product.
• Technology to be built—risks associated with the complexity of the
system to be built and the "newness" of the technology that is packaged by the
system.
• Staff size and experience—risks associated with the overall technical
and project experience of the software engineers who will do the work.
The risk item checklist can be organized in different ways. Questions
relevant to each of the topics can be answered for each software project. The
answers to these questions allow the planner to estimate the impact of risk. A
different risk item checklist format simply lists characteristics that are relevant to
each generic subcategory. Finally, a set of "risk components and drivers" are
listed along with their probability of occurrence.

b) Risk projection: Risk projection, also called risk estimation, attempts to rate each
risk in two ways—the likelihood or probability that the risk is real and the
consequences of the problems associated with the risk, should it occur. The
project planner, along with other managers and technical staff, performs four
risk projection activities: (1) establish a scale that
reflects the perceived likelihood of a risk, (2) delineate the consequences
of the risk, (3) estimate the impact of the risk on the project and the product,
and (4) note the overall accuracy of the risk projection so that there will be no
misunderstandings.
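The likelihood and consequence ratings from steps (1) and (2) are commonly combined into a single risk exposure figure, RE = probability × cost, which can then be used to rank risks. The sketch below does this for purely illustrative risks and numbers.

```python
# Risk projection rates each risk's likelihood and consequence; risk exposure
# RE = probability * cost combines them. The risks and figures are illustrative.
risks = [
    # (description, probability, cost if it occurs, in person-days)
    ("key staff leave mid-project",           0.30, 80),
    ("reusable components fail to integrate", 0.60, 45),
    ("customer changes core requirements",    0.40, 50),
]

def ranked_by_exposure(risk_table):
    """Return the risks sorted so the highest-exposure risk comes first."""
    return sorted(risk_table, key=lambda r: r[1] * r[2], reverse=True)

for description, p, cost in ranked_by_exposure(risks):
    print(f"RE = {p * cost:5.1f}  {description}")
```

Ranking by exposure helps the planner decide which risks deserve mitigation effort first.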

c) Risk refinement: During early stages of project planning, a risk may be stated
quite generally. As time passes and more is learned about the project and the
risk, it may be possible to refine the risk into a set of more detailed risks, each
somewhat easier to mitigate, monitor, and manage.
One way to do this is to represent the risk in condition-transition-
consequence (CTC) format. That is, the risk is stated in the following form:
Given that <condition> then there is concern that (possibly)
<consequence>.
This general condition can be refined in the following manner:
Subcondition 1. Certain reusable components were developed by a third
party with no knowledge of internal design standards.
Subcondition 2. The design standard for component interfaces has not
been solidified and may not conform to certain existing reusable components.
Subcondition 3. Certain reusable components have been implemented in
a language that is not supported on the target environment.
d) Risk mitigation, monitoring & management: All of the risk analysis activities
presented to this point have a single goal—to assist the project team in
developing a strategy for dealing with risk. An effective strategy must consider
three issues:
• Risk avoidance
• Risk monitoring
• Risk management and contingency planning

If a software team adopts a proactive approach to risk, avoidance is always the best
strategy. This is achieved by developing a plan for risk mitigation.
As the project proceeds, risk monitoring activities commence. The project
manager monitors factors that may provide an indication of whether the risk is
becoming more or less likely.
In addition to monitoring these factors, the project manager should
monitor the effectiveness of risk mitigation steps.
Risk management and contingency planning assumes that mitigation
efforts have failed and that the risk has become a reality.


Question ii) Explain the importance of project staffing and different staff structures
along with their advantages.

Answer: Once the effort is estimated, various schedules (or project duration) are
possible, depending on the number of resources (people) put on the project. For example,
for a project whose effort estimate is 56 person-months, a total schedule of 8 months is
possible with 7 people. A schedule of 7 months with 8 people is also possible, as is a schedule
of approximately 9 months with 6 people.
A schedule cannot be simply obtained from the overall effort estimate by deciding on
average staff size and then determining the total time requirement by dividing the total
effort by the average staff size. Brooks has pointed out that person and months (time) are
not interchangeable. According to Brooks, "... man and months are interchangeable only for
activities that require no communication among men, like sowing wheat or reaping cotton.
This is not even approximately true of software...."
Often, the staffing level is not changed continuously in a project and approximations
of the Rayleigh curve are used: assigning a few people at the start, having the peak team
during the coding phase, and then leaving a few people for integration and system testing.
For ease of scheduling, particularly for smaller projects, often the required people are
assigned together around the start of the project. This approach can lead to some people
being unoccupied at the start and toward the end. This slack time is often used for
supporting project activities like training and documentation.
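The trade-off in the 56 person-month example above can be tabulated: for a fixed effort estimate, each candidate team size implies a schedule of effort divided by team size. This is only arithmetic, and deliberately ignores Brooks's caveat that people and months are not freely interchangeable, so the results are starting points, not schedules.

```python
# For a fixed effort estimate, each candidate team size implies a schedule of
# effort / team_size months. Per Brooks, people and months are not freely
# interchangeable, so treat these as candidates only.
def candidate_schedules(effort_pm, team_sizes):
    """Map each team size to the implied schedule in months (rounded)."""
    return {n: round(effort_pm / n, 1) for n in team_sizes}

# The 56 person-month example from the text:
print(candidate_schedules(56, [6, 7, 8]))  # → {6: 9.3, 7: 8.0, 8: 7.0}
```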

Common staff (team) structures and their advantages are:

 Democratic (egoless) team – all members are at the same level and decisions are
taken by consensus. Advantages: high morale, suits difficult problems, and knowledge is
shared, so the team is resilient to attrition.

 Chief programmer team – a chief programmer takes all major technical decisions,
supported by a backup programmer, a librarian, and other specialists. Advantages:
centralized decision making, less communication overhead, and works well for
straightforward projects with tight schedules.

 Hierarchical (mixed) team – a project leader coordinates subteams, each led by a
senior engineer. Advantages: combines the strengths of the other two structures and
scales better to large projects.

Question 17.iii) Explain in detail the various management activities in a software
engineering project.

Answer 17 iii) Proper management is an integral part of software development. A
large software development project involves many people working for a long period of time.
We have seen that a development process typically partitions the problem of developing
software into a set of phases. To meet the cost, quality, and schedule objectives, resources
have to be properly allocated to each activity for the project, and progress of different
activities has to be monitored and corrective actions taken, if needed. All these activities are
part of the project management process.
The project management process specifies all activities that need to be done by the
project management to ensure that cost and quality objectives are met. Its basic task is to
ensure that, once a development process is chosen, it is implemented optimally. The focus is
on issues like planning a project, estimating resource and schedule, and monitoring and
controlling the project. In other words, the basic task is to plan the detailed implementation
of the process for the particular project and then ensure that the plan is followed. For a large
project, a proper management process is essential for success.
The activities in the management process for a project can be grouped broadly into
three phases: planning, monitoring and control, and termination analysis. Project
management begins with planning, which is perhaps the most critical project management
activity. The goal of this phase is to develop a plan for software development following
which the objectives of the project can be met successfully and efficiently. A software plan
is usually produced before the development activity begins and is updated as development
proceeds and data about progress of the project becomes available.
During planning, the major activities are cost estimation, schedule and milestone
determination, project staffing, quality control plans, and controlling and monitoring plans.
Project planning is undoubtedly the single most important management activity, and it forms
the basis for monitoring and control.
Project monitoring and control phase of the management process is the longest in
terms of duration; it encompasses most of the development process.
It includes all activities the project management has to perform while the
development is going on to ensure that project objectives are met and the development
proceeds according to the developed plan (and update the plan, if needed). As cost,
schedule, and quality are the major driving forces, most of the activity of this phase revolves
around monitoring factors that affect these. Monitoring potential risks for the project, which
might prevent the project from meeting its objectives, is another important activity during this
phase. And if the information obtained by monitoring suggests that objectives may not be
met, necessary actions are taken in this phase by exerting suitable control on the
development activities.

(Figure: A step in the development process)

Monitoring a development process requires proper information about the project.
Such information is typically obtained by the management process from the development
process. As shown earlier in Figure, the implementation of a development process model
should be such that each step in the development process produces information that the
management process needs for that step. That is, the development process provides the
information the management process needs. However, interpretation of the information is
part of monitoring and control.
Whereas monitoring and control last the entire duration of the project, the last phase
of the management process—termination analysis—is performed when the development
process is over. The basic reason for performing termination analysis is to provide information
about the development process and learn from the project in order to improve the process.
This phase is also often called postmortem analysis. In iterative development, this analysis
can be done after each iteration to provide feedback to improve the execution of further
iterations.

18. Differentiate between top-down approach and bottom-up approach. Who should be
involved in a requirements review? Draw a process model showing how a requirements
review might be organized.

The top-down approach starts from the highest-level component of the hierarchy and
proceeds through to lower levels. By contrast, a bottom-up approach starts with the lowest-
level component of the hierarchy and proceeds through progressively higher levels to the
top-level component. A top-down design approach starts by identifying the major
components of the system, decomposing them into their lower-level components and
iterating until the desired level of detail is achieved. A bottom-up design approach starts
with designing the most basic or primitive components and proceeds to higher-level
components that use these lower-level components. Bottom-up methods work with layers of
abstraction. A top-down approach is suitable only if the specifications of the system are
clearly known and the system development is from scratch. However, if a system is to be built
from an existing system, a bottom-up approach is more suitable, as it starts from some
existing components. So, for example, if an iterative enhancement type of process is being
followed, in later iterations, the bottom-up approach could be more suitable.

The requirements review group should include the author of the requirements document,
someone who understands the needs of the client, a member of the design team, and the
person(s) responsible for maintaining the requirements document. It is also good practice to
include some people not directly involved with product development, like a software quality
engineer.

The following waterfall model shows how a requirements review might be organized.
19. Explain the use of design reviews in verifying a design. What is structure chart and how
are different types of modules represented in a structure chart? Illustrate with suitable
example. Which is the single attribute of software that allows a program to be intellectually
manageable and why, explain?

If the design is not specified in a formal, executable language, it cannot be processed through tools, and
other means for verification have to be used. The most common approach for verification is design review or
inspections. We discuss this approach here. The purpose of design reviews is to ensure that the design satisfies
the requirements and is of "good quality." If errors are made during the design process, they will ultimately
reflect themselves in the code and the final system. As the cost of removing faults caused by errors that occur
during design increases with the delay in detecting the errors, it is best if design errors are detected early, before
they manifest themselves in the system. Detecting errors in design is the purpose of design reviews.

For a function-oriented design, the design can be represented graphically by structure charts. The
structure of a program is made up of the modules of that program together with the interconnections between
modules. Every computer program has a structure, and given a program its structure can be determined. The
structure chart of a program is a graphic representation of its structure. In a structure chart a module is
represented by a box with the module name written in the box. An arrow from module A to module B
represents that module A invokes module B. B is called the subordinate of A, and A is called the superordinate
of B. The arrow is labeled by the parameters received by B as input and the parameters returned by B as output,
with the direction of flow of the input and output parameters represented by small arrows. The parameters can
be shown to be data (unfilled circle at the tail of the label) or control (filled circle at the tail).

The structure chart representation of the different types of modules is shown below:
As an example consider the structure of the following program:
Modularity is the single attribute of software that allows a program to be intellectually manageable.
Monolithic software (i.e., a large program composed of a single module) cannot be easily grasped by a reader.
The number of control paths, span of reference, number of variables, and overall complexity would make
understanding close to impossible. It enhances design clarity, which in turn eases implementation, debugging,
testing, documenting and maintenance of the software product. Modularity is where abstraction and
partitioning come together. For easily understandable and maintainable systems, modularity is clearly the basic
objective.
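Since the structure chart figures are not reproduced in these notes, a chart can also be recorded textually as a mapping from each module to the subordinates it invokes, together with the parameters passed down and returned. The module and parameter names below are hypothetical.

```python
# A structure chart as data: each superordinate module maps to the subordinates
# it invokes, with the input/output parameters on each arrow. Names are illustrative.
structure_chart = {
    "main": [
        ("read_input",   {"in": [],                 "out": ["raw_records"]}),
        ("sort_records", {"in": ["raw_records"],    "out": ["sorted_records"]}),
        ("print_report", {"in": ["sorted_records"], "out": []}),
    ],
    "sort_records": [
        ("compare", {"in": ["a", "b"], "out": ["ordering"]}),
    ],
}

def subordinates(module):
    """Modules invoked by `module` (leaf modules invoke nothing)."""
    return [name for name, _ in structure_chart.get(module, [])]

print(subordinates("main"))  # → ['read_input', 'sort_records', 'print_report']
```

Walking such a mapping top-down mirrors how a reader traverses the drawn chart from the top-level module to its subordinates.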

20. What is the importance of design in software engineering? If some existing modules
are to be re-used in building a new system, will you use a top-down or bottom-up approach,
and why?
