
Software Engineering

Define software engineering.

Ans: Definitions:
IEEE defines software engineering as:
 The application of a systematic, disciplined, quantifiable approach to the development,
operation and maintenance of software; that is, the application of engineering to software.
 The study of approaches as in the above statement.

Fritz Bauer, a German computer scientist, defines software engineering as:


 Software engineering is the establishment and use of sound engineering principles in order to
obtain, economically, software that is reliable and works efficiently on real machines.

Explain the changing nature of software.


The nature of software has changed a lot over the years and continues to evolve. The latest move,
of course, is towards software
being delivered as a service, across the Internet, and ultimately, this will become the delivery
mechanism of choice - and the vast majority of software will no longer run locally. A bold statement,
but one many industry luminaries, including Sun's Scott "The network is the computer" McNealy, have
been predicting for years. And for years, the standard response has been, "Yeah, right." But it seems
the time has come. The Internet infrastructure has been developed to the point where it is fast and
secure, web browsers have become pervasive and useful as a standard client, and in the current tight
economy, companies (including software developers) are looking for cheaper ways to create and
deliver their goods. Already, much of today's e-business is conducted over applications that are
delivered to web clients; it's only a short leap to run standard productivity applications on the same
mechanism.
From a macro perspective, it makes sense. It costs an enormous amount of money for a
software vendor to create, maintain, and upgrade software for the dozens of different platforms they
must support. Software, when delivered as a service, bypasses this bottleneck, since the only thing
they have to support is a web browser. Another advantage - traditional software has an upgrade cycle,
whereby versions are released periodically, and customers must wait until the next release to see a
new feature, a fix, or a patch incorporated. In the newer "software as services" model, upgrades are
ongoing.
We're differentiating here between what's called "software as services" and standard ASPs. An
ASP delivers software to clients that is usually created by a third party and developed with the usual
traditional cycle of periodic upgrades. The software delivered by the ASP is often the same software the
client could buy, install, and maintain in-house; the advantage the ASP offers is the convenience of
outsourced management. Software-as-services, on the other hand, adds the advantage of continuous
upgrades. The ASP hosts traditional software that was meant to be used by a single client, and a
separate instance of the application is required for each client. Software-as-services, in contrast, is
multi-tenant capable; that is, a single instance can serve multiple clients. Either way, it's an inevitable
change from the traditional model of discrete software packages, installed and maintained in-house.
Ultimately, it's like electric utilities. It's of course impractical for everyone to try to generate their own
power, so instead, a single utility company creates a power plant and an infrastructure to deliver it,
taking the capital costs on themselves, and charging each customer a monthly fee, thereby dividing the
capital costs among millions of people. Ultimately, software will be delivered in the same manner, with
the capital costs of upgrading, maintaining, developing, and delivering software divided among all
users. It's the ultimate efficiency and a natural extension of e-business.

Explain software engineering processes

A process is a collection of activities, actions, and tasks that are performed when some work
product is to be created. A task focuses on a small, but well-defined objective. A process framework
establishes the foundation for a complete software engineering process. A generic process framework
for software engineering encompasses five activities: Communication, Planning, Modelling,
Construction, and Deployment.

1) Communication
 Before any technical work can commence, it is critically important to communicate and
collaborate with the customer and other stakeholders.
2) Planning
 Any complicated journey can be simplified if a map exists. A software project is a complicated
journey, and the planning activity creates a “map” that helps guide the team as it makes the
journey. The map—called a software project plan—defines the software engineering work by
describing the technical tasks to be conducted, the risks that are likely, the resources that will be
required.
3) Modelling
 Whether you’re a landscaper, a bridge builder, an aeronautical engineer, a carpenter, or an
architect, you work with models every day.

4) Construction
 This activity combines code generation (either manual or automated) and the testing that is
required to uncover errors in the code.

5) Deployment

 The software (as a complete entity or as a partially completed increment) is delivered to the
customer who evaluates the delivered product and provides feedback based on the evaluation.
Software engineering process framework activities are complemented by a number of umbrella
activities. In general, umbrella activities are applied throughout a software project and help a software
team manage and control progress, quality, change, and risk. Typical umbrella activities include:
a) Software project tracking and control—
allows the software team to assess progress against the project plan and take any necessary action
to maintain the schedule.
b) Risk management—assesses risks that may affect the outcome of the project or the quality of the
product.
c) Software quality assurance—defines and conducts the activities required to ensure software
quality.
d) Technical reviews—assesses software engineering work products in an effort to uncover and
remove errors before they are propagated to the next activity.

e) Measurement—defines and collects process, project, and product measures that assist the team
in delivering software that meets stakeholders’ needs; can be used in conjunction with all other
framework and umbrella activities.
f) Software configuration management—manages the effects of change throughout the software
process.
g) Reusability management—defines criteria for work product reuse (including software
components) and establishes mechanisms to achieve reusable components.
h) Work product preparation and production—encompasses the activities required to create work
products such as models, documents, logs, forms, and lists.

Explain software myths


Software myths
Software myths—erroneous beliefs about software and the process that is used to build it—can
be traced to the earliest days of computing. Myths have a number of attributes that make them
insidious. For instance, they appear to be reasonable statements of fact (sometimes containing
elements of truth), they have an intuitive feel, and they are often promulgated by experienced
practitioners who “know the score.”

Management myths.
Managers with software responsibility, like managers in most disciplines, are often under
pressure to maintain budgets, keep schedules from slipping, and improve quality. Like a drowning
person who grasps at a straw, a software manager often grasps at belief in a software myth.

 Myth: We already have a book that’s full of standards and procedures for building
software. Won’t that provide my people with everything they need to know?
 Reality: The book of standards may very well exist, but is it used? Are software practitioners
aware of its existence? Does it reflect modern software engineering practice? Is it complete?
Is it adaptable? Is it streamlined to improve time-to-delivery while still maintaining a focus on
quality? In many cases, the answer to all of these questions is “no.”

Customer myths
A customer who requests computer software may be a person at the next desk, a technical
group down the hall, the marketing/sales department, or an outside company that has requested
software under contract. In many cases, the customer believes myths about software because software
managers and practitioners do little to correct misinformation. Myths lead to false expectations (by the
customer) and, ultimately, dissatisfaction with the developer.

 Myth: A general statement of objectives is sufficient to begin writing programs— we can


fill in the details later.
 Reality: Although a comprehensive and stable statement of requirements is not always
possible, an ambiguous “statement of objectives” is a recipe for disaster. Unambiguous
requirements (usually derived iteratively) are developed only through effective and
continuous communication between customer and developer.

Practitioners' myths
Myths that are still believed by software practitioners have been fostered by over 50 years of
programming culture. During the early days, programming was viewed as an art form. Old ways
and attitudes die hard.

 Myth: Once we write the program and get it to work, our job is done.
 Reality: Someone once said that “the sooner you begin ‘writing code,’ the
longer it’ll take you to get done.” Industry data indicate that between 60 and 80 percent of
all effort expended on software will be expended after it is delivered to the customer for the first
time.

Explain Generic Process Models in Software Engineering


A software process is a collection of various activities.

There are five generic process framework activities:

1. Communication:
The software development starts with the communication between customer and
developer.

2. Planning:
It consists of complete estimation, scheduling for project development and
tracking.

3. Modelling:
a) Modelling consists of complete requirement analysis and the design of the project like
algorithm, flowchart etc.
b) The algorithm is the step-by-step solution of the problem and the flow chart shows a
complete flow diagram of a program.

4. Construction:
a) Construction consists of code generation and the testing part.
b) Coding part implements the design details using an appropriate programming
language.
c) Testing checks whether the flow of the code is correct.
d) Testing also checks that the program provides the desired output.

5. Deployment:
a) The deployment step consists of delivering the product to the customer and taking
feedback from them.
b) If the customer wants corrections or demands additional capabilities, then changes
are required to improve the quality of the software.

Explain process assessment and improvement


The existence of a software process is no guarantee that software will be delivered on
time, that it will meet the customer’s needs, or that it will exhibit the technical
characteristics that will lead to long-term quality characteristics.

Process patterns must be coupled with solid software engineering practice. In
addition, the process itself can be assessed to ensure that it meets a set of basic process
criteria that have been shown to be essential for successful software engineering.

A number of different approaches to software process assessment and
improvement have been proposed over the past few decades:

Standard CMMI Assessment Method for Process Improvement (SCAMPI)—


provides a five-step process assessment model that incorporates five phases: initiating,
diagnosing, establishing, acting, and learning. The SCAMPI method uses the SEI CMMI as
the basis for assessment [SEI00].

CMM-Based Appraisal for Internal Process Improvement (CBA IPI)— provides a


diagnostic technique for assessing the relative maturity of a software organization; uses
the SEI CMM as the basis for the assessment [Dun01].

SPICE (ISO/IEC15504)—a standard that defines a set of requirements for software


process assessment. The intent of the standard is to assist organizations in developing an
objective evaluation of the efficacy of any defined software process [ISO08].

ISO 9001:2000 for Software—a generic standard that applies to any organization that
wants to improve the overall quality of the products, systems, or services that it provides.
Therefore, the standard is directly applicable to software organizations and companies
[Ant06].

Defining a framework activity


The process framework defines a small set of activities that are applicable to all
types of projects.
 The software process framework is a collection of task sets.
 Task sets consist of a collection of small work tasks, project milestones, work
products, and software quality assurance points.

Typical umbrella activities are:
1. Software project tracking and control
In this activity, the development team assesses the project plan and compares progress with
the predefined schedule.
If progress does not match the predefined schedule, then the required actions are taken
to maintain the schedule.

2. Risk management
 Risk is an event that may or may not occur.
 If the event occurs, then it causes some unwanted outcome. Hence, proper risk
management is required.

3. Software Quality Assurance (SQA)


 SQA is the planned and systematic pattern of activities required to guarantee
software quality.
For example, during software development, meetings are conducted at every
stage of development to find defects and suggest improvements, so as to produce
good-quality software.

4. Formal Technical Reviews (FTR)

 FTR is a meeting conducted by the technical staff.
 The motive of the meeting is to detect quality problems and suggest improvements.
 The technical person focuses on the quality of the software from the customer's point
of view.

5. Measurement
 Measurement consists of the effort required to measure the software.
 The software cannot always be measured directly; it is measured by direct and indirect
measures.
 Direct measures include cost, lines of code, size of the software, etc.
 Indirect measures include attributes such as software quality, which is measured by means
of other factors; hence it is an indirect measure of the software.

6. Software Configuration Management (SCM)


 It manages the effect of change throughout the software process.

7. Reusability management
 It defines the criteria for reusing work products.
 Software quality is good when components developed for one application are also
useful for developing other applications.

8. Work product preparation and production


 It consists of the activities needed to create work products such as documents, forms,
lists, logs, and user manuals during software development.

1.4 Process pattern

Process patterns can be defined as the set of activities, actions, work tasks or
work products and similar related behaviour followed in a software development life cycle.

Process patterns can be more easily understood by splitting the term into its parts: a process
is the set of steps followed to achieve a task, and a pattern is the recurrence of the same basic
features during the life cycle of a process. Thus, in more general terms, a process pattern is a
common or general solution to a recurring problem.

Typical Examples are:

1. Customer communication (a process activity).

2. Analysis (an action).
3. Requirements gathering (a process task).
4. Reviewing a work product (a process task).
5. Design model (a work product).

Process patterns are best seen in the software development cycle, which involves the common
stages of development. For example, a generic software development life cycle has the following
steps:

1. Communication.
2. Planning.
3. Modelling, which involves requirements gathering, analysis, and design from a business
perspective.
4. Construction, which involves code generation and testing.
5. Deployment, which includes deploying and testing the code in the production
environment.

Computer System Security
Define antivirus and write functions of antivirus
Antivirus
Antivirus software is a type of program designed and developed to protect computers from malware
like viruses, computer worms, spyware, botnets, rootkits, keyloggers and such. Antivirus programs
function to scan, detect and remove viruses from your computer.

Basic Functions of Antivirus Engines

All antivirus engines have three basic functions. It is worth looking at these functions because
they also help with better manual cleaning of viruses in case we need it.
 Scanning − When a new virus is detected in cyberspace, antivirus producers start writing
programs (updates) that scan for similar signature strings.
 Integrity Checking − This method checks whether operating system files have been manipulated
by viruses.
 Interception − This method is used mainly to detect Trojans; it checks the requests made to
the operating system for network access.
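
To make the scanning idea concrete, here is a minimal sketch of signature-based scanning in Python; the signature names and byte patterns are invented for the example and bear no relation to any real virus database.

# Minimal sketch of signature-based scanning: look for known byte patterns
# in files. Real engines use large, frequently updated signature databases
# and far more sophisticated matching.
from pathlib import Path

# Hypothetical signature database: name -> byte pattern
SIGNATURES = {
    "Example.Test.Virus": bytes.fromhex("deadbeef41414141"),
    "Example.Dropper": bytes.fromhex("4d5a9000cafebabe"),
}

def scan_file(path: Path) -> list[str]:
    """Return the names of all signatures found in the file."""
    data = path.read_bytes()
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

def scan_directory(root: str) -> None:
    for path in Path(root).rglob("*"):
        if path.is_file():
            hits = scan_file(path)
            if hits:
                print(f"INFECTED {path}: {', '.join(hits)}")

if __name__ == "__main__":
    scan_directory(".")  # scan the current directory as a demo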

Firewall
Firewall defined
A firewall is a network security device that monitors incoming and outgoing network
traffic and permits or blocks data packets based on a set of security rules. Its purpose is to
establish a barrier between your internal network and incoming traffic from external sources
(such as the internet) in order to block malicious traffic like viruses and hackers.

How does a firewall work?


Firewalls carefully analyze incoming traffic based on pre-established rules and filter
traffic coming from unsecured or suspicious sources to prevent attacks. Firewalls guard traffic
at a computer's entry points, called ports, which are where information is exchanged with external
devices. For example, “Source address 172.18.1.1 is allowed to reach destination 172.18.2.1
over port 22."

Think of IP addresses as houses, and port numbers as rooms within the house. Only
trusted people (source addresses) are allowed to enter the house (destination address) at all—
then it’s further filtered so that people within the house are only allowed to access certain
rooms (destination ports), depending on whether they're the owner, a child, or a guest. The owner is
allowed into any room (any port), while children and guests are allowed into a certain set of
rooms (specific ports).
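
As a rough illustration of the rule matching described above (the rule set and addresses below are invented for the example, not a real firewall configuration), a short Python sketch:

# Minimal sketch of packet-filter rule matching against a tiny, hypothetical
# rule set. Real firewalls evaluate many more fields (protocol, interface,
# direction) and support address ranges and wildcards.
from dataclasses import dataclass

@dataclass
class Rule:
    source: str       # source IP address ("0.0.0.0" meaning "any" here)
    destination: str  # destination IP address
    port: int         # destination port
    allow: bool       # True = permit, False = block

RULES = [
    Rule("172.18.1.1", "172.18.2.1", 22, allow=True),  # the example rule from the text
    Rule("0.0.0.0", "172.18.2.1", 23, allow=False),    # block telnet from anywhere
]

def check_packet(source: str, destination: str, port: int) -> bool:
    """Apply the first matching rule; packets matching no rule are dropped (default deny)."""
    for rule in RULES:
        if (rule.source in (source, "0.0.0.0")
                and rule.destination == destination
                and rule.port == port):
            return rule.allow
    return False  # default deny

print(check_packet("172.18.1.1", "172.18.2.1", 22))  # True: explicitly allowed
print(check_packet("10.0.0.5", "172.18.2.1", 22))    # False: no matching rule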

Types of firewalls
Firewalls can either be software or hardware, though it’s best to have both. A software
firewall is a program installed on each computer and regulates traffic through port numbers
and applications, while a physical firewall is a piece of equipment installed between your
network and gateway.

Packet-filtering firewalls, the most common type of firewall, examine packets and
prohibit them from passing through if they don’t match an established security rule set. This
type of firewall checks the packet's source and destination IP addresses. If packets match
an “allowed” rule on the firewall, they are trusted to enter the network.

Packet-filtering firewalls are divided into two categories: stateful and stateless. Stateless
firewalls examine packets independently of one another and lack context, making them easy
targets for hackers. In contrast, stateful firewalls remember information about previously
passed packets and are considered much more secure.
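
One way to picture the difference (again a simplified, invented sketch rather than a real firewall implementation): a stateful filter remembers connections opened from inside and only admits inbound packets that belong to one of them.

# Simplified sketch of stateful filtering: inbound traffic is admitted only
# if it is a reply to a connection that was first initiated from inside.
established: set[tuple[str, str, int]] = set()

def outbound(src: str, dst: str, port: int) -> None:
    """Record a connection initiated from inside the network."""
    established.add((src, dst, port))

def inbound_allowed(src: str, dst: str, port: int) -> bool:
    """Allow an inbound packet only as a reply to an established connection."""
    return (dst, src, port) in established  # replies flow in the reverse direction

outbound("172.18.1.1", "93.184.216.34", 443)                # inside host opens HTTPS
print(inbound_allowed("93.184.216.34", "172.18.1.1", 443))  # True: reply traffic
print(inbound_allowed("203.0.113.9", "172.18.1.1", 443))    # False: unsolicited, dropped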

While packet-filtering firewalls can be effective, they ultimately provide very basic
protection and can be very limited—for example, they can't determine if the contents of the
request that's being sent will adversely affect the application it's reaching. If a malicious
request that was allowed from a trusted source address would result in, say, the deletion of a
database, the firewall would have no way of knowing that. Next-generation firewalls and proxy
firewalls are more equipped to detect such threats.

Next-generation firewalls (NGFW) combine traditional firewall technology with


additional functionality, such as encrypted traffic inspection, intrusion prevention systems, anti-
virus, and more. Most notably, they include deep packet inspection (DPI). While basic firewalls
only look at packet headers, deep packet inspection examines the data within the packet itself,
enabling users to more effectively identify, categorize, or stop packets with malicious
data.

Proxy firewalls filter network traffic at the application level. Unlike basic firewalls, the
proxy acts as an intermediary between two end systems. The client must send a request to the
firewall, where it is then evaluated against a set of security rules and then permitted or
blocked. Most notably, proxy firewalls monitor traffic for layer 7 protocols such as HTTP and
FTP, and use both stateful and deep packet inspection to detect malicious traffic.

Network address translation (NAT) firewalls allow multiple devices with independent


network addresses to connect to the internet using a single IP address, keeping individual IP
addresses hidden. As a result, attackers scanning a network for IP addresses can't capture
specific details, providing greater security against attacks. NAT firewalls are similar to proxy
firewalls in that they act as an intermediary between a group of computers and outside traffic.
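
As a simplified picture of how this sharing works (the addresses and port numbers are invented for the example), a NAT device keeps a translation table mapping each internal (address, port) pair to a port on its single public address:

# Simplified sketch of a NAT translation table: many internal hosts share one
# public IP; the table maps each internal (ip, port) to a public-side port.
PUBLIC_IP = "203.0.113.10"  # the single address seen by the outside world
nat_table: dict[tuple[str, int], int] = {}
next_public_port = 40000

def translate_outbound(internal_ip: str, internal_port: int) -> tuple[str, int]:
    """Rewrite an outgoing packet's source to the public IP and a fresh port."""
    global next_public_port
    key = (internal_ip, internal_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port: int):
    """Map a reply arriving on a public port back to the internal host, if known."""
    for (ip, port), mapped in nat_table.items():
        if mapped == public_port:
            return ip, port
    return None  # unsolicited traffic: no mapping, so it is dropped

print(translate_outbound("192.168.1.20", 51000))  # ('203.0.113.10', 40000)
print(translate_outbound("192.168.1.21", 51000))  # ('203.0.113.10', 40001)
print(translate_inbound(40001))                   # ('192.168.1.21', 51000)
print(translate_inbound(49999))                   # None: dropped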

Stateful multilayer inspection (SMLI) firewalls filter packets at the network, transport, and application
layers, comparing them against known trusted packets. Like NGFWs, SMLI firewalls also examine the
entire packet and only allow it to pass if it passes each layer individually. These firewalls examine
packets to determine the state of the communication (thus the name) to ensure all initiated
communication is only taking place with trusted sources.

Network security
Network security defined

Network security is a broad term that covers a multitude of technologies, devices and
processes. In its simplest terms, it is a set of rules and configurations designed to protect the
integrity, confidentiality and accessibility of computer networks and data using both software
and hardware technologies. Every organization, regardless of size, industry or infrastructure,
requires a degree of network security solutions in place to protect it from the ever-growing
landscape of cyber threats in the wild today.

How does network security work?


There are many layers to consider when addressing network security across an
organization. Attacks can happen at any layer in the network security layers model, so your
network security hardware, software and policies must be designed to address each area.

Network security typically consists of three different controls: physical, technical and
administrative. Here is a brief description of the different types of network security and how
each control works.

Physical Network Security

Physical security controls are designed to prevent unauthorized personnel from gaining
physical access to network components such as routers, cabling cupboards and so on.
Controlled access, such as locks, biometric authentication and other devices, is essential in
any organization.

Technical Network Security

Technical security controls protect data that is stored on the network or which is in transit
across, into or out of the network. Protection is twofold; it needs to protect data and systems
from unauthorized personnel, and it also needs to protect against malicious activities from
employees.

Administrative Network Security

Administrative security controls consist of security policies and processes that control user
behavior, including how users are authenticated, their level of access and also how IT staff
members implement changes to the infrastructure.

Computer system security policies


Role of the Security Policy in Setting up Protocols

Following are some pointers which help in setting up protocols for the security policy of an
organization.

 Who should have access to the system?


 How should it be configured?
 How to communicate with third parties or systems?
Policies are divided into two categories −

 User policies
 IT policies.
User policies generally define the limits of users with respect to the computer resources in a
workplace. For example, what they are allowed to install on their computers and whether they can
use removable storage.
IT policies, on the other hand, are designed for the IT department, to secure the procedures and
functions of IT fields.
 General Policies − This is the policy which defines the rights of the staff and access level to the
systems. Generally, it is included even in the communication protocol as a preventive measure in
case there are any disasters.
 Server Policies − This defines who should have access to a specific server and with what rights,
which software should be installed, the level of access to the internet, and how the servers should be updated.
 Firewall Access and Configuration Policies − It defines who should have access to the firewall
and what type of access, such as monitoring or rule changes, as well as which ports and services
should be allowed and whether they should be inbound or outbound.
 Backup Policies − It defines who is responsible for backups, what should be backed up,
where it should be backed up, how long it should be kept, and the frequency of the backups.
 VPN Policies − These policies generally go with the firewall policy; they define which users
should have VPN access and with what rights. For site-to-site connections with partners, they
define the partner's level of access to your network and the type of encryption to be set.

Structure of a Security Policy


When you compile a security policy, you should have a basic structure in mind in order to
make it practical. Some of the main points which have to be taken into consideration
are −

 Description of the policy and what it is used for.
 Where should this policy be applied?
 Functions and responsibilities of the employees affected by this policy.
 Procedures that are involved in this policy.
 Consequences if the policy is not compatible with company standards.

Types of Policies

In this section we will see the most important types of policies.
 Permissive Policy − It is a medium-restriction policy where, as administrators, we block only some
well-known malware ports for internet access and take only some exploits into
consideration.

 Prudent Policy − This is a high-restriction policy where everything regarding internet access
is blocked, only a small list of websites is allowed, no extra services may be installed on
computers, and logs are maintained for every user.

 Acceptable Use Policy − This policy regulates the behaviour of users towards a system,
network, or even a webpage, so it states explicitly what a user can and cannot do in a system:
for example, whether they are allowed to share access codes, whether they can share resources, etc.

 User Account Policy − This policy defines what a user should do in order to have or maintain
a user account on a specific system, for example an e-commerce webpage. To create this
policy, you should answer some questions such as −

o Should the password be complex or not? (a small sketch of checking such a rule follows this
list of policies)
o What age should users have?
o What is the maximum number of allowed failed login attempts?
o When should the user be deleted, activated, or blocked?
 Information Protection Policy − This policy regulates access to information, how to process
information, how to store it, and how it should be transferred.

 Remote Access Policy − This policy is mainly for big companies where users and branches
are outside the headquarters. It states what the users should access, when they can work, and
with which software, such as SSH, VPN, or RDP.

 Firewall Management Policy − This policy has to do explicitly with firewall management: which
ports should be blocked, what updates should be applied, how to make changes in the firewall,
and how long the logs should be kept.

 Special Access Policy − This policy is intended to keep people under control and to monitor the
special privileges in their systems and the purpose for which they have them. These employees can be
team leaders, managers, senior managers, system administrators, and other people in similarly
high positions.

 Network Policy − This policy restricts anyone's access to network resources and makes clear
who will access the network and whether that person must be authenticated. It also covers
other aspects, such as who will authorize the new devices to be connected to the network, the
documentation of network changes, web filters and the levels of access, who should have a wireless
connection and the type of authentication, and the validity of a connection session.

 Email Usage Policy − This is one of the most important policies because many users use the
work email for personal purposes as well, and as a result information can leak outside. Key points
of this policy are that employees should know the importance of the system they have the privilege
to use, that they should not open any attachments that look suspicious, and that private and
confidential data should not be sent via unencrypted email.

 Software Security Policy − This policy has to do with the software installed on users' computers
and what they may have. Key points of this policy are that company software should not be given
to third parties, that only whitelisted software should be allowed and no other software installed on
the computer, and that warez and pirated software should not be allowed.

What is Encryption? Explain in brief.


Encryption is a transformation of genuine information such that only authorized parties
know how to read it; so, in the worst-case scenario, if somebody gains access to these files they
would still not be able to understand the message in them.
The basics of encryption date back to ancient times. A good example is the pigeon courier:
kings used to send messages to their commanders on the battlefield in a specific code, so that
when the enemy caught the pigeon, they could not read the message and only the message was
lost; if it arrived, the destination commander had the decryption vocabulary and could decrypt it.

We should mention that encryption can serve good or bad purposes. The bad case is the scenario
in which most malware files are in an encrypted form, so they cannot be read by anyone
except the hacker.

Tools Used to Encrypt Documents


Here we will focus more on practice than on theoretical aspects, for better
understanding. Let us discuss some of the tools used to encrypt documents −
 AxCrypt − It is one of the best open-source file-encryption programs. It can be used on Windows,
Mac OS and Linux as well. This software can be downloaded from
− http://www.axantum.com/AxCrypt/Downloads.aspx
 GnuPG − This is open-source software as well, and it can be integrated with other software
(such as email clients). It can be downloaded from − https://www.gnupg.org/download/index.html
 Windows BitLocker − It is a tool integrated into Windows and its main function is to secure and
encrypt all the hard disk volumes.
 FileVault − It is a tool integrated into Mac OS and it secures as well as encrypts all the hard disk
volumes.
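
As a complement to these tools (not drawn from any of them), here is a minimal sketch of encrypting and decrypting a document programmatically, assuming Python with the third-party cryptography package installed; the file name is just an example.

# Minimal sketch of symmetric encryption with the cryptography package
# (pip install cryptography). Whoever holds the key can decrypt the file.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the secret key
cipher = Fernet(key)

plaintext = b"Confidential report contents"
token = cipher.encrypt(plaintext)  # unreadable without the key

with open("report.enc", "wb") as f:
    f.write(token)

# Later, with the same key, the genuine information is recovered:
with open("report.enc", "rb") as f:
    recovered = cipher.decrypt(f.read())
assert recovered == plaintext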

Encryption Ways of Communication


System administrators should use, and offer to their staff, secure and encrypted channels of
communication; one of them is SSL (Secure Sockets Layer, nowadays superseded by TLS). This
protocol helps to establish a secure and encrypted connection between clients and servers.
Generally, it is used for web servers, mail servers, and FTP servers.
Why do you need this?
Suppose you have an online shop and your clients use their credit cards and personal data
to purchase products from it. That data is at risk of being stolen by simple wiretapping, because
the communication is in clear text; to prevent this, the SSL protocol helps to encrypt the
communication.
How can you see whether the communication is secure?
Browsers give visual cues, such as a lock icon or a green bar, to help visitors know when their
connection is secured.
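
To illustrate what "secured" means at the protocol level, here is a small sketch using Python's standard ssl module (example.com is only a placeholder host): the client opens a TLS-encrypted connection, the certificate chain and host name are verified, and the negotiated protocol and certificate subject can be inspected.

# Minimal sketch: open a TLS-encrypted connection and inspect it.
import socket
import ssl

context = ssl.create_default_context()  # verifies the certificate chain and host name

with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print("Protocol:", tls.version())  # e.g. 'TLSv1.3'
        cert = tls.getpeercert()
        print("Certificate subject:", cert["subject"])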

Another tool used by the system administrator is SSH (Secure Shell). This is a secure
replacement for telnet and other unencrypted utilities like rlogin, rcp and rsh.
It provides a secure, encrypted channel for host-to-host communication over the internet and
reduces the risk of man-in-the-middle attacks. A popular SSH client for Windows, PuTTY, can be
downloaded from − http://www.putty.org/
