
ABSTRACT

Data sharing is an important functionality in cloud storage. This project shows how to securely, efficiently, and flexibly share data with others in cloud storage. It describes new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, yet encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. This project also provides a formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was yet to be known.

1. INTRODUCTION
What is cloud computing?
Cloud computing is the use of computing resources (hardware and
software) that are delivered as a service over a network (typically the Internet). The
name comes from the common use of a cloud-shaped symbol as an abstraction for
the complex infrastructure it contains in system diagrams. Cloud computing
entrusts remote services with a user's data, software and computation. Cloud
computing consists of hardware and software resources made available on the
Internet as managed third-party services. These services typically provide access to
advanced software applications and high-end networks of server computers.

Structure of cloud computing


How Does Cloud Computing Work?
The goal of cloud computing is to apply traditional supercomputing, or high-performance computing power, normally used by military and research facilities to perform tens of trillions of computations per second, to consumer-oriented applications such as financial portfolios, delivering personalized information, providing data storage, or powering large, immersive computer games.
Cloud computing uses networks of large groups of servers, typically running low-cost consumer PC technology, with specialized connections to spread data-processing chores across them. This shared IT infrastructure contains large pools of systems that are linked together. Often, virtualization techniques are used to maximize the power of cloud computing.
Characteristics and Service Models:
The salient characteristics of cloud computing, based on the definitions provided by the National Institute of Standards and Technology (NIST), are outlined below:
 On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically, without requiring human interaction with each service provider.
 Broad network access: Capabilities are available over the network and
accessed through standard mechanisms that promote use by heterogeneous
thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).

 Resource pooling: The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location-independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or data center). Examples of resources include storage, processing, memory, network bandwidth, and virtual machines.

 Rapid elasticity: Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.

 Measured service: Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be managed, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Characteristics of cloud computing


Service Models:
Cloud computing comprises three different service models, namely Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). The three service models or layers are complemented by an end-user layer that encapsulates the end-user perspective on cloud services. The model is shown in the figure below. If a cloud user accesses services on the infrastructure layer, for instance, she can run her own applications on the resources of a cloud infrastructure and remain responsible for the support, maintenance, and security of these applications herself. If she accesses a service on the application layer, these tasks are normally taken care of by the cloud service provider.

Structure of service models

Benefits of cloud computing:


1. Achieve economies of scale – increase volume output or productivity with
fewer people. Your cost per unit, project or product plummets.
2. Reduce spending on technology infrastructure. Maintain easy access to
your information with minimal upfront spending. Pay as you go (weekly,
quarterly or yearly), based on demand.
3. Globalize your workforce on the cheap. People worldwide can access the
cloud, provided they have an Internet connection.
4. Streamline processes. Get more work done in less time with fewer people.
5. Reduce capital costs. There’s no need to spend big money on hardware,
software or licensing fees.
6. Improve accessibility. You have access anytime, anywhere, making your
life so much easier!
7. Monitor projects more effectively. Stay within budget and ahead of
completion cycle times.
8. Less personnel training is needed. It takes fewer people to do more work
on a cloud, with a minimal learning curve on hardware and software issues.
9. Minimize licensing new software. Stretch and grow without the need to
buy expensive software licenses or programs.
10. Improve flexibility. You can change direction without serious “people” or
“financial” issues at stake.
Advantages:
1. Price: Pay only for the resources used.
2. Security: Cloud instances are isolated in the network from other instances
for improved security.
3. Performance: Instances can be added instantly for improved performance.
Clients have access to the total resources of the Cloud’s core hardware.
4. Scalability: Auto-deploy cloud instances when needed.
5. Uptime: Uses multiple servers for maximum redundancies. In case of server
failure, instances can be automatically created on another server.
6. Control: Able to login from any location. Server snapshot and a software
library lets you deploy custom instances.

7. Traffic: Deals with spikes in traffic through the quick deployment of additional instances to handle the load.

2. SYSTEM ANALYSIS
2.1 EXISTING SYSTEM:
Considering data privacy, a traditional way to ensure it is to rely on the server to
enforce the access control after authentication, which means any unexpected
privilege escalation will expose all data. In a shared-tenancy cloud computing
environment, things become even worse.
Regarding availability of files, there are a series of cryptographic schemes
which go as far as allowing a third-party auditor to check the availability of files on
behalf of the data owner without leaking anything about the data, or without compromising the data owner's anonymity. Likewise, cloud users probably will not
hold the strong belief that the cloud server is doing a good job in terms of
confidentiality.
A cryptographic solution, with proven security relying on number-theoretic assumptions, is more desirable whenever the user is not perfectly happy with trusting the security of the VM or the honesty of the technical staff.
DISADVANTAGES OF EXISTING SYSTEM:
1. The costs and complexities involved generally increase with the number of
the decryption keys to be shared.
2. The encryption key and decryption key are different in public-key encryption.
2.2 PROPOSED SYSTEM:

In this project, we study how to make a decryption key more powerful in the sense that it allows decryption of multiple ciphertexts, without increasing its size. Specifically, our problem statement is: “To design an efficient public-key encryption scheme which supports flexible delegation in the sense that any subset of the ciphertexts (produced by the encryption scheme) is decryptable by a constant-size decryption key (generated by the owner of the master-secret key).” We solve this problem by introducing a special type of public-key encryption which we call a key-aggregate cryptosystem (KAC). In KAC, users encrypt a message not only under a public key, but also under an identifier of the ciphertext called its class. That means the ciphertexts are further categorized into different classes. The key owner holds a master secret called the master-secret key, which can be used to extract secret keys for different classes. More importantly, the extracted key can be an aggregate key which is as compact as a secret key for a single class, but aggregates the power of many such keys, i.e., the decryption power for any subset of ciphertext classes.
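The following is a minimal C# sketch of the scheme just described. The interface and type names are illustrative assumptions (the document does not prescribe an API), and the pairing-based mathematics behind each algorithm is deliberately left out; the sketch only shows the shape of a key-aggregate cryptosystem.

// Illustrative KAC skeleton; types and names are assumptions, not the project's actual API.
public class PublicParams { }                        // the public system parameter "param"
public class PublicKey { }
public class MasterSecretKey { }
public class AggregateKey { public int[] Classes; }  // one compact key for a set of classes
public class Ciphertext { public int ClassIndex; public byte[] Data; }

public interface IKeyAggregateCryptosystem
{
    // Setup(1^lambda, n): run once by the data owner for n ciphertext classes.
    PublicParams Setup(int securityLevel, int numberOfClasses);

    // KeyGen: produces the public/master-secret key pair.
    void KeyGen(PublicParams param, out PublicKey pk, out MasterSecretKey msk);

    // Encrypt: anyone may encrypt a message under the public key and a class index.
    Ciphertext Encrypt(PublicParams param, PublicKey pk, int classIndex, byte[] message);

    // Extract: the owner derives one constant-size aggregate key for a set of classes.
    AggregateKey Extract(PublicParams param, MasterSecretKey msk, int[] classIndices);

    // Decrypt: succeeds only if the ciphertext's class is contained in the aggregate key.
    byte[] Decrypt(PublicParams param, AggregateKey aggKey, Ciphertext ct);
}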
ADVANTAGES OF PROPOSED SYSTEM:
1. The extracted key can be an aggregate key which is as compact as a secret key for a single class.
2. The delegation of decryption can be efficiently implemented with the
aggregate key.

3. SYSTEM SPECIFICATION & MODULES

3.1 HARDWARE REQUIREMENTS

PROCESSOR : PENTIUM IV 2.6 GHz, Intel Core 2 Duo.

RAM : 2 GB DDR RAM

MONITOR : 15” COLOR

HARD DISK : 40 GB

CD DRIVE : LG 52X

KEYBOARD : STANDARD 102 KEYS

MOUSE : 3 BUTTONS

3.2. SOFTWARE REQUIREMENTS

Operating system : Windows 7 / 8

IDE : Visual Studio 2010/12/13

Front End : C#.NET, ASP.NET

Database : SQL Server 2008/12

3.3. FUNCTIONAL REQUIREMENTS

A functional requirement defines a function of a software system or its component. A function is described as a set of inputs, the behavior, and outputs. Our system requires a minimum of three systems to achieve this concept.

13
3.4. NON-FUNCTIONAL REQUIREMENTS :

Application provides effective resume search.


FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis, the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.
Three key considerations involved in the feasibility analysis are
 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will
have on the organization. The amount of fund that the company can pour into the
research and development of the system is limited. The expenditures must be
justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or null changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to offer constructive criticism, which is welcomed, as he is the final user of the system.
IMPLEMENTATION

3.5 MODULES:
1. System Model
2. Key Generation
3. Encryption
4. Aggregate Key Transfer
MODULES DESCRIPTION:
System Model:
 Data Owner (Alice): This module is executed by the data owner to set up an account on an untrusted server. On input a security level parameter 1^λ and the number of ciphertext classes n (i.e., a class index should be an integer bounded by 1 and n), it outputs the public system parameter param, which is omitted from the input of the other algorithms for brevity.
 Network Storage: With our solution, Alice can simply send Bob a single aggregate key via a secure e-mail. Bob can download the encrypted photos from Alice’s Dropbox space and then use this aggregate key to decrypt these encrypted photos. Here, the network storage is an untrusted third-party server.

Key Generation
 Public-key cryptography, also known as asymmetric cryptography, is a class
of cryptographic algorithms which requires two separate keys, one of which
is secret (or private) and one of which is public. Although different, the two
parts of this key pair are mathematically linked.
 The public key is used to encrypt plaintext whereas the private key is used to decrypt ciphertext. The data owner runs this step to randomly generate a public/master-secret key pair.
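As a minimal illustration of this step in C#, the sketch below generates an asymmetric key pair with the .NET RSACryptoServiceProvider class. RSA is only a stand-in assumption used to show the idea of a public/private pair; the project's actual KeyGen algorithm is the pairing-based one described above.

using System;
using System.Security.Cryptography;

// Illustrative only: RSA stands in for the scheme's KeyGen step.
class KeyPairDemo
{
    static void Main()
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            // The public part may be published; the full pair plays the role of the
            // owner's secret and must be kept private.
            string publicKeyXml = rsa.ToXmlString(false);   // public key only
            string keyPairXml   = rsa.ToXmlString(true);    // public + private key
            Console.WriteLine("Public key length: " + publicKeyXml.Length);
            Console.WriteLine("Key pair length:   " + keyPairXml.Length);
        }
    }
}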
Encryption
 Encryption keys come in two flavours: symmetric key or asymmetric (public) key. Using symmetric encryption, when Alice wants the data to be originated from a third party, she has to give that party her secret key; obviously, this is not always desirable.
 By contrast, the encryption key and decryption key are different in public
key encryption. The use of public-key encryption gives more flexibility for
our applications.
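The short C# sketch below illustrates the public-key idea from the bullet above: anyone holding the public key can encrypt, but only the private-key holder can decrypt. RSA with OAEP padding is an illustrative assumption; the project's Encrypt algorithm additionally takes a ciphertext class.

using System;
using System.Security.Cryptography;
using System.Text;

class PublicKeyEncryptionDemo
{
    static void Main()
    {
        using (var rsa = new RSACryptoServiceProvider(2048))
        {
            byte[] plaintext  = Encoding.UTF8.GetBytes("shared photo metadata");
            byte[] ciphertext = rsa.Encrypt(plaintext, true);   // true = OAEP padding
            byte[] recovered  = rsa.Decrypt(ciphertext, true);  // requires the private key
            Console.WriteLine(Encoding.UTF8.GetString(recovered));
        }
    }
}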
Aggregate Key Transfer:
 A key-aggregate encryption scheme consists of five polynomial-time
algorithms as follows. The data owner establishes the public system
parameter via Setup and generates a public/master-secret key pair via
KeyGen.
 Messages can be encrypted via Encrypt by anyone who also decides what
ciphertext class is associated with the plaintext message to be encrypted. The
data owner can use the master-secret to generate an aggregate decryption
key for a set of ciphertext classes via Extract.
 The generated keys can be passed to delegates securely (via secure e-mails or secure devices). Finally, any user with an aggregate key can decrypt any ciphertext via Decrypt, provided that the ciphertext’s class is contained in the aggregate key.
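Below is a hedged usage sketch of these five algorithms, reusing the IKeyAggregateCryptosystem interface sketched in the proposed-system section; the method and variable names (ShareWithDelegate, photos, bobKey) are hypothetical and only illustrate the order of the calls.

// Hypothetical flow: Alice encrypts photos, extracts one aggregate key for classes 2 and 3,
// and the delegate uses that single key to decrypt only those classes.
static void ShareWithDelegate(IKeyAggregateCryptosystem kac, byte[][] photos)
{
    // Data owner (Alice): one-time Setup and KeyGen.
    PublicParams param = kac.Setup(securityLevel: 128, numberOfClasses: photos.Length);
    PublicKey pk; MasterSecretKey msk;
    kac.KeyGen(param, out pk, out msk);

    // Encrypt: each photo goes under its own ciphertext class.
    var stored = new Ciphertext[photos.Length];
    for (int i = 0; i < photos.Length; i++)
        stored[i] = kac.Encrypt(param, pk, classIndex: i + 1, message: photos[i]);

    // Extract: one constant-size aggregate key covering classes 2 and 3 only.
    AggregateKey bobKey = kac.Extract(param, msk, new[] { 2, 3 });

    // Delegate (Bob): Decrypt works for classes 2 and 3, and fails outside the set.
    byte[] photo2 = kac.Decrypt(param, bobKey, stored[1]);
}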

4. SOFTWARE DESCRIPTION

GENERAL:

This chapter is about the software language and the tools used in the
development of the project. The platform used here is .NET. The Primary
languages are C#, VB, and J#. In this project C# is chosen for implementation.
4.1 DOTNET
4.1.1 INTRODUCTION TO DOTNET
Microsoft .NET is a set of Microsoft software technologies for rapidly building
and integrating XML Web services, Microsoft Windows-based applications, and
Web solutions. The .NET Framework is a language-neutral platform for writing
programs that can easily and securely interoperate. There’s no language barrier
with .NET: there are numerous languages available to the developer including
Managed C++, C#, Visual Basic and Java Script. The .NET framework provides
the foundation for components to interact seamlessly, whether locally or remotely
on different platforms. It standardizes common data types and communications
protocols so that components created in different languages can easily interoperate.
“.NET” is also the collective name given to various software components built
upon the .NET platform. These will be both products (Visual Studio.NET and
Windows.NET Server, for instance) and services (like Passport, .NET My Services,
and so on).
THE .NET FRAMEWORK
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the
environment within which programs run.
The most important features are
 Conversion from a low-level assembler-style language, called
Intermediate Language (IL), into code native to the platform being executed
on.
 Memory management, notably including garbage collection.
 Checking and enforcing security restrictions on the running code.
 Loading and executing programs, with version control and other such features.
The following features of the .NET Framework are also worth describing:
4.1.2 MANAGED CODE
Managed code is code that targets .NET and contains certain extra information - “metadata” - to describe itself. Whilst both managed and unmanaged code can run in the runtime, only managed code contains the information that allows the CLR to guarantee, for instance, safe execution and interoperability.
4.1.3 MANAGED DATA
With managed code comes managed data. The CLR provides memory allocation and deallocation facilities, and garbage collection. Some .NET languages use
Managed Data by default, such as C#, Visual Basic.NET and JScript.NET, whereas
others, namely C++, do not. Targeting CLR can, depending on the language you’re
using, impose certain constraints on the features available. As with managed and
unmanaged code, one can have both managed and unmanaged data in .NET
applications - data that doesn’t get garbage collected but instead is looked after by
unmanaged code.

4.1.4 COMMON TYPE SYSTEM (CTS)


The CLR uses something called the Common Type System (CTS) to strictly
enforce type-safety. This ensures that all classes are compatible with each other, by
describing types in a common way. The CTS defines how types work within the
runtime, which enables types in one language to interoperate with types in another
language, including cross-language exception handling. As well as ensuring that
types are only used in appropriate ways, the runtime also ensures that code doesn’t
attempt to access memory that hasn’t been allocated to it.
4.1.5 COMMON LANGUAGE SPECIFICATION
The CLR provides built-in support for language interoperability. To ensure that
you can develop managed code that can be fully used by developers using any
programming language, a set of language features and rules for using them called
the Common Language Specification (CLS) has been defined. Components that
follow these rules and expose only CLS features are considered CLS-compliant.
4.1.6 THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7000 types.
The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object. As well as objects, there are value types. Value types can be allocated on
the stack, which can provide useful flexibility. There are also efficient means of
converting value types to object types if and when necessary.
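A small C# example of the point above, showing a stack-allocated value type, a reference type from the System namespace, and the built-in conversion (boxing) of a value type to an object:

using System;

class TypesDemo
{
    static void Main()
    {
        int counter = 42;            // value type, typically allocated on the stack
        object boxed = counter;      // boxing: the value is converted to an object
        int unboxed = (int)boxed;    // unboxing back to the value type

        string text = "cloud";       // reference type; String lives under System
        Console.WriteLine("{0} {1} {2}", unboxed, boxed.GetType(), text.ToUpper());
    }
}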
LANGUAGES SUPPORTED BY .NET
The multi-language capability of the .NET Framework and Visual Studio .NET
enables developers to use their existing programming skills to build all types of
applications and XML Web services. The .NET framework supports new versions
of Microsoft’s old favorites Visual Basic and C++ (as VB.NET and Managed C++), but there are also a number of new additions to the family.
Visual Basic .NET has been updated to include many new and improved
language features that make it a powerful object-oriented programming language.
These features include inheritance, interfaces, and overloading, among others.
Visual Basic also now supports structured exception handling, custom attributes
and also supports multi-threading. Visual Basic .NET is also CLS compliant, which
means that any CLS-compliant language can use the classes, objects, and
components you create in Visual Basic .NET.

C# is Microsoft’s new language. It’s a C-style language that is essentially “C++
for Rapid Application Development”. Unlike other languages, its specification is
just the grammar of the language. It has no standard library of its own, and instead
has been designed with the intention of using the .NET libraries as its own.
Microsoft Visual J# .NET provides the easiest transition for Java-language
developers into the world of XML Web Services and dramatically improves the
interoperability of Java-language programs with existing software written in a
variety of other programming languages.
Active State has created Visual Perl and Visual Python, which enable .NET-
aware applications to be built in either Perl or Python. Both products can be
integrated into the Visual Studio .NET environment. Visual Perl includes support
for Active State’s Perl Dev Kit.
The .NET Framework stack, from top to bottom: ASP.NET (Web Forms, XML Web Services) and Windows Forms; the .NET Framework Base Class Libraries; the Common Language Runtime; and the Operating System.

FEATURES OF C#

1. C# is a simple, modern, object oriented language derived from C++ and Java.
2. It aims to combine the high productivity of Visual Basic and the raw power of
C++.
3. It is a part of Microsoft Visual Studio 7.0.
4. Visual Studio supports VB, VC++, C++, VBScript, and JScript. All of these languages provide access to the Microsoft .NET platform.
5. .NET includes a common execution engine and a rich class library.
6. Microsoft's equivalent of the JVM is the Common Language Runtime (CLR).
7. The CLR accommodates more than one language, such as C#, VB.NET, JScript, ASP.NET, and C++.
8. Source code ---> Intermediate Language (IL) code ---> (JIT compiler) ---> Native code.
9. The classes and data types are common to all of the .NET languages.
10. We may develop console applications, Windows applications, and Web applications using C#.
11. In C#, Microsoft has taken care of C++ problems such as memory management, pointers, etc.
12. It supports garbage collection, automatic memory management, and more.
MAIN FEATURES OF C#
1. Pointers are missing in C#.
2. Unsafe operations such as direct memory manipulation are not allowed.
3. In C# there is no usage of "::" or "->" operators.
4. Since it`s on .NET, it inherits the features of automatic memory management
and garbage collection.
5. Varying ranges of the primitive types like Integer, Floats etc.
6. Integer values of 0 and 1 are no longer accepted as Boolean values. Boolean values are pure true or false values in C#, so there are no more errors from confusing the “=” operator with the “==” operator: “==” is used for comparison and “=” is used for assignment.
TYPE SAFE :
1. In C# we cannot perform unsafe casts like convert double to a Boolean.
2. Value types (primitive types) are initialized to zeros and reference types (objects and classes) are initialized to null by the compiler automatically.
3. Arrays are zero-based indexed and are bounds checked.
4. Overflow of types can be checked.
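The following C# fragment illustrates the type-safety points listed above: default initialization to zero/null, bounds-checked arrays, and overflow checking with the checked keyword.

using System;

class TypeSafetyDemo
{
    static int counterField;        // value-type field: initialized to 0 automatically
    static string nameField;        // reference-type field: initialized to null

    static void Main()
    {
        int[] scores = new int[3];  // zero-based, bounds-checked array
        try
        {
            scores[3] = 1;          // out of range: throws instead of corrupting memory
        }
        catch (IndexOutOfRangeException) { Console.WriteLine("bounds checked"); }

        try
        {
            int big = int.MaxValue;
            checked { big = big + 1; }   // overflow is detected, not silently wrapped
        }
        catch (OverflowException) { Console.WriteLine("overflow checked"); }

        Console.WriteLine(counterField + " " + (nameField == null));
    }
}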
INTEROPERABILITY:
1. C# includes native support for COM and Windows-based applications.
2. Allowing restricted use of native pointers.
3. Users no longer have to explicitly implement the IUnknown and other COM interfaces; those features are built in.
4. C# allows users to use pointers in unsafe code blocks to work with legacy code.
Objectives of .NET Framework
1) Platform Independence
2) Language Independence
3) Language Interoperability
4) Security
5) Database Connectivity
6) Globalization of Applications
1) Platform Independence: Since DLL or EXE files are executable in any operating system with the help of the CLR (Common Language Runtime), .NET is called platform independent.

2) Language Independence: Since .NET application logic can be developed in any .NET Framework-compatible language, .NET is called language independent.

Specification in ASP.NET: It provides a set of rules to be followed while integrating with the language.

3) Language Interoperability: The code written in one language can be used from an application developed using another language.

4) Security: .NET applications attain a high level of security.

5) Database Connectivity: A new database connectivity model to connect to databases.

6) Globalization of Applications: Designing applications to support multiple languages and cultures.

4.2 COMPONENTS OF .NET FRAMEWORK

The .NET Framework is an integral Windows component that supports building and running the next generation of applications and XML Web services. The .NET Framework is designed to fulfill the following objectives:
 To provide a consistent object-oriented programming environment whether
object code is stored and executed locally, executed locally but Internet-
distributed, or executed remotely.
 To provide a code-execution environment that minimizes software
deployment and versioning conflicts.
 To provide a code-execution environment that promotes safe execution of
code, including code created by an unknown or semi-trusted third party.
 To provide a code-execution environment that eliminates the performance
problems of scripted or interpreted environments.
 To make the developer experience consistent across widely varying types of
applications, such as Windows-based applications and Web-based
applications.
 To build all communication on industry standards to ensure that code based
on the .NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language
runtime and the .NET Framework class library. In fact, the concept of code
management is a fundamental principle of the runtime. Code that targets the
runtime is known as managed code, while code that does not target the runtime is
known as unmanaged code. The class library, the other main component of the
.NET Framework, is a comprehensive, object-oriented collection of reusable types
that you can use to develop applications ranging from traditional command-line or
graphical user interface (GUI) applications to applications based on the latest
innovations provided by ASP.NET, such as Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the
common language runtime into their processes and initiate the execution of
managed code, thereby creating a software environment that can exploit both
managed and unmanaged features. The .NET Framework not only provides several
runtime hosts, but also supports the development of third-party runtime hosts.
For example, ASP.NET hosts the runtime to provide a scalable, server-side
environment for managed code. ASP.NET works directly with the runtime to
enable ASP.NET applications and XML Web services, both of which are discussed
later in this topic.

Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and isolated file storage.

COMMON LANGUAGE RUNTIME:

The common language runtime manages memory, thread execution, code
execution, code safety verification, compilation, and other system services. These
features are intrinsic to the managed code that runs on the common language
runtime. With regards to security, managed components are awarded varying
degrees of trust, depending on a number of factors that include their origin (such as
the Internet, enterprise network, or local computer). This means that a managed
component might or might not be able to perform file-access operations, registry-
access operations, or other sensitive functions; even if it is being used in the same
active application. The runtime enforces code access security. For example, users
can trust that an executable embedded in a Web page can play an animation on
screen or sing a song, but cannot access their personal data, file system, or
network. The security features of the runtime thus enable legitimate Internet-
deployed software to be exceptionally feature rich.
The runtime also enforces code robustness by implementing a strict type-and-
code-verification infrastructure called the common type system (CTS). The CTS
ensures that all managed code is self-describing. The various Microsoft and third-
party language compilers generate managed code that conforms to the CTS.
BASE CLASS LIBRARY:
The .NET Framework class library is a collection of reusable types that tightly
integrate with the common language runtime. The class library is object oriented,
providing types from which your own managed code can derive functionality. This
not only makes the .NET Framework types easy to use, but also reduces the time
associated with learning new features of the .NET Framework. In addition, third-
party components can integrate seamlessly with classes in the .NET Framework.
For example, the .NET Framework collection classes implement a set of
interfaces that you can use to develop your own collection classes. Your collection
classes will blend seamlessly with the classes in the .NET Framework.
As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes
types that support a variety of specialized development scenarios. For example,
you can use the .NET Framework to develop the following types of applications
and services:
 Console applications.
 Windows GUI applications (Windows Forms).
 ASP.NET applications.
 XML Web services.
 Windows services.
For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes.

FEATURES OF THE COMMON LANGUAGE RUNTIME:

The Common Language Runtime is the heart of the .NET Framework. It actually manages the code during execution. The code that runs under the CLR is called “managed code”. Code that is executed under the .NET runtime gets benefits like cross-language inheritance, cross-language exception handling, enhanced security, versioning and development support, a simplified model for component interaction, and debugging and profiling services.
FEATURES PROVIDED BY CLR:

Automatic memory management: - The CLR provides the garbage collection feature for managing the lifetime of objects. This relieves the programmer of memory management tasks.
Standard Type System: - The CLR implements a formal specification called the Common Type System (CTS). The CTS is an important set of rules that ensures that objects written in different languages can interact with each other.
Language Interoperability: - It is the ability of an application to interact with
another application written in a different programming language. Language
interoperability helps maximum code reuse. The CLR provides support for
language interoperability by specifying and enforcing CTS and by providing
metadata.
Platform Independence: - The compiler compiles code into an intermediate language, which is CPU-independent. This means that the code can be executed on any platform that supports the .NET CLR.
Security Management: - In the .NET platform, security is achieved through the Code Access Security (CAS) model. In this model, the CLR enforces restrictions on managed code through objects called “permissions”. The CLR allows the code to perform only those tasks for which it has permissions. In other words, the CAS model specifies what the code can access instead of specifying who can access resources.
Type Safety: - This feature ensures that objects are always accessed in compatible ways. Therefore the CLR will prohibit code from assigning a 10-byte value to an object that occupies 8 bytes.
BENEFITS OF CLR:
 Performance improvement.
 The ability to easily use components developed in other languages.
 Extensible types provided by the class library.
 New language features such as inheritance, interfaces, etc.
 Complete object-oriented design.
 Very strong type safety.
 A good blend of Visual Basic simplicity and C++ power.
 Syntax and keywords similar to C and C++.
 Use of delegates rather than function pointers for increased type safety and security.
4.3ASP.NET OVERVIEW:
ASP.Net is a web development platform which provides a programming model, a comprehensive software infrastructure and various services required to build robust web applications for PCs as well as mobile devices.
ASP.Net works on top of the HTTP protocol and uses HTTP commands and policies to set up two-way browser-to-server communication and cooperation.
ASP.Net is a part of the Microsoft .Net platform. ASP.Net applications are compiled code, written using the extensible and reusable components or objects present in the .Net framework. This code can use the entire hierarchy of classes in the .Net framework.
The ASP.Net application codes could be written in either of the following
languages:
 C#
 Visual Basic .Net
 Jscript
 J#

ASP.Net is used to produce interactive, data-driven web applications over the
internet. It consists of a large number of controls like text boxes, buttons and labels
for assembling, configuring and manipulating code to create HTML pages.
ASP.Net Web Forms Model:
ASP.Net web forms extend the event-driven model of interaction to the web
applications. The browser submits a web form to the web server and the server
returns a full markup page or HTML page in response.
All client side user activities are forwarded to the server for stateful processing.
The server processes the output of the client actions and triggers the reactions.
Now, HTTP is a stateless protocol. ASP.Net framework helps in storing the
information regarding the state of the application, which consists of:

 Page state
 Session state
The page state is the state of the client, i.e., the content of the various input fields in the web form. The session state is the collective information obtained from the various pages the user visited and worked with, i.e., the overall session state. To clarify the concept, let us take up the example of a shopping cart.
The ASP.Net runtime carries the page state to and from the server across page requests while generating the ASP.Net runtime code, and incorporates the state of the server-side components in hidden fields.
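As a hedged illustration of the shopping-cart example above, the code-behind fragment below keeps a per-visit item count in session state and a per-page value in view state; the control name AddToCart and the CartCount key are assumptions made for the example.

using System;
using System.Web.UI;

public partial class ShoppingCart : Page
{
    protected void AddToCart_Click(object sender, EventArgs e)
    {
        // Session state: survives navigation between pages in the same visit.
        int count = Session["CartCount"] == null ? 0 : (int)Session["CartCount"];
        Session["CartCount"] = count + 1;

        // Page (view) state: round-tripped in a hidden field of this page only.
        ViewState["LastItem"] = "USB smart card";
    }
}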
ASP.Net Component Model:
The ASP.Net component model provides various building blocks of ASP.Net pages.
Basically it is an object model, which describes:
 Server side counterparts of almost all HTML elements or tags, like
<form> and <input>.
 Server controls, which help in developing complex user-interface for
example the Calendar control or the Gridview control.

ASP.Net is a technology, which works on the .Net framework that contains all
web-related functionalities. The .Net framework is made of an object-oriented
hierarchy. An ASP.Net web application is made of pages. When a user requests an
ASP.Net page, the IIS delegates the processing of the page to the ASP.Net runtime
system.
4.3.1 ASP.NET ARCHITECTURE
ASP.NET is based on the fundamental architecture of the .NET Framework. Visual Studio provides a uniform way to combine the various features of this architecture.

The architecture is explained from bottom to top in the following discussion.

 At the bottom of the architecture is the Common Language Runtime. The .NET Framework common language runtime resides on top of the operating system services. The common language runtime loads and executes code that targets the runtime. This code is therefore called managed code. The runtime gives you, for example, the ability for cross-language integration.
 .NET Framework provides a rich set of class libraries. These include base
classes, like networking and input/output classes, a data class library for
data access, and classes for use by programming tools, such as debugging
services. All of them are brought together by the Services Framework,
which sits on top of the common language runtime.

 ADO.NET is Microsoft’s ActiveX Data Objects (ADO) model for the .NET Framework. ADO.NET is not simply the migration of the popular ADO model to the managed environment but a completely new paradigm for data access and manipulation.

 ADO.NET is intended specifically for developing web applications. This is evident from its two major design principles (a short ADO.NET sketch follows at the end of this list):

 Disconnected Datasets—In ADO.NET, almost all data manipulation is done outside the context of an open database connection.

 Effortless Data Exchange with XML—Datasets can converse in the universal data format of the Web, namely XML.

 The Web application model, presented as ASP.NET, includes Web Forms and Web Services.

 ASP.NET comes with built-in Web Forms controls, which are responsible
for generating the user interface. They mirror typical HTML widgets like
text boxes or buttons. One of the obvious themes of .NET is unification and
interoperability between various programming languages. In other words
we cannot have languages running around creating their own extensions
and their own fancy new data types. CLS is the collection of the rules and
constraints that every language (that seeks to achieve .NET compatibility)
must follow.

 The CLR and the .NET Framework in general, however, are designed in such a way that code written in one language can seamlessly be used by another language. Hence ASP.NET can be programmed in any of the .NET-compatible languages, whether VB.NET, C#, Managed C++ or JScript.NET.
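As a small illustration of the disconnected ADO.NET model mentioned in the list above, the sketch below fills a DataSet through a SqlDataAdapter (the connection is opened and closed by the adapter itself) and then writes the result out as XML. The connection string and the Files table are assumptions for the example.

using System.Data;
using System.Data.SqlClient;

class AdoNetDemo
{
    static void Main()
    {
        // Hypothetical connection string and table, shown only for illustration.
        string connectionString = "Data Source=.;Initial Catalog=KacDemo;Integrated Security=True";
        var dataSet = new DataSet("SharedFiles");

        using (var adapter = new SqlDataAdapter("SELECT FileId, ClassIndex FROM dbo.Files", connectionString))
        {
            adapter.Fill(dataSet, "Files");   // connection opened and closed internally
        }

        // Disconnected work: no open connection is needed from this point on.
        dataSet.WriteXml("files.xml");        // effortless data exchange as XML
    }
}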
4.3.2 ASP.NET PAGE LIFE CYCLE
ASP.Net life cycle specifies how:
 ASP.Net processes pages to produce dynamic output
 The application and its pages are instantiated and processed
 ASP.Net compiles the pages dynamically
The ASP.Net life cycle could be divided into two groups:
 Application Life Cycle
 Page Life Cycle
ASP.Net Application Life Cycle:
 User makes a request for accessing application resource, a page. Browser
sends this request to the web server.
 A unified pipeline receives the first request and the following events take
place:
 An object of the ApplicationManager class is created.
 An object of the HostingEnvironment class is created to provide
information regarding the resources.
 Top level items in the application are compiled.
 Response objects are created. The application objects HttpContext, HttpRequest and HttpResponse are created and initialized.
ASP.Net Page Life Cycle:
When a page is requested, it is loaded into the server memory, processed and
sent to the browser. Then it is unloaded from the memory. The Page class creates a
hierarchical tree of all the controls on the page. All the components on the page,
except the directives, are part of this control tree. You can see the control tree by adding trace=“true” to the Page directive. We will cover page directives and tracing under 'directives' and 'error handling'.
The page life cycle phases are:
 Initialization
 Instantiation of the controls on the page
 Restoration and maintenance of the state
 Execution of the event handler codes
Understanding the page cycle helps in writing codes for making some specific
thing happen at any stage of the page life cycle. It also helps in writing custom
controls and initializing them at right time, populate their properties with view-
state data and run control behavior code.
Following are the different stages of an ASP.Net page:
 Page request. When ASP.Net gets a page request, it decides whether to parse and compile the page or serve a cached version of the page; accordingly, the response is sent.
 Starting of page life cycle. At this stage, the Request and Response objects are set. If the request is an old request or postback, the IsPostBack property of the page is set to true. The UICulture property of the page is also set.
 Page initialization. At this stage, the controls on the page are assigned unique IDs by setting the UniqueID property and themes are applied. If the request is a postback, the postback data is loaded and the control properties are restored to the view-state values.
 Page load. At this stage, control properties are set using the view state
and control state values.
 Validation. Validate method of the validation control is called and if it
runs successfully, the IsValid property of the page is set to true.

 Postback event handling. If the request is a postback (old request), the
related event handler is called.
 Page rendering. At this stage, view state for the page and all controls are
saved. The page calls the Render method for each control and the output
of rendering is written to the OutputStream class of the Page's Response
property.
 Unload. The rendered page is sent to the client and page properties, such
as Response and Request are unloaded and all cleanup done.
ASP.Net Page Life Cycle Events:
 At each stage of the page life cycle, the page raises some events, which
could be coded. An event handler is basically a function or subroutine,
bound to the event, using declarative attributes like onclick or handle.
 PreInit. PreInit is the first event in page life cycle. It checks the
IsPostBack property and determines whether the page is a postback. It sets
the themes and master pages, creates dynamic controls and gets and sets
profile.
 Init. The Init event initializes the control properties and the control tree is built. This event can be handled by overloading the OnInit method or creating a Page_Init handler.
 InitComplete . InitComplete event allows tracking of view state. All the
controls turn on view-state tracking.
 LoadViewState. LoadViewState event allows loading view state
information into the controls.
 LoadPostData. During this phase, the contents of all the input fields
defined with the <form> tag are processed.

 PreLoad. PreLoad occurs before the postback data is loaded in the controls. This event can be handled by overloading the OnPreLoad method or creating a Page_PreLoad handler.
 Load. The Load event is raised for the page first and then recursively for all child controls. The controls in the control tree are created. This event can be handled by overloading the OnLoad method or creating a Page_Load handler.
 LoadComplete. The loading process is completed, control event handlers
are run and page validation takes place. This event can be handled by
overloading the OnLoadComplete method or creating a
Page_LoadComplete handler.
 PreRender. The PreRender event occurs just before the output is
rendered. By handling this event, pages and controls can perform any
updates before the output is rendered.
 PreRenderComplete. As the PreRender event is recursively fired for all
child controls, this event ensures the completion of the pre-rendering
phase.
 SaveStateComplete. State of control on the page is saved.
Personalization, control state and view state information is saved.
 Unload. The Unload phase is the last phase of the page life cycle. It raises
the Unload event for all controls recursively and lastly for the page itself.
Final cleanup is done and all resources and references, such as database
connections, are freed. This event can be handled by overriding the OnUnload method or creating a Page_UnLoad handler.
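A minimal code-behind sketch of two of these events, following the Page_Xxx handler convention; the StatusLabel control is an assumption that would normally be declared in the page's markup.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class LifeCycleDemo : Page
{
    protected Label StatusLabel;   // normally generated from the .aspx markup

    protected void Page_Init(object sender, EventArgs e)
    {
        // Init: the control tree has been built; a good place to add dynamic controls.
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        // Load: view state has been restored; skip one-time work on postbacks.
        if (!IsPostBack)
        {
            StatusLabel.Text = "First request at " + DateTime.Now;
        }
    }
}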
An ASP.Net page is made of a number of server controls along with HTML controls, text and images. Sensitive data from the page and the states of the different controls on the page are stored in hidden fields and form the context of that page request.
ASP.Net runtime controls all association between a page instance and its state. An
ASP.Net page is an object of the Page Class or inherited from it.
An ASP.Net page is also a server side file saved with the .aspx extension. It is
modular in nature and can be divided into the following core sections:
 Page directives
 Code Section
 Page Layout
Page Directives:
The page directives set up the environments for the page to run. The @Page
directive defines page-specific attributes used by the ASP.Net page parser and
compiler. Page directives specify how the page should be processed, and which
assumptions are to be taken about the page.
Code Section:
The code section provides the handlers for the page and control events along
with other functions required. We mentioned that ASP.Net follows an object model. These objects raise events when something happens on the user interface, like when a user clicks a button or moves the cursor. How should these events be handled? That code is provided in the event handlers of the controls, which are nothing but functions bound to the controls.
The code section or the code behind file provides all these event handler
routines, and other functions used by the developer. The page code could be
precompiled and deployed in the form of a binary assembly.
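A hedged example of such an event handler in a code-behind file: a function bound to a server control's Click event. The control names are assumptions and would normally be declared in the page's markup.

using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public partial class UploadPage : Page
{
    protected TextBox FileNameBox;   // assumed controls, normally declared in markup
    protected Label ResultLabel;

    protected void UploadButton_Click(object sender, EventArgs e)
    {
        // Runs on the server when the user clicks the button on the rendered page.
        ResultLabel.Text = "Queued for encryption: " + FileNameBox.Text;
    }
}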
Page Layout:
 The page layout provides the interface of the page. It contains the server controls, text, inline JavaScript and HTML tags.
4.4 SQL SERVER 2008:
SQL Server 2005 will soon be reaching its three-year mark, which in terms of software life-cycle translates into fairly advanced maturity. While this is still far from retirement age, the name of its successor, SQL Server 2008, suggests that it might be time for you to start looking into what the new generation has to offer. The release of SQL Server 2008 has already been postponed, but its current Beta 2 implementation already gives a good picture of its capabilities. In this section, we will look into the functional highlights of the new incarnation of the Microsoft database management system, focusing on those that are likely to remain unchanged in the final product.
 SQL Server Standard Edition - offering the most diverse set of features and intended for the majority of clients.
 SQL Server 2008 Express Edition - serving as the replacement for Microsoft Data Engine (MSDE) and available for download. Like its predecessor, it was designed with developers in mind; however, unlike the previous version, it also includes a Web-based management interface.
 SQL Server 2008 Mobile Edition - as a successor to SQL Server 2008 Windows CE Edition, it is intended for Windows Mobile-based devices, such as Tablet PCs, Pocket PCs, and smart phones.

4.4.1 FEATURES OF SQL SERVER:


Microsoft SQL Server 2008:
The following is a list of the new features provided in SQL Server 2008:

 Database mirroring
 Database snapshots
 CLR integration
 Service Broker
 Database Mail
 User-defined functions
 Indexed views
 Distributed partitioned views
 INSTEAD OF and AFTER triggers
 New data types
 Cascading RI constraints

 Multiple SQL Server instances

 XML support

 Log shipping
Database mirroring:
Database mirroring is a new high-availability feature in SQL Server 2008. It's
similar to server clustering in that failover is achieved by the use of a stand-by
server; the difference is that the failover is at the database level rather than the
server level. The primary database continuously sends transaction logs to the
backup database on a separate SQL Server instance. A third SQL Server instance is
then used as a witness database to monitor the interaction between the primary and
the mirror databases.

Database snapshots:
A database snapshot is essentially an instant read-only copy of a database, and it
is a great candidate for any type of reporting solution for your company. In
addition to being a great reporting tool, you can revert control from your primary

40
database to your snapshot database in the event of an error. The only data loss
would be from the point of creation of the database snapshot to the event of failure.
CLR integration:
With SQL Server 2008, you now have the ability to create custom .NET objects
with the database engine. For example, stored procedures, triggers, and functions
can now be created using familiar .NET languages such as VB and C#. Exposing
this functionality gives you tools that you never had access to before such as
regular expressions.
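As a hedged sketch of this capability, the C# fragment below defines a CLR user-defined function that exposes regular-expression matching to T-SQL. The deployment steps (CREATE ASSEMBLY / CREATE FUNCTION) and the function name are assumptions and are omitted here.

using System.Data.SqlTypes;
using System.Text.RegularExpressions;
using Microsoft.SqlServer.Server;

public static class RegexFunctions
{
    // Callable from T-SQL once the assembly is registered in the database.
    [SqlFunction(IsDeterministic = true)]
    public static SqlBoolean RegexIsMatch(SqlString input, SqlString pattern)
    {
        if (input.IsNull || pattern.IsNull)
            return SqlBoolean.Null;
        return Regex.IsMatch(input.Value, pattern.Value);
    }
}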
Service Broker:
This feature gives you the ability to create asynchronous, message-based
applications in the database entirely through TSQL. The database engine
guarantees message delivery, message order consistency, and handles message
grouping. In addition, Service Broker gives you the ability to send messages
between different SQL Server instances. Service Broker is also used in several other features in SQL Server 2008.
Database Mail:
Database Mail, the eventual successor to SQL Mail, is a greatly enhanced e-mail solution available in the database engine. Database Mail uses standard SMTP to send e-mail messages. These messages may contain query results and attachments (which can be governed by the DBA), and the feature is fully cluster-aware. In addition, the e-mail process runs outside of the database engine space, which means that messages can continue to be queued even when the database engine has stopped.

User-Defined Functions:
SQL Server has always provided the ability to store and execute SQL code
routines via stored procedures. In addition, SQL Server has always supplied a
number of built-in functions. Functions can be used almost anywhere an
expression can be specified in a query. This was one of the shortcomings of stored
procedures—they couldn't be used inline in queries, in select lists, where clauses, and so on. Perhaps you want to write a routine to calculate the last business day of the month. If only you could write your own function that you could use directly in the query, just like a system function. In SQL Server 2008, you can.
SQL Server 2008 introduces the long-awaited support for user-defined
functions. User-defined functions can take zero or more input parameters and
return a single value—either a scalar value like the system-defined functions, or a
table result. Table-valued functions can be used anywhere table or view
expressions can be used in queries, and they can perform more complex logic than
is allowed in a view.
Indexed Views:
Views are often used to simplify complex queries, and they can contain joins
and aggregate functions. However, in the past, queries against views were resolved
to queries against the underlying base tables, and any aggregates were recalculated
each time you ran a query against the view. In SQL Server 2008 Enterprise or
Developer Edition, you can define indexes on views to improve query performance
against the view. When creating an index on a view, the result set of the view is
stored and indexed in the database. Existing applications can take advantage of the
performance improvements without needing to be modified.

Indexed views can improve performance for the following types of queries:
 Joins and aggregations that process many rows.
 Join and aggregation operations that are performed frequently within many
queries.

 Decision support queries that rely on summarized, aggregated data that is
infrequently updated.
Distributed Partitioned Views:
SQL Server 7.0 provided the ability to create partitioned views using the
UNION ALL statement in a view definition. It was limited, however, in that all the
tables had to reside within the same SQL Server where the view was defined. SQL
Server 2008 expands the ability to create partitioned views by allowing you to
horizontally partition tables across multiple SQL Servers. The feature helps you
scale out one database server to multiple database servers, while making the data
appear as if it comes from a single table on a single SQL Server. In addition,
partitioned views are now able to be updated.
INSTEAD OF and AFTER Triggers:
In versions of SQL Server prior to 7.0, a table could not have more than one
trigger defined for INSERT, UPDATE, and DELETE. These triggers only fired
after the data modification took place. SQL Server 7.0 introduced the ability to
define multiple AFTER triggers for the same operation on a table. SQL Server
2008 extends this capability by providing the ability to define which AFTER
trigger fires first and which fires last.
New Data types
SQL Server 2008 introduces three new data types. Two of these can be used as
datatypes for local variables, stored procedure parameters and return values, user-
defined function parameters and return values, or table columns:
 bigint—An 8-byte integer that can store values from –2^63 (–9,223,372,036,854,775,808) through 2^63 – 1 (9,223,372,036,854,775,807).
 sql_variant—A variable-sized column that can store values of various SQL Server-supported data types, with the exception of text, ntext, timestamp, and sql_variant.
The third new datatype, the table datatype, can be used only as a local variable datatype; it cannot be used as a column datatype. A variable defined with the table datatype can be used to store a result set for later processing. A table variable can be used in queries anywhere a table can be specified.
Text in Row Data:
In previous versions of SQL Server, text and image data was always stored on a
separate page chain from where the actual data row resided. The data row
contained only a pointer to the text or image page chain, regardless of the size of
the text or image data. SQL Server 2008 provides a new text in row table option
that allows small text and image data values to be placed directly in the data row,
instead of requiring a separate data page. This can reduce the amount of space
required to store small text and image data values, as well as reduce the amount of
I/O required to retrieve rows containing small text and image data values.
Cascading RI Constraints:
In previous versions of SQL Server, referential integrity (RI) constraints were
restrictive only. If an insert, update, or delete operation violated referential
integrity, it was aborted with an error message. SQL Server 2008 provides the
ability to specify the action to take when a column referenced by a foreign key
constraint is updated or deleted.
Multiple SQL Server Instances:
Previous versions of SQL Server supported the running of only a single instance
of SQL Server at a time on a computer. Running multiple instances or multiple
versions of SQL Server required switching back and forth between the different
instances, requiring changes in the Windows registry. (The SQL Server Switch
provided with 7.0 to switch between 7.0 and 6.5 performed the registry changes for
you.)

SQL Server 2008 provides support for running multiple instances of SQL Server on the same system. This allows you to simultaneously run one instance of SQL Server 6.5 or 7.0 along with one or more instances of SQL Server 2008. Each
SQL Server instance runs independently of the others and has its own set of system
and user databases, security configuration, and so on. Applications can connect to
the different instances in the same way they connect to different SQL Servers on
different machines.
XML Support:
Extensible Markup Language has become a standard in Web-related
programming to describe the contents of a set of data and how the data should be
output or displayed on a Web page. XML, like HTML, is derived from the Standard Generalized Markup Language (SGML). When linking a Web application
to SQL Server, a translation needs to take place from the result set returned from
SQL Server to a format that can be understood and displayed by a Web application.
Previously, this translation needed to be done in a client application.
SQL Server 2008 provides native support for XML. This new feature provides the
ability to do the following:
 Return query result sets directly in XML format (a short C# sketch follows this list).
 Retrieve data from an XML document as if it were a SQL Server table.

 Access SQL Server through a URL using HTTP. Through Internet Information Services (IIS), you can define a virtual root that gives you HTTP access to the data and XML functionality of SQL Server 2008.
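A short C# sketch of the first capability above: a FOR XML query whose result is read directly as XML through SqlCommand.ExecuteXmlReader. The connection string and the Files table are assumptions for the example.

using System;
using System.Data.SqlClient;
using System.Xml;

class XmlQueryDemo
{
    static void Main()
    {
        // Hypothetical connection string and table, shown only for illustration.
        string connectionString = "Data Source=.;Initial Catalog=KacDemo;Integrated Security=True";
        string query = "SELECT FileId, ClassIndex FROM dbo.Files FOR XML AUTO, ROOT('Files')";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            connection.Open();
            using (XmlReader reader = command.ExecuteXmlReader())
            {
                var document = new XmlDocument();
                document.Load(reader);                 // the result set arrives as one XML document
                Console.WriteLine(document.OuterXml);
            }
        }
    }
}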

Log Shipping:
The Enterprise Edition of SQL Server 2008 now supports log shipping,
which you can use to copy and load transaction log backups from one database to
one or more databases on a constant basis. Additionally, log shipping provides a
way to offload read-only query processing from the primary database to the
destination database.
This capability was available in previous versions of SQL Server, but it
required the DBA to manually set up the process and schedule the jobs to copy and
restore the log backups. SQL Server 2008 officially supports log shipping and has
made it easier to set up via the Database Maintenance Plan Wizard.
DDL triggers:
Data definition language (DDL) triggers in SQL Server 2008 can be used to
implement custom database and server auditing solutions, for example for
Sarbanes-Oxley compliance. DDL triggers are defined at the server or database
level and fire when DDL statements occur. This gives you the ability to audit
when new tables, stored procedures, or logins are created.
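A minimal sketch of such an auditing trigger is shown below; the dbo.DDLAudit table is hypothetical and would need to exist with a datetime column and an xml column:

    CREATE TRIGGER trg_AuditCreateTable
    ON DATABASE
    FOR CREATE_TABLE
    AS
    BEGIN
        -- EVENTDATA() returns an XML description of the DDL statement that fired the trigger
        INSERT INTO dbo.DDLAudit (EventTime, EventDetails)
        VALUES (GETDATE(), EVENTDATA());
    END;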
Ranking functions:
SQL Server 2008 provides you with the ability to rank result sets returned from the
database engine. This allows you to customize the manner in which result sets are
returned, such as creating customized paging functions for Web site data.
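As an illustrative paging query (the Files table is hypothetical), ROW_NUMBER can number the rows and an outer query can pick out a single page:

    SELECT FileName, UploadDate
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY UploadDate DESC) AS RowNum,
                 FileName, UploadDate
          FROM Files) AS Paged
    WHERE RowNum BETWEEN 11 AND 20;   -- the second page of ten rows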
Row versioning-based isolation levels:
This new database engine feature improves database read concurrency by reducing
the amount of locks being used in your database. There are two versions of this
feature (both of which must be enabled at the database level).
Read Committed Isolation Using Row Versioning:
Is used at the individual statement level, and guarantees that the data is
consistent for the duration of the statement.
Snapshot Isolation:
Is used at the transaction level, and guarantees that the data is consistent for
the duration of the transaction. The database engine is able to guarantee the
consistency through row versions stored in the tempdb database. Read operations
accessing the same data that is being modified in a transaction will read from the
previous version of the data that is stored in tempdb.
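A minimal sketch of turning the two options on follows; CloudShare is a placeholder database name, and enabling READ_COMMITTED_SNAPSHOT requires exclusive access to the database:

    -- Statement-level consistency for READ COMMITTED readers
    ALTER DATABASE CloudShare SET READ_COMMITTED_SNAPSHOT ON;

    -- Allow transactions to request transaction-level snapshot consistency
    ALTER DATABASE CloudShare SET ALLOW_SNAPSHOT_ISOLATION ON;

    -- Inside a session, opt in to snapshot isolation for a transaction
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;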
SYSTEM DESIGN
5. SYSTEM DESIGN
GENERAL
Design Engineering deals with the various UML (Unified Modeling Language)
diagrams for the implementation of the project. Design is a meaningful engineering
representation of a thing that is to be built. Design is the place where quality is
rendered in software engineering. Design is the means to accurately translate
customer requirements into a finished product.
5.1 SYSTEM ARCHITECTURE
The proposed architecture is designed to support secure, efficient, and flexible
data sharing in cloud storage. The data owner generates keys, encrypts the content,
and uploads it to the cloud; an aggregate key is then generated for the delegated
ciphertext classes and sent to the authorized user, who downloads the encrypted
content and decrypts it using the aggregate key.
The architecture diagram shows the relationship between the different components
of the system. This diagram is very important for understanding the overall concept
of the system. An architecture diagram is a diagram of a system in which the
principal parts or functions are represented by blocks connected by lines that show
the relationships of the blocks. They are heavily used in the engineering world
in hardware design, electronic design, software design, and process flow diagrams.
SYSTEM ARCHITECTURE:
BLOCK DIAGRAM:

[Block diagram: User 1 generates keys, encrypts the content, and uploads it to the cloud. User 1 then generates an aggregate key and sends it to User 2, who downloads the encrypted content from the cloud and decrypts it using the aggregate key.]
DATA FLOW DIAGRAM:
1. The DFD is also called a bubble chart. It is a simple graphical formalism
that can be used to represent a system in terms of the input data to the system,
the various processing carried out on this data, and the output data generated
by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It
is used to model the system components. These components are the system
process, the data used by the process, an external entity that interacts with
the system and the information flows in the system.
3. DFD shows how the information moves through the system and how it is
modified by a series of transformations. It is a graphical technique that
depicts information flow and the transformations that are applied as data
moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction. It may
be partitioned into levels that represent increasing information flow and
functional detail.
[Data flow diagram: the user generates keys, encrypts the content, and uploads it to the cloud; the aggregate key is shared with the receiving user, who downloads the encrypted content and decrypts it using the aggregate key.]
UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized
general-purpose modeling language in the field of object-oriented software
engineering. The standard was created by, and is managed by, the Object
Management Group.
The goal is for UML to become a common language for creating models of
object-oriented computer software. In its current form UML comprises two major
components: a meta-model and a notation. In the future, some form of method or
process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying,
visualizing, constructing, and documenting the artifacts of a software system, as
well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven
successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and
the software development process. The UML uses mostly graphical notations to
express the design of software projects.
GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling Language so that
they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core
concepts.
3. Be independent of particular programming languages and development
processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.
USE CASE DIAGRAM:
A use case diagram in the Unified Modeling Language (UML) is a type of
behavioral diagram defined by and created from a Use-case analysis. Its purpose is
to present a graphical overview of the functionality provided by a system in terms
of actors, their goals (represented as use cases), and any dependencies between
those use cases. The main purpose of a use case diagram is to show what system
functions are performed for which actor. Roles of the actors in the system can be
depicted.

SEQUENCE DIAGRAM:

A sequence diagram in Unified Modeling Language (UML) is a kind of interaction
diagram that shows how processes operate with one another and in what order. It is
a construct of a Message Sequence Chart. Sequence diagrams are sometimes called
event diagrams, event scenarios, and timing diagrams.
ACTIVITY DIAGRAM:

Activity diagrams are graphical representations of workflows of stepwise activities
and actions with support for choice, iteration and concurrency. In the Unified
Modeling Language, activity diagrams can be used to describe the business and
operational step-by-step workflows of components in a system. An activity
diagram shows the overall flow of control.
SYSTEM TESTING

6. SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of trying to
discover every conceivable fault or weakness in a work product. It provides a way
to check the functionality of components, subassemblies, assemblies, and/or a
finished product. It is the process of exercising software with the intent of
ensuring that the software system meets its requirements and user expectations and
does not fail in an unacceptable manner. There are various types of tests; each
test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly, and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is the
testing of individual software units of the application. It is done after the
completion of an individual unit, before integration. This is structural testing
that relies on knowledge of its construction and is invasive. Unit tests perform basic
tests at component level and test a specific business process, application, and/or
system configuration. Unit tests ensure that each unique path of a business process
performs accurately to the documented specifications and contains clearly defined
inputs and expected results.
Integration testing
Integration tests are designed to test integrated software components to
determine if they actually run as one program. Testing is event driven and is more
concerned with the basic outcome of screens or fields. Integration tests
demonstrate that although the components were individually satisfactory, as shown
by successful unit testing, the combination of components is correct and
consistent. Integration testing is specifically aimed at exposing the problems that
arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system
documentation, and user manuals.
Functional testing is centered on the following items:
Valid Input : identified classes of valid input must be accepted.
Invalid Input : identified classes of invalid input must be rejected.
Functions : identified functions must be exercised.
Output : identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused on requirements, key
functions, or special test cases. In addition, systematic coverage pertaining to
identified business process flows, data fields, predefined processes, and successive
processes must be considered for testing. Before functional testing is complete,
additional tests are identified and the effective value of current tests is determined.
System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An
example of system testing is the configuration oriented system integration test.
System testing is based on process descriptions and flows, emphasizing pre-driven
process links and integration points.
White Box Testing
White Box Testing is testing in which the software tester has knowledge of the
inner workings, structure, and language of the software, or at least its purpose.
It is used to test areas that cannot be reached from a black-box level.
Black Box Testing
Black Box Testing is testing the software without any knowledge of the inner
workings, structure, or language of the module being tested. Black box tests, like
most other kinds of tests, must be written from a definitive source document, such
as a specification or requirements document. It is testing in which the software
under test is treated as a black box: you cannot “see” into it. The test provides
inputs and responds to outputs without considering how the software works.
6.1 Unit Testing:
Unit testing is usually conducted as part of a combined code and unit test
phase of the software lifecycle, although it is not uncommon for coding and unit
testing to be conducted as two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional tests will be written
in detail.
Test objectives
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.
Features to be tested
 Verify that the entries are of the correct format
 No duplicate entries should be allowed
 All links should take the user to the correct page.
6.2 Integration Testing
Software integration testing is the incremental integration testing of two or
more integrated software components on a single platform to produce failures
caused by interface defects.
The task of the integration test is to check that components or software
applications, e.g. components in a software system or – one step up – software
applications at the company level – interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
6.3 Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system meets the
functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects
encountered.
CODING
SCREENSHOTS
CONCLUSION

10. CONCLUSION AND FUTURE WORK

How to protect users’ data privacy is a central question of cloud storage. With
more mathematical tools, cryptographic schemes are getting more versatile and
often involve multiple keys for a single application. In this project, we consider
how to “compress” secret keys in public-key cryptosystems which support delegation
of secret keys for different ciphertext classes in cloud storage. No matter which
subset of classes is chosen from the power set, the delegatee can always get an
aggregate key of constant size. Our approach is more flexible than hierarchical key
assignment, which can only save space if all key-holders share a similar set of
privileges.
A limitation of our work is the predefined bound on the number of ciphertext
classes. In cloud storage, the number of ciphertexts usually grows rapidly, so we
have to reserve enough ciphertext classes for future extension; otherwise, the
public key needs to be expanded.
Although the parameter can be downloaded with ciphertexts, it would be better
if its size were independent of the maximum number of ciphertext classes. On the
other hand, when one carries the delegated keys around in a mobile device without
using special trusted hardware, the key is prone to leakage; designing a
leakage-resilient cryptosystem that still allows efficient and flexible key
delegation is also an interesting direction.
BIBLIOGRAPHY

REFERENCES
[1] S.S.M. Chow, Y.J. He, L.C.K. Hui, and S.-M. Yiu, “SPICE – Simple Privacy-
Preserving Identity-Management for Cloud Environment,” Proc. 10th Int’l Conf.
Applied Cryptography and Network Security (ACNS), vol. 7341, pp. 526-543,
2012.
[2] L. Hardesty, Secure Computers Aren’t so Secure. MIT press, http://
www.physorg.com/news176107396.html, 2009.
[3] C. Wang, S.S.M. Chow, Q. Wang, K. Ren, and W. Lou, “Privacy-Preserving
Public Auditing for Secure Cloud Storage,” IEEE Trans. Computers, vol. 62, no. 2,
pp. 362-375, Feb. 2013.
[4] B. Wang, S.S.M. Chow, M. Li, and H. Li, “Storing Shared Data on the Cloud
via Security-Mediator,” Proc. IEEE 33rd Int’l Conf. Distributed Computing
Systems (ICDCS), 2013.
[5] S.S.M. Chow, C.-K. Chu, X. Huang, J. Zhou, and R.H. Deng, “Dynamic Secure
Cloud Storage with Provenance,” Cryptography and Security, pp. 442-464,
Springer, 2012.
[6] D. Boneh, C. Gentry, B. Lynn, and H. Shacham, “Aggregate and Verifiably
Encrypted Signatures from Bilinear Maps,” Proc. 22nd Int’l Conf. Theory and
Applications of Cryptographic Techniques (EUROCRYPT ’03), pp. 416-432,
2003.
[7] M.J. Atallah, M. Blanton, N. Fazio, and K.B. Frikken, “Dynamic and Efficient
Key Management for Access Hierarchies,” ACM Trans. Information and System
Security, vol. 12, no. 3, pp. 18:1-18:43, 2009.
[8] J. Benaloh, M. Chase, E. Horvitz, and K. Lauter, “Patient Controlled
Encryption: Ensuring Privacy of Electronic Medical Records,” Proc. ACM
Workshop Cloud Computing Security (CCSW ’09), pp. 103-114, 2009.

[9] F. Guo, Y. Mu, Z. Chen, and L. Xu, “Multi-Identity Single-Key Decryption


without Random Oracles,” Proc. Information Security and Cryptology (Inscrypt
’07), vol. 4990, pp. 384-398, 2007.
[10] V. Goyal, O. Pandey, A. Sahai, and B. Waters, “Attribute-Based Encryption
for Fine-Grained Access Control of Encrypted Data,” Proc. 13th ACM Conf.
Computer and Comm. Security (CCS ’06), pp. 89-98, 2006.
[11] S.G. Akl and P.D. Taylor, “Cryptographic Solution to a Problem of Access
Control in a Hierarchy,” ACM Trans. Computer Systems, vol. 1, no. 3, pp. 239-
248, 1983.
[12] G.C. Chick and S.E. Tavares, “Flexible Access Control with Master Keys,”
Proc. Advances in Cryptology (CRYPTO ’89), vol. 435, pp. 316-322, 1989.
[13] W.-G. Tzeng, “A Time-Bound Cryptographic Key Assignment Scheme for
Access Control in a Hierarchy,” IEEE Trans. Knowledge and Data Eng., vol. 14,
no. 1, pp. 182-188, Jan./Feb. 2002.
[14] G. Ateniese, A.D. Santis, A.L. Ferrara, and B. Masucci, “Provably-Secure
Time-Bound Hierarchical Key Assignment Schemes,” J. Cryptology, vol. 25, no. 2,
pp. 243-270, 2012.
[15] R.S. Sandhu, “Cryptographic Implementation of a Tree Hierarchy for Access
Control,” Information Processing Letters, vol. 27, no. 2, pp. 95-98, 1988.
[16] Y. Sun and K.J.R. Liu, “Scalable Hierarchical Access Control in Secure Group
Communications,” Proc. IEEE INFOCOM ’04, 2004.
[17] Q. Zhang and Y. Wang, “A Centralized Key Management Scheme for
Hierarchical Access Control,” Proc. IEEE Global Telecomm. Conf. (GLOBECOM
’04), pp. 2067-2071, 2004.
[18] J. Benaloh, “Key Compression and Its Application to Digital Fingerprinting,”
technical report, Microsoft Research, 2009.

[19] B. Alomair and R. Poovendran, “Information Theoretically Secure Encryption


with Almost Free Authentication,” J. Universal Computer Science, vol. 15, no. 15,
pp. 2937-2956, 2009.
[20] D. Boneh and M.K. Franklin, “Identity-Based Encryption from the Weil
Pairing,” Proc. Advances in Cryptology (CRYPTO ’01), vol. 2139, pp. 213-229,
2001.
[21] A. Sahai and B. Waters, “Fuzzy Identity-Based Encryption,” Proc. 22nd Int’l
Conf. Theory and Applications of Cryptographic Techniques (EUROCRYPT ’05),
vol. 3494, pp. 457-473, 2005.
[22] S.S.M. Chow, Y. Dodis, Y. Rouselakis, and B. Waters, “Practical Leakage-
Resilient Identity-Based Encryption from Simple Assumptions,” Proc. ACM Conf.
Computer and Comm. Security, pp. 152-161, 2010.
[23] F. Guo, Y. Mu, and Z. Chen, “Identity-Based Encryption: How to Decrypt
Multiple Ciphertexts Using a Single Decryption Key,” Proc. Pairing-Based
Cryptography Conf. (Pairing ’07), vol. 4575, pp. 392-406, 2007.
[24] M. Chase and S.S.M. Chow, “Improving Privacy and Security in Multi-
Authority Attribute-Based Encryption,” Proc. ACM Conf. Computer and Comm.
Security, pp. 121-130, 2009.
[25] T. Okamoto and K. Takashima, “Achieving Short Ciphertexts or Short Secret-
Keys for Adaptively Secure General Inner-Product Encryption,” Proc. 10th Int’l
Conf. Cryptology and Network Security (CANS ’11), pp. 138-159, 2011.
[26] R. Canetti and S. Hohenberger, “Chosen-Ciphertext Secure Proxy Re-
Encryption,” Proc. 14th ACM Conf. Computer and Comm. Security (CCS ’07), pp.
185-194, 2007.

[27] C.-K. Chu and W.-G. Tzeng, “Identity-Based Proxy Re-encryption without
Random Oracles,” Proc. Information Security Conf. (ISC ’07), vol. 4779, pp. 189-
202, 2007.
[28] C.-K. Chu, J. Weng, S.S.M. Chow, J. Zhou, and R.H. Deng, “Conditional
Proxy Broadcast Re-Encryption,” Proc. 14th Australasian Conf. Information
Security and Privacy (ACISP ’09), vol. 5594, pp. 327-342, 2009.
[29] S.S.M. Chow, J. Weng, Y. Yang, and R.H. Deng, “Efficient Unidirectional
Proxy Re-Encryption,” Proc. Progress in Cryptology (AFRICACRYPT ’10), vol.
6055, pp. 316-332, 2010.
[30] G. Ateniese, K. Fu, M. Green, and S. Hohenberger, “Improved Proxy Re-
Encryption Schemes with Applications to Secure Distributed Storage,” ACM
Trans. Information and System Security, vol. 9, no. 1, pp. 1-30, 2006.
[31] D. Boneh, C. Gentry, and B. Waters, “Collusion Resistant Broadcast
Encryption with Short Ciphertexts and Private Keys,” Proc. Advances in
Cryptology Conf. (CRYPTO ’05), vol. 3621, pp. 258-275, 2005.
[32] L.B. Oliveira, D. Aranha, E. Morais, F. Daguano, J. Lopez, and R. Dahab,
“TinyTate: Computing the Tate Pairing in Resource-Constrained Sensor Nodes,”
Proc. IEEE Sixth Int’l Symp. Network Computing and Applications (NCA ’07),
pp. 318-323, 2007.
[33] D. Naor, M. Naor, and J. Lotspiech, “Revocation and Tracing Schemes for
Stateless Receivers,” Proc. Advances in Cryptology Conf. (CRYPTO ’01), pp. 41-
62, 2001.
[34] T.H. Yuen, S.S.M. Chow, Y. Zhang, and S.M. Yiu, “Identity-Based Encryption
Resilient to Continual Auxiliary Leakage,” Proc. Advances in Cryptology Conf.
(EUROCRYPT ’12), vol. 7237, pp. 117-134, 2012.

[35] D. Boneh, X. Boyen, and E.-J. Goh, “Hierarchical Identity Based Encryption
with Constant Size Ciphertext,” Proc. Advances in Cryptology Conf.
(EUROCRYPT ’05), vol. 3494, pp. 440-456, 2005.
[36] D. Boneh, R. Canetti, S. Halevi, and J. Katz, “Chosen-Ciphertext Security
from Identity-Based Encryption,” SIAM J. Computing, vol. 36, no. 5, pp. 1301-
1328, 2007.