
A
SYNOPSIS REPORT
ON
DROPS: Division and Replication of Data in Cloud for Optimal Performance and Security
Abstract:-

Outsourcing data to a third-party administrative control, as is done in cloud
computing, gives rise to security concerns. The data may be compromised due to
attacks by other users and nodes within the cloud. Therefore, strong security
measures are required to protect data within the cloud. However, the employed
security strategy must also take into account the optimization of the data retrieval
time. In this paper, we propose Division and Replication of Data in the Cloud for
Optimal Performance and Security (DROPS), a methodology that collectively
addresses the security and performance issues. In the DROPS methodology, we divide
a file into fragments and replicate the fragmented data over the cloud nodes. Each
of the nodes stores only a single fragment of a particular data file, which ensures
that even in the case of a successful attack, no meaningful information is revealed
to the attacker. Moreover, the nodes storing the fragments are separated by a
certain distance by means of graph T-coloring to prevent an attacker from guessing
the locations of the fragments. Furthermore, the DROPS methodology does not rely on
traditional cryptographic techniques for data security, thereby relieving the
system of computationally expensive operations. We show that the probability of
locating and compromising all of the nodes storing the fragments of a single file
is extremely low. We also compare the performance of the DROPS methodology with ten
other schemes. A higher level of security with only a slight performance overhead
was observed.
Introduction:-
The cloud computing paradigm has reformed the usage and management of the
information technology infrastructure. Cloud computing is characterized by
on-demand self-service, ubiquitous network access, resource pooling, elasticity,
and measured service. These characteristics make cloud computing an attractive
candidate for adoption by businesses, organizations, and individual users. However,
the benefits of low cost, negligible management (from a user's perspective), and
greater flexibility come with increased security concerns.
Security is one of the most crucial aspects prohibiting the widespread adoption of
cloud computing. Cloud security issues may stem from the implementation of the core
technologies (virtual machine (VM) escape, session riding, etc.), from the cloud
service offerings (structured query language injection, weak authentication
schemes, etc.), and from the cloud characteristics themselves (data recovery
vulnerability, Internet protocol vulnerability, etc.). For a cloud to be secure,
all of the participating entities must be secure. In any given system with multiple
units, the highest achievable level of the system's security is equal to the
security level of the weakest entity. Therefore, in a cloud, the security of the
assets does not solely depend on an individual's security measures. The neighboring
entities may provide an opportunity for an attacker to bypass the user's defenses.
Problem Statement:
To ensure security and data storage efficiency in the cloud, an effective
integrity-checking mechanism is designed. The integrity-checking mechanism is to be
strengthened against service attacks and threats. End users can store data in the
cloud at any time and from anywhere over the Internet.
Proposed System:-
We propose a cloud storage security scheme that collectively deals with security
and performance in terms of retrieval time. The data file is first encrypted and
fragmented, the fragments are dispersed over multiple nodes, and the nodes are
separated by means of T-coloring. The fragmentation and dispersal ensure that no
significant information is obtainable by an adversary in the case of a successful
attack, since no node in the cloud stores more than a single fragment of the same
file. The performance of the methodology was compared with full-scale replication
techniques. The results of the simulations revealed that the simultaneous focus on
security and performance resulted in an increased security level of the data,
accompanied by only a slight performance overhead. Currently, with the DROPS
methodology, a user has to download the file, update its contents, and upload it
again. It would be strategic to develop an automatic update mechanism that can
identify and update only the required fragments. This future work would save the
time and resources utilized in downloading, updating, and uploading the file again,
which would be beneficial for the user.

Objectives:-

1) We develop a scheme for outsourced data that takes into account both security
and performance. The proposed scheme fragments and replicates the data file over
cloud nodes.
2) The proposed DROPS scheme ensures that even in the case of a successful attack,
no meaningful information is revealed to the attacker.
3) We do not rely on traditional cryptographic techniques for data security. The
non-cryptographic nature of the proposed scheme makes it faster to perform the
required operations (placement and retrieval) on the data.
4) We ensure a controlled replication of the file fragments, where each of the
fragments is replicated only once, for the purpose of improved security.
Algorithms:-

Algorithm 1: Algorithm for fragment placement
Input:- Fragments
Output:- Nodes with fragment allocation
Once the file is split into fragments, the DROPS methodology selects the cloud
nodes for fragment placement. The selection is made with an equal focus on both
security and performance in terms of access time. We choose the nodes that are most
central to the cloud network to provide better access times; for this purpose, the
DROPS methodology uses the concept of centrality to reduce the access time, as in
the sketch below.
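The following minimal Java sketch shows one possible form of this placement loop.
The degree-style centrality scores, the hop-distance matrix dist, and the
forbidden-distance set forbiddenT are illustrative assumptions made for this
synopsis, not structures taken verbatim from the DROPS paper.

import java.util.*;

/* Illustrative fragment placement: most-central open node first,
   subject to a T-coloring separation constraint (assumed inputs). */
public class FragmentPlacement {

    // Places each fragment on the most central node whose hop distance
    // to every already-chosen node is NOT in the forbidden set T.
    public static List<Integer> place(int fragments, int[][] dist,
                                      int[] centrality, Set<Integer> forbiddenT) {
        int n = centrality.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        // Try the most central nodes first, for better access time.
        Arrays.sort(order, (a, b) -> Integer.compare(centrality[b], centrality[a]));

        List<Integer> chosen = new ArrayList<>();
        for (int f = 0; f < fragments; f++) {
            for (int node : order) {
                if (chosen.contains(node)) continue;   // one fragment per node
                boolean separated = true;
                for (int used : chosen) {
                    if (forbiddenT.contains(dist[node][used])) { separated = false; break; }
                }
                if (separated) { chosen.add(node); break; }
            }
        }
        return chosen;   // chosen.get(i) hosts fragment i
    }
}

Trying the most central open node first keeps the expected retrieval time low,
while the T-coloring check keeps any two selected nodes at a mutual distance
outside the forbidden set.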

Algorithm 2: Algorithm for fragment replication
Input:- Fragments
Output:- Replicas of fragments
In addition to placing the fragments on the central nodes, we also perform a
controlled replication to increase the data availability and reliability and to
improve the data retrieval time. We place each replica on the node that provides
the lowest access cost, with the objective of improving the retrieval time when the
fragments are accessed for reconstruction of the original file. While replicating a
fragment, the separation of fragments enforced by the T-coloring placement
technique is also taken care of. In the case of a large number of fragments or a
small number of nodes, it is also possible that some of the fragments are left
without being replicated because of the T-coloring. This algorithm is used to
create the replicas and to allocate them at separated distances, as sketched below.
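A minimal sketch of such a controlled replication step, assuming a per-node
access-cost array and the same hop-distance matrix and forbidden set as in the
placement sketch (all of these names are hypothetical):

import java.util.*;

/* Illustrative controlled replication: each fragment gets at most one
   replica, on the lowest-cost node that respects the T-coloring separation. */
public class FragmentReplication {

    public static Map<Integer, Integer> replicate(List<Integer> primaries,
                                                  int[][] dist, double[] accessCost,
                                                  Set<Integer> forbiddenT) {
        Map<Integer, Integer> replicaNode = new HashMap<>(); // fragment index -> replica node
        Set<Integer> used = new HashSet<>(primaries);        // nodes already holding a piece

        for (int f = 0; f < primaries.size(); f++) {
            int best = -1;
            for (int node = 0; node < accessCost.length; node++) {
                if (used.contains(node)) continue;           // one piece per node
                boolean separated = true;
                for (int u : used) {
                    if (forbiddenT.contains(dist[node][u])) { separated = false; break; }
                }
                if (separated && (best < 0 || accessCost[node] < accessCost[best])) best = node;
            }
            if (best >= 0) {      // a fragment may stay unreplicated under T-coloring
                replicaNode.put(f, best);
                used.add(best);
            }
        }
        return replicaNode;
    }
}

Because a node that already holds a piece of the file is never reused, each
fragment gains at most one replica, which matches the controlled replication
described above.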

Software Requirement:
Operating System      -  Windows XP/7
Programming Language  -  Java/J2EE
Software Version      -  JDK 1.7 or above
Tools                 -  Eclipse
Front End             -  JSP

Hardware Requirement:
Processor   -  Pentium IV / Intel Core i3
Speed       -  1.1 GHz
RAM         -  512 MB (minimum)
Hard Disk   -  20 GB
Keyboard    -  Standard keyboard
Mouse       -  Two- or three-button mouse
Monitor     -  LED monitor


System Architecture:-

Figure: System Architecture

Modules:-

1) Cloud Client:-
A cloud client is either a data owner or a data user.
Data Owner:-
The data owner is responsible for uploading files to the cloud, as well as for
viewing files uploaded by himself or by others. The data owner has information
about the placed fragments and their replicas, along with their node numbers in the
cloud.
Data User:-
The data user is responsible for downloading files or viewing files uploaded by
others. To download a file from the cloud, he has to be an authenticated user;
otherwise he is considered an attacker.
2) Cloud Server:-
Fragmentation:-
This approach is used for fragmenting the file at the server side for security
purposes. It runs the fragmentation algorithm, which takes a file as input and
produces the file fragments as output, as sketched below.
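As a rough illustration, the following Java sketch splits a file into fixed-size
byte chunks; the DROPS paper does not prescribe this particular chunking scheme,
and the file name and fragment size used in main are hypothetical.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/* Illustrative server-side fragmentation: the file is split into
   fixed-size byte chunks, so no single chunk contains the whole file. */
public class Fragmentation {

    public static List<byte[]> fragment(Path file, int fragmentSize) throws IOException {
        byte[] data = Files.readAllBytes(file);                // whole file into memory
        List<byte[]> fragments = new ArrayList<>();
        for (int off = 0; off < data.length; off += fragmentSize) {
            int end = Math.min(off + fragmentSize, data.length);
            fragments.add(Arrays.copyOfRange(data, off, end)); // one chunk
        }
        return fragments;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical usage: split "report.pdf" into 64 KB fragments.
        List<byte[]> parts = fragment(Paths.get("report.pdf"), 64 * 1024);
        System.out.println("Produced " + parts.size() + " fragments");
    }
}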
Replication:-
This approach creates replicas (duplicate copies) of the fragments. The replicas
are useful when a fragment is corrupted by an attacker: to still provide the file
to the user, the server substitutes the fragment's replica in its place, combines
all of the fragments, and sends the file to the authenticated user or data owner.
To make the replicas of the file fragments, this approach runs the replication
algorithm, which takes the fragments as input and produces their replicas as
output. A reconstruction sketch follows below.
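As an illustration of the replica fallback described above, the following sketch
substitutes a fragment's replica for a corrupted primary copy and recombines the
pieces in order; the corrupted flags stand in for whatever integrity check the
server actually performs (an assumption of this sketch).

import java.io.ByteArrayOutputStream;
import java.util.List;
import java.util.Map;

/* Illustrative reconstruction: when a primary fragment is found corrupted,
   its single replica is substituted before the fragments are recombined. */
public class Reconstruction {

    public static byte[] reassemble(List<byte[]> primary, Map<Integer, byte[]> replica,
                                    boolean[] corrupted) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (int i = 0; i < primary.size(); i++) {
            // Fall back to the replica if the primary copy was tampered with.
            byte[] piece = corrupted[i] ? replica.get(i) : primary.get(i);
            out.write(piece, 0, piece.length);
        }
        return out.toByteArray();   // the original file, in fragment order
    }
}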
Allocation:-
After the file is split and the replicas are generated, the fragments have to be
allocated on the cloud server for storing the data. While storing or allocating the
fragments, security issues have to be considered, so the T-coloring graph concept
is used to place the fragments at different nodes on the cloud server, in the same
manner as the placement sketch under Algorithm 1. This approach runs the fragment
allocation algorithm, which takes the fragments as input and produces as output the
fragments allocated with their node numbers.
