Improved Reliability
ABSTRACT
• Data deduplication is a technique for eliminating duplicate
copies of data.
• Used in cloud storage to reduce storage space and upload
bandwidth.
• There is only one copy of each file stored in the cloud.
• We propose new distributed deduplication systems with higher reliability, in which the data chunks are distributed across multiple cloud servers.
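The idea above can be sketched in a few lines. This is a minimal illustration, not the scheme from the slides: it assumes SHA-256 fingerprints, a tiny fixed chunk size, and server selection by fingerprint modulo the server count, all of which are illustrative choices.

```python
import hashlib

CHUNK_SIZE = 4   # tiny for illustration; real systems use KB- to MB-sized chunks
NUM_SERVERS = 3

# servers[i] maps chunk fingerprint -> chunk bytes
servers = [{} for _ in range(NUM_SERVERS)]

def store(data: bytes) -> list:
    """Deduplicate: store each unique chunk once, distributed across servers."""
    recipe = []  # list of fingerprints needed to rebuild the file
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        server = servers[int(fp, 16) % NUM_SERVERS]  # pick a server by fingerprint
        server.setdefault(fp, chunk)                 # no-op if chunk already stored
        recipe.append(fp)
    return recipe

def restore(recipe: list) -> bytes:
    """Rebuild a file from its chunk fingerprints."""
    return b"".join(servers[int(fp, 16) % NUM_SERVERS][fp] for fp in recipe)

r1 = store(b"hello world!")
r2 = store(b"hello world!")  # duplicate upload: no new chunks are stored
print(restore(r1) == b"hello world!")        # True
print(sum(len(s) for s in servers))          # 3 chunks kept, not 6
```

Uploading the same file twice stores nothing new; only the short recipe of fingerprints is kept per copy, which is where the storage and bandwidth savings come from.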
EXISTING SYSTEM
• Users keep multiple data copies with the same content, which leads to data redundancy.
• In existing work, a file is eliminated based on its file name; the content of the file is not verified.
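The weakness of name-based elimination can be shown with a small hypothetical example (file names and contents are invented; SHA-256 is assumed as the content fingerprint):

```python
import hashlib

files = {
    "report.doc": b"quarterly sales figures",
    "backup/report.doc": b"meeting notes",             # same name, different content
    "copy_of_report.doc": b"quarterly sales figures",  # same content, different name
}

# Name-based deduplication: the first file with a given bare name wins.
kept_by_name = {}
for path, data in files.items():
    kept_by_name.setdefault(path.split("/")[-1], data)

# "meeting notes" is silently discarded -- it merely shared a *name*:
print(b"meeting notes" in kept_by_name.values())      # False

# Content-based deduplication: fingerprint the bytes themselves.
kept_by_content = {hashlib.sha256(d).hexdigest(): d for d in files.values()}

# Both distinct contents survive; only the true duplicate is eliminated.
print(b"meeting notes" in kept_by_content.values())   # True
print(len(kept_by_content))                           # 2
```

Name-based elimination both loses data (different files under one name) and misses real duplicates (same bytes under different names); fingerprinting the content avoids both failures.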
DISADVANTAGES OF EXISTING SYSTEM
• In this module, the owner uploads the file (along with its metadata) into the database.
• Using this metadata and its contents, the end user has to download the file.