PURPOSE:
Permit the client to perform the duplicate-copy check for records marked
with a particular subject (privilege).
Provide stronger security by encrypting records with distinct privilege keys.
Decrease the storage space required for the tags used in the reliability check.
Strengthen the security of deduplication and ensure data privacy.
SCOPE:
The data sharing system consists of the following entities:
1. Key generation center (KGC). A key authority that generates the public and
secret parameters for CP-ABE. It is in charge of issuing, revoking, and
updating attribute keys for users, and it grants differential access rights to
individual users based on their attributes. The KGC is assumed to be
honest-but-curious: it honestly executes the tasks assigned to it in the
system, but it would like to learn as much as possible about the encrypted
contents. It should therefore be prevented from accessing the plaintext of the
encrypted data, even though it is honest.
2. Data-storing center. An entity that provides the data sharing service. It
controls access by outside users to the stored data and provides the
corresponding content services. The data-storing center is a second key
authority: it generates personalized user keys together with the KGC, and it
issues and revokes an attribute group key for each attribute to valid users;
these group keys are used to enforce fine-grained user access control. As in
previous schemes, we assume the data-storing center is also semi-trusted (that
is, honest-but-curious), like the KGC.
3. Data owner. A client who owns data and wishes to upload it to the external
data-storing center for ease of sharing or for cost savings. A data owner is
responsible for defining an (attribute-based) access policy and enforcing it
on its own data by encrypting the data under that policy before distributing
it.
4. User. An entity who wants to access the data. If a user possesses a set of
attributes satisfying the access policy of the encrypted data, and is not
revoked in any of the valid attribute groups, then he will be able to decrypt
the ciphertext and obtain the data.
Since both key managers, the KGC and the data-storing center, are
semi-trusted, they should be deterred from accessing the plaintext of the data
to be shared, while still being able to issue secret keys to users. To
realize this somewhat contradictory requirement, the two parties engage in an
arithmetic 2PC (two-party computation) protocol with their own master secret
keys and issue independent key components to users during the key issuing
phase. The 2PC protocol prevents each party from learning the other's master
secret, so neither of them can generate the whole set of user secret keys
individually. We thus assume that the KGC does not collude with the
data-storing center, since both are honest, as in previous systems.
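The split-key idea above can be illustrated with a toy sketch. This is not the actual arithmetic 2PC protocol or CP-ABE key issuing; it only shows, in an assumed discrete-log-style group (the prime P, generator G, and share names below are illustrative choices), how two authorities holding independent master-secret shares can each issue a key component that neither could produce alone:

```python
# Toy sketch only: two authorities each hold a share of a master
# secret; neither can derive a user's full key component alone.
import secrets

P = 2**127 - 1  # a Mersenne prime used as an illustrative modulus
G = 5           # illustrative generator

kgc_share = secrets.randbelow(P)   # KGC's master-secret share
dsc_share = secrets.randbelow(P)   # data-storing center's share

# Each party independently issues a key component to the user.
kgc_component = pow(G, kgc_share, P)
dsc_component = pow(G, dsc_share, P)

# The user combines the components, obtaining G^(x1 + x2) mod P,
# yet neither authority ever learns the other's exponent share.
user_key = (kgc_component * dsc_component) % P
assert user_key == pow(G, kgc_share + dsc_share, P)
```

Because each authority only ever sees its own exponent share, compromising one of them does not reveal the combined key material, which is the property the 2PC protocol is meant to guarantee.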
OVERVIEW:
The cloud computing paradigm is the next-generation architecture for the
information technology business, presenting its users with substantial
benefits in terms of computational costs, storage costs, bandwidth, and
transmission costs [1]. Typically, cloud technology transfers all data,
databases, and software over the Internet for the purpose of achieving large
cost savings for the CSP. In cloud computing [1], services such as
Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS),
Software-as-a-Service (SaaS), and Database-as-a-Service (DaaS) are currently
offered. This thesis is concerned with the DaaS aspect of cloud computing,
among which cloud data storage services are of main interest. Dropbox, Mozy,
and SpiderOak are examples of popular cloud storages [1]. With the advent of
cloud computing and its digital storage services, the growth of digital
content has become irrepressible at both the enterprise and individual levels.
According to the EMC Digital Universe Study [2], the global data supply
reached 2.8 trillion gigabytes (GB) in 2012, but just 0.5% of it was used for
any kind of analysis. The same study also projected that data volumes will
reach about 5,247 GB per person by 2020. Due to this explosive growth of
digital data, there is a clear demand from CSPs for more cost-effective use of
their storage and of network bandwidth for data transfer. The use of the
Internet and other digital services has also given rise to a digital data
explosion, including in cloud storages. A survey [2] revealed that only 25% of
the data in data warehouses is unique. In the face of this huge big-data
problem, it would be beneficial in terms of storage savings if the replicated
data could be removed from data storages. According to a survey [3], only
25 GB of each individual user's total data is unique; the remainder consists
of similar data shared among various users. At the enterprise level [4],
businesses were reported to hold an average of three to five copies of their
files, with 15% to 25% of organizations keeping more than 10 copies. Keeping
these facts in view, CSPs have massively adopted data deduplication, a
technique that allows the CSP to save storage space by storing only a single
copy of previously duplicated data.
A major issue hindering the acceptance of cloud storage services is the data
privacy concern associated with the cloud paradigm. Indeed, although data is
outsourced in encrypted form, there is no guarantee of data privacy when an
honest-but-curious CSP manages confidential data while it resides in the
cloud. This problem is even more challenging when the CSP performs data
deduplication as a way to achieve cost savings. Data deduplication has been
adopted by many popular cloud storage vendors, such as Dropbox, SpiderOak,
Wuala, and Mozy. Recently, Dropbox [5] was reported to have had millions of
user passwords stolen, though the company shifted the blame to the users by
indicating that the hackers had stolen those passwords from other servers and
then used them to access Dropbox. Whatever the case, users cannot afford to
compromise the security of their own data, whether the attack was perpetrated
by an anonymous external attacker or was an internal fault attributable to the
CSP. In light of the above discussion, handling the issue of data growth is a
dire need of the current big data and cloud computing paradigm. Data
deduplication is an effective technique for doing so, provided that the
deduplication is designed in accordance with the security and efficiency
requirements of the system in consideration.
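The core deduplication mechanism described above can be sketched with a minimal in-memory store. This is an illustrative toy (the class name and API are assumptions, not part of any scheme in this thesis): the duplicate check is a lookup on the content hash, and a duplicate upload only increments a reference count instead of storing a second copy:

```python
# Minimal sketch of hash-based deduplication: content is keyed by its
# SHA-256 digest, so identical uploads are stored exactly once.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # digest -> content (stored once)
        self.refs = {}     # digest -> reference count

    def put(self, data: bytes) -> str:
        tag = hashlib.sha256(data).hexdigest()
        if tag not in self.blocks:       # the duplicate check
            self.blocks[tag] = data      # store only the first copy
        self.refs[tag] = self.refs.get(tag, 0) + 1
        return tag

    def get(self, tag: str) -> bytes:
        return self.blocks[tag]

store = DedupStore()
t1 = store.put(b"same report")
t2 = store.put(b"same report")   # deduplicated: no second copy stored
assert t1 == t2 and len(store.blocks) == 1
```

A secure scheme must additionally protect the stored content and the tags themselves, which is exactly the design constraint the last sentence above imposes.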
FUNCTIONAL REQUIREMENT:
Once a key request is received, the sender can either send the key or decline
the request. With this key and the request ID that was generated at the time
the key request was sent, the receiver can decrypt the message.
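A minimal sketch of this flow, assuming the key and request ID together derive a one-time keystream (the helper names and the XOR construction below are illustrative, not the system's actual cipher):

```python
# Illustrative only: the receiver needs BOTH the approved key and the
# request ID issued when the key request was made to decrypt.
import hashlib
import secrets

def keystream(key: bytes, request_id: bytes, n: int) -> bytes:
    # Derive n keystream bytes from the key and the request ID.
    out = b""
    counter = 0
    while len(out) < n:
        block = key + request_id + counter.to_bytes(4, "big")
        out += hashlib.sha256(block).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, request_id: bytes, message: bytes) -> bytes:
    ks = keystream(key, request_id, len(message))
    return bytes(a ^ b for a, b in zip(message, ks))

decrypt = encrypt  # XOR with the same keystream is its own inverse

request_id = secrets.token_bytes(16)  # generated with the key request
key = secrets.token_bytes(32)         # released only if the sender approves
ct = encrypt(key, request_id, b"shared document")
assert decrypt(key, request_id, ct) == b"shared document"
```

If the sender declines the request, the receiver holds only the request ID, which by itself derives no usable keystream.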
PERFORMANCE REQUIREMENT:
The key criteria of any hybrid cloud storage solution involve basic data
retention and off-site failover of mission critical applications. Here are a few of
the absolute must-have requirements buyers should evaluate:
Performance -- What speeds are required for efficient transfer of small and
large amounts of data? How do low-latency applications get the data they need
with the minimum amount of delay? While you won't typically get any latency
numbers from either Amazon or Microsoft, you shouldn't expect to run any
latency-critical applications in a hybrid mode. That's different if you're
doing work directly on their platform, such as between EC2 and S3 or within
Azure.
Integration -- How well does the solution integrate with existing systems to
include Microsoft Active Directory or LDAP for user access control? Does the
solution integrate with existing management tools, such as Microsoft System
Center?
Service Level Agreements -- How are SLAs established, and what levels are
possible? A key requirement that businesses rely on when using cloud services
is guarantee of availability. Microsoft's Windows Azure Backup service
advertises a 99.9 percent availability rate as does Amazon's S3 service. If a
disruption in service should happen, then part of the agreement usually includes
some form of credit given back to the customer. Amazon, for example, offers a
service credit should availability fall below that level over a one-month
period.
Application Specific -- Some applications, like "big data" or disaster recovery,
may have specific requirements or performance levels. How do hybrid cloud
storage providers address this?
Pricing -- The cost of using hybrid cloud storage is based on the service
components used and the storage itself. For example, Amazon's Simple Storage
Service (S3) gateway costs include a base monthly charge plus a cost per GB of
data stored. Amazon also charges for the bandwidth used to retrieve any data
from its storage. Microsoft, on the other hand, offers a flat fee per GB of
storage, which you can compute based on its published cost information.
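The two pricing models can be compared with a small calculator. All rates below are hypothetical placeholders chosen for illustration, not actual Amazon or Microsoft prices:

```python
# Hypothetical rates, purely for illustration; real prices vary by
# region and tier and change over time.
def s3_style_cost(gb_stored: float, gb_retrieved: float,
                  base: float = 5.00, per_gb: float = 0.03,
                  per_gb_out: float = 0.09) -> float:
    # Base monthly charge + per-GB storage + per-GB retrieval bandwidth.
    return base + gb_stored * per_gb + gb_retrieved * per_gb_out

def flat_fee_cost(gb_stored: float, per_gb: float = 0.05) -> float:
    # Flat fee per GB of storage, no separate retrieval charge.
    return gb_stored * per_gb

# With these placeholder rates, 100 GB stored and 10 GB retrieved:
assert abs(s3_style_cost(100, 10) - 8.90) < 1e-6   # 5.00 + 3.00 + 0.90
assert abs(flat_fee_cost(100) - 5.00) < 1e-6
```

Which model is cheaper depends mainly on how often data is retrieved, which is why the retrieval pattern belongs in any buyer's evaluation.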
SOFTWARE INTERFACE:
Scripts: JavaScript.
Chapter 3
3.1 Features of Android OS
Application framework enabling reuse and replacement of
components
Media support for common audio, video, and still image formats
(MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, GIF)
Developers have full access to the same framework APIs used by the core
applications. The application architecture is designed to simplify the reuse of
components; any application can publish its capabilities and any other
application may then make use of those capabilities (subject to security
constraints enforced by the framework). This same mechanism allows
components to be replaced by the user.
Underlying all applications is a set of services and systems, including:
A rich and extensible set of Views that can be used to build an
application, including lists, grids, text boxes, buttons, and even an embeddable
web browser
Content Providers that enable applications to access data from other
applications (such as Contacts), or to share their own data
A Resource Manager, providing access to non-code resources such as
localized strings, graphics, and layout files
A Notification Manager that enables all applications to display custom
alerts in the status bar
An Activity Manager that manages the life cycle of applications and
provides a common navigation backstack
4.1.2 Process Lifecycle
In most cases, every Android application runs in its own Linux process.
This process is created for the application when some of its code needs to be
run, and will remain running until it is no longer needed and the system needs to
reclaim its memory for use by other applications.
An important and unusual feature of Android is that an application
process's lifetime is not directly controlled by the application itself. Instead, it is
determined by the system through a combination of the parts of the application
that the system knows are running, how important these things are to the user,
and how much overall memory is available in the system.
It is important that application developers understand how different
application components (in particular Activity, Service, and IntentReceiver)
impact the lifetime of the application's process. Not using these components
correctly can result in the system killing the application's process while it is
doing important work.
A common example of a process life-cycle bug is an IntentReceiver that
starts a thread when it receives an Intent in its onReceiveIntent() method, and
then returns from the function. Once it returns, the system considers that
IntentReceiver to be no longer active, and thus its hosting process no longer
needed (unless other application components are active in it). Thus, it may kill
the process at any time to reclaim memory, terminating the spawned thread that
is running in it. The solution to this problem is to start a Service from the
IntentReceiver, so the system knows that there is still active work being done in
the process.
To determine which processes should be killed when low on memory,
Android places them into an "importance hierarchy" based on the components
running in them and the state of those components. These are, in order of
importance:
1. A foreground process is one holding an Activity at the top of the
screen that the user is interacting with (its onResume() method has been called)
or an IntentReceiver that is currently running (its onReceiveIntent() method is
executing). There will only ever be a few such processes in the system, and
these will only be killed as a last resort if memory is so low that not even these
processes can continue to run. Generally at this point the device has reached a
memory paging state, so this action is required in order to keep the user
interface responsive.
2. A visible process is one holding an Activity that is visible to the user
on-screen but not in the foreground (its onPause() method has been called). This
may occur, for example, if the foreground activity has been displayed with a
dialog appearance that allows the previous activity to be seen behind it. Such a
process is considered extremely important and will not be killed unless doing so
is required to keep all foreground processes running.
3. A service process is one holding a Service that has been started with
the startService() method. Though these processes are not directly visible to the
user, they are generally doing things that the user cares about (such as
background mp3 playback or background network data upload or download), so
the system will always keep such processes running unless there is not enough
memory to retain all foreground and visible process.
4. A background process is one holding an Activity that is not
currently visible to the user (its onStop() method has been called). These
processes have no direct impact on the user experience. Provided they
implement their activity life cycle correctly (see Activity for more details), the
system can kill such processes at any time to reclaim memory for one of the
three previous processes types. Usually there are many of these processes
running, so they are kept in an LRU list to ensure the process that was most
recently seen by the user is the last to be killed when running low on memory.
5. An empty process is one that doesn't hold any active application
components. The only reason to keep such a process around is as a cache to
improve startup time the next time a component of its application needs to run.
As such, the system will often kill these processes in order to balance overall
system resources between these empty cached processes and the underlying
kernel caches.
When deciding how to classify a process, the system picks the most
important level of all the components currently active in the process.
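This classification rule can be sketched in a few lines: a process's class is simply that of its most important active component. The ordering below mirrors the list above, but the function itself is an illustration, not Android's actual implementation:

```python
# Importance hierarchy from the list above, most important first.
IMPORTANCE = ["foreground", "visible", "service", "background", "empty"]

def classify(active_components: list[str]) -> str:
    """Return the process class: the most important active component."""
    if not active_components:
        return "empty"   # no active components at all
    return min(active_components, key=IMPORTANCE.index)

# A process hosting a started Service plus a stopped Activity is a
# service process, so the system keeps it alive ahead of background ones.
assert classify(["background", "service"]) == "service"
assert classify([]) == "empty"
```

This is why starting a Service from an IntentReceiver, as described earlier, protects the process: it raises the process from "empty"/"background" to "service" in the hierarchy.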
7.2 Android Development Tools
The Android SDK includes a variety of custom tools that help develop
mobile applications on the Android platform. The most important of these are
the Android Emulator and the Android Development Tools plugin for Eclipse,
but the SDK also includes a variety of other tools for debugging, packaging, and
installing applications on the emulator.
Android Emulator
A virtual mobile device that runs on a computer. Use the emulator to design,
debug, and test applications in an actual Android run-time environment.
Android Development Tools Plugin for the Eclipse IDE
The ADT plugin adds powerful extensions to the Eclipse integrated development
environment, making it easier and faster to create and debug Android
applications. If you use Eclipse, the ADT plugin gives an incredible boost in
developing Android applications:
It gives access to other Android development tools from inside the
Eclipse IDE. For example, ADT lets you access the many capabilities of the
DDMS tool (taking screenshots, managing port forwarding, setting breakpoints,
and viewing thread and process information) directly from Eclipse.
It provides a New Project Wizard, which helps you quickly create and set
up all of the basic files you will need for a new Android application.
It automates and simplifies the process of building your Android
application.
It provides an Android code editor that helps you write valid XML for your
Android manifest and resource files.
HARDWARE INTERFACE:
Hard Disk - 20 GB
Monitor - SVGA
Interfaces