
SYSTEM REQUIREMENT SPECIFICATION

OBJECTIVE:

Cloud computing provides seemingly unlimited virtualized resources to users
as services across the Internet, while hiding platform and implementation
details. Today's cloud service providers offer both highly available storage and
massively parallel computing resources at relatively low cost. As cloud
computing becomes prevalent, an increasing amount of data is being stored in
the cloud and shared by users with specified privileges, which define the access
rights to the stored data. One critical challenge of cloud storage services is the
management of the ever-increasing volume of data. To make data management
scalable in cloud computing, deduplication is a well-known technique that has
attracted more and more attention recently. Data deduplication is a
specialized data compression technique for eliminating duplicate copies of
repeating data in storage. The technique is used to improve storage utilization
and can also be applied to network data transfers to reduce the number of bytes
that must be sent. Instead of keeping multiple copies of data with the same content,
deduplication eliminates redundant data by keeping only one physical copy and
referring other redundant data to that copy. Deduplication can take place at either
the file level or the block level. File-level deduplication eliminates duplicate
copies of the same file, while block-level deduplication eliminates duplicate
blocks of data that occur in non-identical files.
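Block-level deduplication as described above can be illustrated with a small sketch. The tiny block size, the in-memory map, and the hex-digest indexing below are illustrative assumptions rather than any particular product's design; real systems use much larger blocks (typically 4 KB or more) and persistent indexes.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy block-level deduplication: data is split into fixed-size blocks,
// each block is indexed by its SHA-256 digest, and a block that is
// already in the store is kept only once.
public class BlockDedupStore {
    static final int BLOCK_SIZE = 4; // illustrative; real systems use 4 KB or more
    private final Map<String, byte[]> blocks = new HashMap<>(); // digest -> unique block

    // Stores the data and returns the ordered digest list (the "recipe")
    // from which the original byte stream can be reassembled.
    public List<String> put(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            List<String> recipe = new ArrayList<>();
            for (int off = 0; off < data.length; off += BLOCK_SIZE) {
                int len = Math.min(BLOCK_SIZE, data.length - off);
                byte[] block = new byte[len];
                System.arraycopy(data, off, block, 0, len);
                String digest = toHex(md.digest(block));
                blocks.putIfAbsent(digest, block); // duplicate blocks are stored once
                recipe.add(digest);
            }
            return recipe;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public int uniqueBlockCount() { return blocks.size(); }

    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) {
        BlockDedupStore store = new BlockDedupStore();
        // Two "files" that share one block: only three unique blocks are kept.
        store.put("AAAABBBB".getBytes(StandardCharsets.UTF_8));
        store.put("AAAACCCC".getBytes(StandardCharsets.UTF_8));
        System.out.println(store.uniqueBlockCount()); // prints 3
    }
}
```

File-level deduplication is the degenerate case of the same idea with one block per file.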

PURPOSE:

The client is permitted to perform a duplicate-copy check for records associated
with a particular privilege. Stronger security is supported by encrypting each
record with distinct privilege keys, and the storage space of the tags used for
the reliability check is reduced. The overall goal is to strengthen the security of
deduplication and ensure data privacy.
SCOPE:

The scope survey is the most important step in the software development process.
Before developing the tool it is necessary to determine the time factor, economy
and company strength. Once these constraints are satisfied, the next step is to
determine which operating system and language can be used for developing the
tool. Once the programmers start building the tool, they need a lot of
external support. This support can be obtained from senior programmers, from
books or from websites. The above considerations were taken into account
before building the proposed system.

System Description and Key Management:

The data sharing system consists of the following entities:

1. Key generation center (KGC). A key authority that generates the public and
secret parameters for CP-ABE. It is in charge of issuing, revoking, and updating
attribute keys for users, and it grants differential access rights to individual
users based on their attributes. It is assumed to be honest-but-curious: it will
honestly execute the tasks assigned to it in the system, but it would like to
learn as much as possible about the encrypted contents. Thus, it should be
prevented from accessing the plaintext of the encrypted data even though it is honest.

2. Data-storing center. An entity that provides the data sharing service. It is in
charge of controlling access by outside users to the stored data and providing
the corresponding content services. The data-storing center is another key
authority that generates personalized user keys together with the KGC, and
issues and revokes the attribute group keys of valid users for each attribute;
these group keys are used to enforce fine-grained user access control. As in
previous schemes, we assume the data-storing center is also semi-trusted
(that is, honest-but-curious), like the KGC.

3. Data owner. A client who owns data and wishes to upload it to the external
data-storing center for ease of sharing or for cost savings. A data owner is
responsible for defining an (attribute-based) access policy and enforcing it on
its own data by encrypting the data under that policy before distributing it.

4. User. An entity who wants to access the data. If a user possesses a set of
attributes satisfying the access policy of the encrypted data, and is not revoked
in any of the valid attribute groups, then he will be able to decrypt the
ciphertext and obtain the data.

Since both key managers, the KGC and the data-storing center, are semi-trusted,
they should be deterred from accessing the plaintext of the data to be shared,
while still being able to issue secret keys to users. To realize this somewhat
contradictory requirement, the two parties engage in an arithmetic 2PC
(two-party computation) protocol with their own master secret keys, and issue
independent key components to users during the key issuing phase. The 2PC
protocol deters them from learning each other's master secrets, so that neither
of them can generate the whole set of user secret keys individually. Thus, we
assume that the KGC does not collude with the data-storing center, since they
are honest, as in previous systems.
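The idea that neither authority alone can produce a complete user key can be sketched with a toy example. This is not the paper's arithmetic 2PC protocol or a CP-ABE key; it is only a minimal illustration, under assumed toy group parameters, of combining independently issued key components:

```java
import java.math.BigInteger;
import java.security.SecureRandom;

// Toy sketch (NOT the paper's arithmetic 2PC protocol): the KGC and the
// data-storing center hold independent master-secret shares s1 and s2
// and issue separate key components g^s1 and g^s2; only their product
// g^(s1+s2) forms the full user key, so neither authority alone can
// reconstruct it.
public class SplitKeyIssuing {
    final BigInteger p; // toy group modulus (not a secure parameter choice)
    final BigInteger g; // toy generator
    final BigInteger q; // exponents are reduced mod p-1

    SplitKeyIssuing(BigInteger p, BigInteger g) {
        this.p = p;
        this.g = g;
        this.q = p.subtract(BigInteger.ONE);
    }

    // One authority's key component for its own master-secret share.
    BigInteger component(BigInteger share) { return g.modPow(share, p); }

    // The user combines the two independently issued components.
    BigInteger combine(BigInteger c1, BigInteger c2) { return c1.multiply(c2).mod(p); }

    // Runs the whole flow once and checks g^s1 * g^s2 == g^(s1+s2).
    static boolean demo() {
        SecureRandom rnd = new SecureRandom();
        SplitKeyIssuing sys = new SplitKeyIssuing(BigInteger.probablePrime(256, rnd),
                                                  BigInteger.valueOf(2));
        BigInteger s1 = new BigInteger(255, rnd).mod(sys.q); // KGC's share
        BigInteger s2 = new BigInteger(255, rnd).mod(sys.q); // center's share
        BigInteger userKey = sys.combine(sys.component(s1), sys.component(s2));
        BigInteger fullKey = sys.g.modPow(s1.add(s2).mod(sys.q), sys.p);
        return userKey.equals(fullKey);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints true
    }
}
```

In the real scheme the components are CP-ABE key parts bound to attributes, but the separation-of-secrets principle is the same.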

Threat Model and Security Requirements:

1. Data confidentiality. Unauthorized users who do not have enough attributes
to satisfy the access policy should be prevented from accessing the plaintext of
the data. Additionally, the KGC is no longer fully trusted in the data sharing
system; thus, unauthorized access to the plaintext of the encrypted data by the
KGC, as well as by the data-storing center, should be prevented.
2. Collusion resistance. Collusion resistance is one of the most important
security properties required in ABE systems. If multiple users collude, they may
be able to decrypt a ciphertext by combining their attributes, even if none of the
colluding users can decrypt the ciphertext alone. We do not want such colluders
to be able to decrypt the private data on the server by combining their
attributes. Since we assume the KGC and the data-storing center are honest, we
do not consider active attacks in which they collude with revoked users.
3. Backward and forward secrecy. In the context of attribute-based encryption,
backward secrecy means that any user who comes to hold an attribute (that
satisfies the access policy) should be prevented from accessing the plaintext of
data distributed before he held the attribute. Forward secrecy means that any
user who drops an attribute should be prevented from accessing the plaintext of
data distributed after he drops the attribute, unless the other valid attributes
that he holds satisfy the access policy.

PROPOSED CP-ABE SCHEME:

Since the first CP-ABE scheme proposed by Bethencourt et al., dozens of
subsequent CP-ABE schemes have been suggested, mostly motivated by more
rigorous security proofs in the standard model. However, most of these schemes
fail to achieve the expressiveness of Bethencourt et al.'s scheme, which
described an efficient system that was expressive in that it allowed an
encryptor to express an access predicate in terms of any monotonic formula over
attributes. Therefore, in this section, instead of building a new CP-ABE scheme
from scratch, we develop a variation of the CP-ABE algorithm partially based on
(but not limited to) Bethencourt et al.'s construction, in order to enhance the
expressiveness of the access control policy. Its key generation procedure is
modified for our purpose of removing key escrow. The proposed scheme is then
built on this new CP-ABE variation by further integrating it into a proxy
re-encryption protocol for user revocation. To handle fine-grained user
revocation, the data-storing center must obtain the user access (or revocation)
list for each attribute group, since otherwise revocation cannot take effect.
This setting, in which the data-storing center knows the revocation list, does
not violate the security requirements, because the center is only allowed to
re-encrypt the ciphertexts and can by no means obtain any information about the
attribute keys of users. Since the proposed scheme is built on [5], we
recapitulate some definitions from [5], such as the access tree and the encrypt
and decrypt algorithm definitions, to describe our construction in this section.
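The re-encryption step used for revocation can be illustrated with a drastically simplified, symmetric-key sketch. This is not the CP-ABE or proxy re-encryption construction itself; it is only an assumed AES-GCM analogue showing why re-encrypting a ciphertext under a fresh attribute-group key locks out a revoked user who still holds the old key:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Simplified revocation sketch: data sits under an attribute-group key;
// on revocation the store re-encrypts it under a fresh group key, so a
// revoked user's old key no longer opens the ciphertext.
public class RevokeByReencrypt {
    static SecretKey newGroupKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            return kg.generateKey();
        } catch (Exception e) { throw new IllegalStateException(e); }
    }

    static byte[] crypt(int mode, SecretKey key, byte[] iv, byte[] in) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(mode, key, new GCMParameterSpec(128, iv));
            return c.doFinal(in);
        } catch (Exception e) { throw new IllegalStateException("operation failed", e); }
    }

    static byte[] seal(SecretKey k, byte[] iv, byte[] p) { return crypt(Cipher.ENCRYPT_MODE, k, iv, p); }
    static byte[] open(SecretKey k, byte[] iv, byte[] c) { return crypt(Cipher.DECRYPT_MODE, k, iv, c); }

    public static void main(String[] args) {
        SecretKey oldKey = newGroupKey();   // group key before revocation
        SecretKey freshKey = newGroupKey(); // fresh key after revocation
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        byte[] ct = seal(oldKey, iv, "shared record".getBytes());
        byte[] ct2 = seal(freshKey, iv, open(oldKey, iv, ct)); // the re-encryption step

        System.out.println(new String(open(freshKey, iv, ct2))); // valid users still read
        try {
            open(oldKey, iv, ct2); // revoked user's stale key
        } catch (IllegalStateException e) {
            System.out.println("stale group key rejected");
        }
    }
}
```

In the actual scheme the data-storing center performs this transformation without ever seeing the plaintext; the symmetric decrypt-then-encrypt above is only a stand-in for that proxy step.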

OVERVIEW:

The cloud computing paradigm is the next-generation architecture for the
business of information technology, and it presents its users with huge
benefits in terms of computational costs, storage costs, and bandwidth and
transmission costs [1]. Typically, cloud technology transfers all the data,
databases and software over the Internet for the purpose of achieving huge cost
savings for the CSP. In cloud computing [1], services such as Infrastructure-as-a-
Service (IaaS), Platform-as-a-Service (PaaS), Software-as-a-Service (SaaS), and
Database-as-a-Service (DaaS) are currently offered. This thesis is concerned
with the DaaS property of cloud computing, among which cloud data storage
services are of main interest. Dropbox, Mozy, and SpiderOak are examples of
popular cloud storage services [1]. With the advent of cloud computing and its
digital storage services, the growth of digital content has become irrepressible
at both the enterprise and individual levels. According to the EMC Digital
Universe Study [2], the global data supply reached 2.8 trillion gigabytes (GB)
in 2012, but just 0.5% of it was used for various kinds of analysis. The same
study also revealed that data volumes are projected to reach about 5,247 GB per
person by 2020. Due to this explosive growth of digital data, there is a clear
demand from CSPs for more cost-effective use of their storage and of the network
bandwidth used for data transfer. The use of the Internet and other digital
services has also given rise to a digital data explosion, including in cloud
storage. A survey [2] revealed that only 25% of the data in data warehouses is
unique. In the presence of this huge big-data problem, it would be beneficial in
terms of storage savings if the replicated data could be removed from data
storage. According to a survey [3], only 25 GB of the total data of each
individual user is unique, and the remainder is similar data shared among
various users. On the enterprise level [4], it was reported that businesses hold
an average of three to five copies of their files, with 15% to 25% of these
organizations having more than 10 copies of the files. Keeping these facts in
view, CSPs have massively adopted data deduplication, a technique that allows
the CSP to save storage space by storing only a single copy of previously
duplicated data.

A major issue hindering the acceptance of cloud storage services by users is the
data privacy issue associated with the cloud paradigm. Indeed, although data is
outsourced in its encrypted form, there is no guarantee of data privacy when an
honest-but-curious CSP handles the management of confidential data while these
data reside in the cloud. This problem is even more challenging when data
deduplication is performed by the CSP as a way to achieve cost savings. Data
deduplication is being adopted by many popular cloud storage vendors like
Dropbox, SpiderOak, Wuala and Mozy. Recently, Dropbox [5] was reported to have
had millions of user passwords stolen, but it shifted the blame to the users by
indicating that the hackers stole those passwords from some other servers and
then used them to access Dropbox. Whatever the case, users cannot afford to
compromise the security of their own data, whether the attack was perpetrated by
an anonymous external attacker or was an internal fault attributable to the CSP.
In the light of the above discussion, there is a dire need in the current big
data and cloud computing paradigm to handle the issue of data growth. Data
deduplication is an effective technique exercised to handle this issue. The
primary condition is that data deduplication should be designed in accordance
with the security and efficiency requirements of the system in consideration.
FUNCTIONAL REQUIREMENT:

Implementation is the stage of the project when the theoretical design is
turned into a working system. It can thus be considered the most critical stage
in achieving a successful new system and in giving the user confidence that the
new system will work and be effective.

The implementation stage involves careful planning, investigation of the
existing system and its constraints on implementation, design of methods to
achieve the changeover, and evaluation of those changeover methods.
Main Modules:-
1. User Module:
In this module, users are authenticated before they can access the
details presented in the system. Before accessing or searching the details,
a user must have an account; otherwise, they must register first.
2. Secure DeDuplication System:

To support authorized deduplication, the tag of a file F is determined by
the file F and the privilege. To mark the difference from the traditional notion
of a tag, we call it a file token instead. To support authorized access, a
secret key kp is bound to a privilege p to generate a file token. Let
ϕF,p = TagGen(F, kp) denote the token of F that only users with privilege p are
allowed to access. In other words, the token ϕF,p can only be computed by users
with privilege p. As a result, if a file has been uploaded by a user with a
duplicate token ϕF,p, then a duplicate check sent from another user will be
successful if and only if he also has the file F and privilege p. Such a token
generation function can easily be implemented as H(F, kp), where H(·) denotes a
cryptographic hash function.
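The token generation function H(F, kp) described above can be sketched with a keyed hash. Realizing it as HMAC-SHA256 is an assumption on our part; the text only requires some cryptographic hash keyed with the privilege key kp:

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Sketch of TagGen(F, kp): the file token is a keyed hash of the file
// content under the privilege key, so only holders of kp can compute it.
public class FileToken {
    static String tagGen(byte[] fileContent, byte[] privilegeKey) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256"); // assumed instantiation of H(F, kp)
            mac.init(new SecretKeySpec(privilegeKey, "HmacSHA256"));
            byte[] token = mac.doFinal(fileContent);
            StringBuilder sb = new StringBuilder();
            for (byte b : token) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] file = "file contents".getBytes(StandardCharsets.UTF_8);
        byte[] kp = "privilege-key-p".getBytes(StandardCharsets.UTF_8);
        byte[] kq = "privilege-key-q".getBytes(StandardCharsets.UTF_8);
        // Same file and same privilege key: tokens match, so a duplicate
        // check succeeds only for users holding the same privilege.
        System.out.println(tagGen(file, kp).equals(tagGen(file, kp))); // prints true
        System.out.println(tagGen(file, kp).equals(tagGen(file, kq))); // prints false
    }
}
```

The duplicate check then reduces to an equality test on tokens, which never reveals the file content itself.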
3. Security Of Duplicate Check Token :

We consider several types of privacy that we need to protect, namely:
(i) unforgeability of the duplicate-check token. There are two types of
adversaries: the external adversary and the internal adversary. As shown below,
the external adversary can be viewed as an internal adversary without any
privilege. If a user has privilege p, it is required that the adversary cannot
forge and output a valid duplicate-check token with any other privilege p′ on
any file F, where p′ does not match p. Furthermore, it is also required that, if
the adversary does not request a token with its own privilege from the private
cloud server, it cannot forge and output a valid duplicate-check token with p on
any F that has been queried.
4. Send Key:

Once the key request is received, the sender can send the key or decline the
request. With this key and the request ID that was generated at the time of the
key request, the receiver can decrypt the message.

PERFORMANCE REQUIREMENT:
The key criteria of any hybrid cloud storage solution involve basic data
retention and off-site failover of mission critical applications. Here are a few of
the absolute must-have requirements buyers should evaluate:

Management -- What tools or applications does the solution provide in order to


control access to the data? How would an administrator initiate a restore
operation or find a specific file? The storage service may use a management
interface, such as the Windows Azure Management portal where you can
manage blobs, tables and queues.
Security -- How are compliance mandates such as HIPAA or Sarbanes-Oxley
facilitated? Detailed logging of access to data is required in most cases as is a
secure erase capability for the deletion of files. For some types of data, such as
email archives, the ability to delete backups must be disabled.

Redundancy -- Does the solution provider offer geographic redundancy? What


about single points of failure such as network connectivity? What local network
requirements would be needed to mitigate outage risks including any new
infrastructure hardware costs?

Performance -- What speeds are required for efficient transfer of small and
large amounts of data? How do low-latency applications get the data they need
with the minimum amount of delay? While you won't typically get any latency
numbers from either Amazon or Microsoft, you shouldn't expect to run any
latency-critical applications in a hybrid mode. That's different if you're doing
work directly on their platform, such as between EC2 and S3 or within Azure.

Integration -- How well does the solution integrate with existing systems to
include Microsoft Active Directory or LDAP for user access control? Does the
solution integrate with existing management tools, such as Microsoft System
Center?

Service Level Agreements -- How are SLAs established, and what levels are
possible? A key requirement that businesses rely on when using cloud services
is guarantee of availability. Microsoft's Windows Azure Backup service
advertises a 99.9 percent availability rate as does Amazon's S3 service. If a
disruption in service should happen, then part of the agreement usually includes
some form of credit given back to the customer. Amazon offers a service credit
should availability fall below that availability level over a one-month period.
Application Specific -- Some applications, like "big data" or disaster recovery,
may have specific requirements or performance levels. How do hybrid cloud
storage providers address this?

Pricing -- The cost of using hybrid cloud storage is based on the service
components used and the storage itself. For example, Amazon's Simple Storage
Service (S3) gateway costs include a base monthly charge plus a cost per GB of
data stored. Amazon also charges for the bandwidth used to retrieve any data
from its storage. Microsoft, on the other hand, offers a flat fee per GB of
storage, which you can compute based on its published cost information.

SOFTWARE INTERFACE:

S/W System Configuration:-

Operating System : Windows 95/98/2000/XP

Application Server : Tomcat 5.0/6.x

IDE : Android IDE

Developer Tool : Emulator

Front End : HTML, Java, JSP

Scripts : JavaScript

Server-side Script : Java Server Pages

Database : MySQL 5.0

Database Connectivity : JDBC.


THE BIRTH OF ANDROID
Google Acquires Android Inc.
In July 2005, Google acquired Android Inc., a small startup company
based in Palo Alto, CA. Android's co-founders who went to work at Google
included Andy Rubin (co-founder of Danger), Rich Miner (co-founder of
Wildfire Communications, Inc), Nick Sears (once VP at T-Mobile), and Chris
White (one of the first engineers at WebTV). At the time, little was known about
the functions of Android Inc. other than that it made software for mobile phones.

2.1.2 Open Handset Alliance Founded


On 5 November 2007, the Open Handset Alliance, a consortium of several
companies including Google, HTC, Intel, Motorola, Qualcomm, T-Mobile, Sprint
Nextel and NVIDIA, was unveiled with the goal of developing open standards for
mobile devices. Along with its formation, the Open Handset Alliance also
unveiled its first product, Android, an open source mobile device platform
based on the Linux operating system.
2.1.3 Hardware
Google unveiled at least three prototypes for Android at the Mobile World
Congress on February 12, 2008. One prototype at the ARM booth displayed several
basic Google applications, and a 'd-pad' controlled zooming of items in the
dock with a relatively quick response.

Chapter 3
3.1 Features of Android OS
Application framework enabling reuse and replacement of components

Dalvik virtual machine optimized for mobile devices

Integrated browser based on the open source WebKit engine

Optimized graphics powered by a custom 2D graphics library; 3D graphics based
on the OpenGL ES 1.0 specification (hardware acceleration optional)

SQLite for structured data storage

Media support for common audio, video, and still image formats
(MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, GIF)

GSM telephony (hardware dependent)

Bluetooth, EDGE, 3G, and WiFi (hardware dependent)

Camera, GPS, compass, and accelerometer (hardware dependent)

Rich development environment including a device emulator, tools for debugging,
memory and performance profiling, and a plugin for the Eclipse IDE
Chapter 4

4.1 Android Architecture

The following diagram shows the major components of Android

Figure 1: Architecture of Android OS


4.1.1 Application Framework

Developers have full access to the same framework APIs used by the core
applications. The application architecture is designed to simplify the reuse of
components; any application can publish its capabilities and any other
application may then make use of those capabilities (subject to security
constraints enforced by the framework). This same mechanism allows
components to be replaced by the user.
Underlying all applications is a set of services and systems, including:
A rich and extensible set of Views that can be used to build an
application, including lists, grids, text boxes, buttons, and even an embeddable
web browser
Content Providers that enable applications to access data from other
applications (such as Contacts), or to share their own data
A Resource Manager, providing access to non-code resources such as
localized strings, graphics, and layout files
A Notification Manager that enables all applications to display custom
alerts in the status bar
An Activity Manager that manages the life cycle of applications and
provides a common navigation back stack
4.1.2 Libraries

Android includes a set of C/C++ libraries used by various components of the


Android system. These capabilities are exposed to developers through the
Android application framework. Some of the core libraries are listed below:
System C library - a BSD-derived implementation of the standard C
system library (libc), tuned for embedded Linux-based devices
Media Libraries - based on PacketVideo's OpenCORE; the libraries
support playback and recording of many popular audio and video formats, as
well as static image files, including MPEG4, H.264, MP3, AAC, AMR, JPG,
and PNG
Surface Manager - manages access to the display subsystem and
seamlessly composites 2D and 3D graphic layers from multiple applications
LibWebCore - a modern web browser engine which powers both the
Android browser and an embeddable web view
SGL - the underlying 2D graphics engine
3D libraries - an implementation based on OpenGL ES 1.0 APIs; the
libraries use either hardware 3D acceleration (where available) or the included,
highly optimized 3D software rasterizer
FreeType - bitmap and vector font rendering
SQLite - a powerful and lightweight relational database engine available to all
applications.
4.1.3 Android Runtime

Android includes a set of core libraries that provide most of the functionality
available in the core libraries of the Java programming language. Every Android
application runs in its own process, with its own instance of the Dalvik virtual
machine. Dalvik has been written so that a device can run multiple VMs
efficiently. The Dalvik VM executes files in the Dalvik Executable (.dex)
format, which is optimized for a minimal memory footprint. The VM is
register-based, and runs classes compiled by a Java language compiler that have
been transformed into the .dex format by the included "dx" tool. The Dalvik VM
relies on the Linux kernel for underlying functionality such as threading and
low-level memory management.

At the same level is the Android runtime, whose main component is the Dalvik
virtual machine. It was designed specifically for Android running in a limited
environment, where limited battery, CPU, memory and data storage are the main
issues. Android provides an integrated tool, dx, which converts byte code
generated in a .jar into a .dex file; in this form the byte code is much more
efficient to run on small processors.

Figure 2: Conversion from .java to .dex file


As a result, it is possible to have multiple instances of the Dalvik virtual
machine running on a single device at the same time. The core libraries are
written in the Java language and consist of the collection classes, utilities,
IO and other tools.
4.1.4 Linux Kernel

The Android architecture is based on the Linux 2.6 kernel, which helps to
manage security, memory management, process management, the network stack and
other important services. The device vendor therefore brings up Linux on the
mobile device as the main operating system and installs all the drivers
required to run it. Android provides support for the Qualcomm MSM7K chipset
family. For instance, the current kernel tree supports the Qualcomm MSM 7200A
chipset, but in the second half of 2008 we should see mobile devices with a
stable version of the Qualcomm MSM 7200, which includes these major features:

1. WCDMA/HSUPA and EGPRS network support

2. Bluetooth 1.2 and Wi-Fi support

3. Digital audio support for mp3 and other formats

4. Support for Linux and other third-party operating systems

5. Java hardware acceleration and support for Java applications

6. Qcamera up to 6.0 megapixels

7. gpsOne solution for GPS


Chapter 5
5.1 Architecture for Secure Data Storage

Figure 3 Android and Secure Local Data Storage

Figure 3 shows a secure data storage solution that could potentially be
deployed on Android. Many shortcomings of the earlier design have been
addressed, and additional security highlights are presented at the end of the
section.
Using figure 3, we have the following workflow:
1. The user enters his credentials on the handset.
2. The credentials are not sent to the SSO service over the network. Instead,
the credentials are used as the passphrase to decrypt the local
public/private key pair of the user. We define the public/private key pair to
be of type RSA and at least 4096 bits in size. Already we gain the
advantage that the user's password is not sent over the network.
3. The private key is used to decrypt the symmetric cipher key. The
symmetric cipher key is used to encrypt/decrypt any locally cached data.
A strong symmetric cipher like 3DES is used.
4. All data found in the local cache is encrypted with the symmetric cipher
key defined in step #3.
5. If the requested data is not locally cached, or has expired, we must
communicate with the SSO service again to receive fresh data
from the RESTful web services. However, unlike the architecture presented
in section 2 of this document, we log in to the SSO server using a
challenge based on the private key of the user. As such, we log in to the
SSO system using public/private key infrastructure. The user name and
the password are never sent over the network. The SSO system can
identify the user based on this challenge and returns a 496-bit alpha-
numeric token.
6. The tokens generated by the SSO system are set to automatically expire
after a given period of time.
7. On reception of the SSO token, the Android background application can
now communicate with any RESTful web services that adhere to the same
SSO federation. Public/private key infrastructure is once again used to
set up a secure communication channel between the phone and the server.
The certificates of the servers that host the web services are procured from
the same certificate authority that shipped with the phone.
8. On reception of a request, the SSO token is extracted from the request.
The web service calls upon the SSO system to authorize the operation.
9. On reception of the data, the symmetric cipher described in bullet #3
above is used to encrypt the data before it reaches any local persistent
storehouse.
10.Data is returned to the user facing application.
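The cache-encryption steps above (passphrase unlocks key material, key material encrypts the local cache) can be sketched as follows. The sketch substitutes PBKDF2 and AES-GCM for the RSA-4096 key pair and 3DES cipher named in the workflow, purely to keep the illustration short and self-contained; the derivation parameters are illustrative assumptions:

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of the local-cache protection idea: the passphrase never leaves
// the handset; it only unlocks the key material that encrypts and
// decrypts locally cached data.
public class LocalCacheCrypto {
    // Derive the cache key from the passphrase. (The document's design
    // interposes an RSA-4096 key pair and a 3DES cache key; this direct
    // PBKDF2 derivation is a simplifying assumption.)
    static SecretKeySpec deriveKey(char[] passphrase, byte[] salt) {
        try {
            SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            byte[] key = f.generateSecret(new PBEKeySpec(passphrase, salt, 100_000, 256)).getEncoded();
            return new SecretKeySpec(key, "AES");
        } catch (Exception e) { throw new IllegalStateException(e); }
    }

    static byte[] crypt(int mode, SecretKeySpec key, byte[] iv, byte[] in) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(mode, key, new GCMParameterSpec(128, iv));
            return c.doFinal(in);
        } catch (Exception e) { throw new IllegalStateException("cache operation failed", e); }
    }

    // Encrypt an entry before it reaches any local persistent store (step 9).
    static byte[] seal(SecretKeySpec k, byte[] iv, byte[] p) { return crypt(Cipher.ENCRYPT_MODE, k, iv, p); }
    // Decrypt a cached entry once the passphrase has unlocked the key (step 4).
    static byte[] open(SecretKeySpec k, byte[] iv, byte[] c) { return crypt(Cipher.DECRYPT_MODE, k, iv, c); }

    public static void main(String[] args) {
        byte[] salt = new byte[16];
        byte[] iv = new byte[12];
        SecureRandom rnd = new SecureRandom();
        rnd.nextBytes(salt);
        rnd.nextBytes(iv);

        SecretKeySpec key = deriveKey("correct horse".toCharArray(), salt);
        byte[] ct = seal(key, iv, "cached record".getBytes());
        System.out.println(new String(open(key, iv, ct))); // prints: cached record

        try { // a wrong passphrase derives a different key: the cache stays unreadable
            open(deriveKey("wrong pass".toCharArray(), salt), iv, ct);
        } catch (IllegalStateException e) {
            System.out.println("cache unreadable without the passphrase");
        }
    }
}
```

This is exactly the property the security notes rely on: a lost device exposes only ciphertext, and the passphrase remains the gatekeeper.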

Additional security notes:


1. The public/private key pair of the user is generated directly on the
handset at install time. As such, the private key has never left the phone nor has
it been transferred over any network.
2. The certificate of the user must at least be registered once in the SSO
application. This could be done at install time of the handset application.
3. Man-in-the-middle attacks are not possible since the application is
deployed with the CA certificate of the company that will be hosting the web
services.
4. If the device is lost, all the locally cached data is completely
unreadable. The symmetric key that encrypted this data is also unreadable. The
public/private keys that are central to the security architecture are protected by a
passphrase.
5. The passphrase is the weakest link in the chain. If the user enters an
overly simple password, access could be gained to the private key and hence the
locally cached data.
6. That being said, it would be possible to further extend this architecture
to store the encrypted symmetric key on the server. This way, even if the
passphrase of the private key is compromised, the locally cached data still
cannot be accessed. This is because the encrypted strong symmetric cipher key
is stored on the server. By the time the passphrase has been cracked, there has
been ample time to report the stolen phone and revoke this key from this user
account on the server. Furthermore, under this scheme, the key stored on the
server is still encrypted; even if this key is intercepted in transit, it is
useless without the user's private key.
7. It is also possible to enforce a strong password policy directly from the
handset application.
8. Even though this design is significantly more secure than the previous
iteration, the user experience is the same: the user must enter a username
and password to prove his identity.
9. We could augment the architecture in yet another direction. The local
caching system could also require an SSO token and subsequently request
authorization from an SSO system. Such a design would prevent terminated
employees, i.e., individuals who already know what the local credentials are,
from accessing the locally cached data.
Chapter 6
6.1 Execution Environment

Figure 4 Regular Java Execution Process


Figure 5 Android Execution Environment
Figures 4 and 5 represent the regular Java and Android execution paths
respectively. It is interesting to note, however, that the Android compilers do
not operate on Java language source code. Instead, the Android translators work
on the resulting Java bytecode emitted by a traditional Java compiler.
As such, it is possible to reuse existing Java libraries, even if the original
source code is not available. Such libraries must meet stringent requirements,
however; they need to:
1. adhere to the Java SE 5 dialect
2. not use any Java classes or packages found in Java SE 5 not found in
the Android platform
3. not use any packages or classes specific to the Sun Microsystems
platform
4. still behave in a predictable manner under the Apache Harmony Java
environment

Following these guidelines, it's possible to integrate existing Java source
code, packages and libraries piecemeal. Special care will be needed in the
integration phase of such code, but the potential savings offered by such
integration far outweigh the cost of rewriting well-coded, well-documented and
well-tested libraries that are ready for use. Furthermore, it is expected that
as Apache Harmony matures, more and more compatibility issues will be resolved,
further increasing the pool of available Java code able to execute unmodified
on the Android platform.
6.2 The Dalvik Virtual Machine

The Dalvik virtual machine is an interpreter-only machine optimized for
use on low-powered, low-memory devices like phones. Notably, Dalvik does not
make use of just-in-time (JIT) compilation to improve the performance of an
application at runtime. Furthermore, Dalvik is not a Java virtual machine:
Dalvik is unable to read Java bytecode and instead uses its own bytecode format
called "dex". Google claims this format allows battery power to be better
conserved at all stages of execution of an application. This means that
standard Java SE applications and libraries cannot be used directly on the
Android Dalvik virtual machine.
Dalvik, however, stands at the center of the Android value proposition. Its
low electrical power consumption, rich libraries, and unified, non-fragmented
application programming interfaces make it stand out, or so Google hopes, over
the fragmented ecosystem that is Java ME today.
Furthermore, since Dalvik uses the Java programming language but not
the Java execution environment (JVM), Google is free to develop Android
without the need to license or obtain certification from Sun Microsystems
Inc., the legal owner of the Java trademark and brands.
Chapter 7
7.1 Lifecycle of an Android Application

In most cases, every Android application runs in its own Linux process.
This process is created for the application when some of its code needs to be
run, and will remain running until it is no longer needed and the system needs to
reclaim its memory for use by other applications.
An important and unusual feature of Android is that an application
process's lifetime is not directly controlled by the application itself. Instead, it is
determined by the system through a combination of the parts of the application
that the system knows are running, how important these things are to the user,
and how much overall memory is available in the system.
It is important that application developers understand how different
application components (in particular Activity, Service, and IntentReceiver)
impact the lifetime of the application's process. Not using these components
correctly can result in the system killing the application's process while it is
doing important work.
A common example of a process life-cycle bug is an IntentReceiver that
starts a thread when it receives an Intent in its onReceiveIntent() method, and
then returns from the function. Once it returns, the system considers that
IntentReceiver to be no longer active, and thus its hosting process no longer
needed (unless other application components are active in it). Thus, it may kill
the process at any time to reclaim memory, terminating the spawned thread that
is running in it. The solution to this problem is to start a Service from the
IntentReceiver, so the system knows that there is still active work being done in
the process.
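The fix described above can be sketched as follows, using the early IntentReceiver API named in this chapter; the receiver, service and context usage are invented names, and the snippet is illustrative rather than compilable outside an Android project:

```java
// Sketch only: UploadReceiver and UploadService are hypothetical names.
public class UploadReceiver extends IntentReceiver {
    @Override
    public void onReceiveIntent(Context context, Intent intent) {
        // Wrong approach: starting a bare Thread here would leave the
        // process with no active component once this method returns,
        // so the system could kill it, and the thread with it.
        //
        // Right approach: hand the work to a Service, which holds the
        // process at "service" importance until the work completes.
        context.startService(new Intent(context, UploadService.class));
    }
}
```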
To determine which processes should be killed when low on memory,
Android places them into an "importance hierarchy" based on the components
running in them and the state of those components. These are, in order of
importance:
1. A foreground process is one holding an Activity at the top of the
screen that the user is interacting with (its onResume() method has been called)
or an IntentReceiver that is currently running (its onReceiveIntent() method is
executing). There will only ever be a few such processes in the system, and
these will only be killed as a last resort if memory is so low that not even these
processes can continue to run. Generally at this point the device has reached a
memory paging state, so this action is required in order to keep the user
interface responsive.
2. A visible process is one holding an Activity that is visible to the user
on-screen but not in the foreground (its onPause() method has been called). This
may occur, for example, if the foreground activity has been displayed with a
dialog appearance that allows the previous activity to be seen behind it. Such a
process is considered extremely important and will not be killed unless doing so
is required to keep all foreground processes running.
3. A service process is one holding a Service that has been started with
the startService() method. Though these processes are not directly visible to the
user, they are generally doing things that the user cares about (such as
background mp3 playback or background network data upload or download), so
the system will always keep such processes running unless there is not enough
memory to retain all foreground and visible processes.
4. A background process is one holding an Activity that is not
currently visible to the user (its onStop() method has been called). These
processes have no direct impact on the user experience. Provided they
implement their activity life cycle correctly (see Activity for more details), the
system can kill such processes at any time to reclaim memory for one of the
three previous processes types. Usually there are many of these processes
running, so they are kept in an LRU list to ensure the process that was most
recently seen by the user is the last to be killed when running low on memory.
5. An empty process is one that doesn't hold any active application
components. The only reason to keep such a process around is as a cache to
improve startup time the next time a component of its application needs to run.
As such, the system will often kill these processes in order to balance overall
system resources between these empty cached processes and the underlying
kernel caches.
When deciding how to classify a process, the system picks the most
important level of all the components currently active in the process.
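This classification rule can be modeled in a few lines of plain Java; the class and enum names below are invented for illustration and do not correspond to actual Android internals:

```java
import java.util.EnumSet;

// Illustrative model of process classification: a process takes the
// importance of its most important active component.
public class ProcessRanker {
    // Ordered from most to least important, mirroring the five levels.
    enum Importance { FOREGROUND, VISIBLE, SERVICE, BACKGROUND, EMPTY }

    // A process's level is the best (lowest-ordinal) of its components.
    public static Importance classify(EnumSet<Importance> components) {
        if (components.isEmpty()) {
            return Importance.EMPTY; // no active components at all
        }
        Importance best = Importance.EMPTY;
        for (Importance i : components) {
            if (i.ordinal() < best.ordinal()) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // A process hosting both a stopped Activity and a started
        // Service is ranked as a service process, not background.
        System.out.println(classify(
                EnumSet.of(Importance.SERVICE, Importance.BACKGROUND)));
    }
}
```

The real system additionally weighs factors such as recency of use, but the "take the most important active component" rule is the core idea described above.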
7.2 Security and Permissions in Android

Android is a multi-process system, where each application (and parts of


the system) runs in its own process. Most security between applications and the
system is enforced at the process level through standard Linux facilities, such as
user and group IDs that are assigned to applications. Additional finer-grained
security features are provided through a "permission" mechanism that enforces
restrictions on the specific operations that a particular process can perform.
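For example, an application declares the permissions it needs in its manifest, and the system denies the corresponding operations to processes that lack them. The fragment below is a hedged sketch: the permission name is a standard Android one, while the package name is invented.

```xml
<!-- Hypothetical manifest fragment: the package name is invented. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.app">
    <!-- Without this declaration, network operations are denied
         to the application's process. -->
    <uses-permission android:name="android.permission.INTERNET" />
</manifest>
```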
The Android mobile phone platform is going to be more secure than Apple's
iPhone or any other device in the long run. There are several solutions
nowadays to protect the Google phone from various attacks. One of them is the
security vendor McAfee, a member of the Linux Mobile (LiMo) Foundation. This
foundation brings together companies to develop an open mobile-device
software platform. Many of the companies listed in the LiMo Foundation have
also become members of the Open Handset Alliance (OHA).
As a result, Linux secure coding practice should successfully be built into
the Android development process. However, an open platform has its own
disadvantages, such as source-code exposure to black-hat hackers. In
parallel with great opportunities for mobile application developers, there is an
expectation of exploitation and harm. Stealthy Trojans hidden in animated
images, viruses passed from friend to friend, and spyware used for
identity theft: all of these threats will remain active for a long time.
Another solution against such attacks is SMobile Systems' mobile package.
Security Shield, an integrated application that includes anti-virus, anti-spam,
firewall and other mobile protection, is ready to run on the Android
operating system. Currently, the main problem is the possibility for viruses to
pose as an application and do things like dial phone numbers, send text
messages or multimedia messages, or make connections to the Internet during
normal device use. It is also possible for somebody to use the GPS feature to
track a person's location without their knowledge. Hence SMobile Systems is
ready to notify users of and block these threats. But the truth is that it is not
possible to secure a mobile device or personal computer completely as long as it
connects to the Internet, and neither the Android phone nor other devices will
prove to be the exception.
7.3 Development Tools

The Android SDK includes a variety of custom tools that help you develop
mobile applications on the Android platform. The most important of these are
the Android Emulator and the Android Development Tools plugin for Eclipse,
but the SDK also includes a variety of other tools for debugging, packaging, and
installing your applications on the emulator.
Android Emulator
A virtual mobile device that runs on your computer. You can use the emulator
to design, debug, and test your applications in an actual Android run-time
environment.
Android Development Tools Plugin for the Eclipse IDE
The ADT plugin adds powerful extensions to the Eclipse integrated
development environment, making it easier and faster to create and debug your
Android applications. If you use Eclipse, the ADT plugin gives you an
incredible boost in developing Android applications:
It gives you access to other Android development tools from inside the
Eclipse IDE. For example, ADT lets you access the many capabilities of the
DDMS tool: taking screenshots, managing port-forwarding, setting breakpoints,
and viewing thread and process information, directly from Eclipse.
It provides a New Project Wizard, which helps you quickly create and set
up all of the basic files you'll need for a new Android application.
It automates and simplifies the process of building your Android
application.
It provides an Android code editor that helps you write valid XML for your
Android manifest and resource files.

Dalvik Debug Monitor Service (ddms)


Integrated with Dalvik, the Android platform's custom VM, this tool lets you
manage processes on an emulator or device and assists in debugging. You can
use it to kill processes, select a specific process to debug, generate trace data,
view heap and thread information, take screenshots of the emulator or device,
and more.
Android Debug Bridge (adb)
The adb tool lets you install an application's .apk files on an emulator or
device and access the emulator or device from a command line. You can also use
it to link a standard debugger to application code running on an Android
emulator or device.
Android Asset Packaging Tool (aapt)
The aapt tool lets you create .apk files containing the binaries and resources
of Android applications.
Android Interface Description Language (aidl)
The aidl tool lets you generate code for an interprocess interface, such as
what a service might use.
sqlite3
Included as a convenience, this tool lets you access the SQLite data files
created and used by Android applications.
Traceview
This tool produces graphical analysis views of trace log data that you can
generate from your Android application.
mksdcard
Helps you create a disk image that you can use with the emulator to simulate
the presence of an external storage card (such as an SD card).
dx
The dx tool rewrites .class bytecode into Android bytecode (stored in .dex
files).
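A typical invocation might look like the following; this is a hedged sketch that assumes the SDK's tools are on the path, that sources use the Java SE 5 dialect, and that compiled classes land under an invented out/ directory:

```shell
# Compile Java SE 5-compatible source, then translate the resulting
# .class files into a single classes.dex for the Dalvik VM.
javac -source 1.5 -target 1.5 -d out src/com/example/Main.java
dx --dex --output=classes.dex out
```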
activityCreator
A script that generates Ant build files that you can use to compile your
Android applications. If you are developing on Eclipse with the ADT plugin,
you won't need to use this script.
Conclusion

Android is a truly open, free development platform based on Linux and


open source. Handset makers can use and customize the platform without
paying a royalty.
It has a component-based architecture inspired by Internet mash-ups. Parts of
one application can be used in another in ways not originally envisioned by the
developer. You can even replace built-in components with your own improved
versions. This will unleash a new round of creativity in the mobile space.
Android is open to all: industry, developers and users

Participating in many of the successful open source projects

Aims to be as easy to build for as the web.

Google Android is stepping into the next level of Mobile Internet

HARDWARE INTERFACE:

H/W System Configuration:-

Processor    - Pentium III
Speed        - 1.1 GHz
RAM          - 256 MB (min)
Hard Disk    - 20 GB
Floppy Drive - 1.44 MB
Keyboard     - Standard Windows Keyboard
Mouse        - Two- or Three-Button Mouse
Monitor      - SVGA

Interfaces

The following interfaces were all deprecated in API level 21; the new
android.hardware.camera2 API is recommended for new applications:

Camera.AutoFocusCallback
Camera.AutoFocusMoveCallback
Camera.ErrorCallback
Camera.FaceDetectionListener
Camera.OnZoomChangeListener
Camera.PictureCallback
Camera.PreviewCallback
Camera.ShutterCallback

SensorEventListener - Used for receiving notifications from the
SensorManager when there is new sensor data.
SensorEventListener2 - Used for receiving a notification when a flush() has
been successfully completed.
SensorListener - This interface was deprecated in API level 3. Use
SensorEventListener instead.
Classes

The following classes were all deprecated in API level 21; the new
android.hardware.camera2 API is recommended for new applications:

Camera
Camera.Area
Camera.CameraInfo
Camera.Face
Camera.Parameters
Camera.Size

ConsumerIrManager - Class that operates consumer infrared on the device.
ConsumerIrManager.CarrierFrequencyRange - Represents a range of carrier
frequencies (inclusive) on which the infrared transmitter can transmit.
GeomagneticField - Estimates the magnetic field at a given point on Earth
and, in particular, computes the magnetic declination from true north.
Sensor - Class representing a sensor.
SensorAdditionalInfo - Represents a sensor additional information frame,
which is reported through the listener callback onSensorAdditionalInfo.
SensorEvent - Represents a sensor event and holds information such as the
sensor's type, the timestamp, accuracy and, of course, the sensor's data.
SensorEventCallback - Used for receiving sensor additional information
frames.
SensorManager - Lets you access the device's sensors.
SensorManager.DynamicSensorCallback - Used for receiving notifications
from the SensorManager when dynamic sensors are connected or disconnected.
TriggerEvent - Represents a Trigger Event, the event associated with a
Trigger Sensor.
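As a hedged sketch of how the sensor classes above fit together, the activity below registers for accelerometer updates; the class name is invented, and the snippet is illustrative rather than compilable outside an Android project:

```java
// Sketch: MySensorActivity is a hypothetical name; the SensorManager,
// Sensor, SensorEvent and SensorEventListener types are those listed above.
public class MySensorActivity extends Activity implements SensorEventListener {
    private SensorManager sensorManager;

    @Override
    protected void onResume() {
        super.onResume();
        sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
        // SENSOR_DELAY_NORMAL trades update latency for battery life.
        sensorManager.registerListener(this, accel,
                SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    protected void onPause() {
        super.onPause();
        sensorManager.unregisterListener(this); // stop updates to save power
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        float x = event.values[0]; // acceleration along the x axis
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // No-op for this sketch.
    }
}
```

Registering in onResume() and unregistering in onPause() matches the activity lifecycle described in section 7.1, so a backgrounded activity does not keep the sensors powered.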
