File System
System Performer
Charithardha. CH
7/1/2009
Introduction
Abstract: A hard disk is organized into blocks, which are the storage units for files. The space a file occupies depends on how the operating system allocates blocks to it. A file whose blocks are not contiguous is called a fragmented file. File Fragmentation should be a stand-alone application that splits large files into smaller fragments and recombines fragments into the original files.
Split: The system should be able to split a source file into fragments of a given size and save the fragments at the target location. To split a file into fragments the user should select the following: the source file (for example, a .java file); the target (destination); and the fragment size unit from the drop-down list (MB, KB, Bytes).
Stop: The application should allow the user to stop in the middle of fragmentation.
Exit: The system should allow the user to exit from the application.
Help: The system should provide help on using the application.
About: The system should display information about the application.
Rejoin: The system should allow the user to rejoin the fragments of one or more files.
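The split and rejoin operations described above can be sketched in a few lines of Java. This is a minimal illustration, not the project's actual implementation: the class name, the `.partN` naming scheme, and the whole-file reads (fine for small files, wasteful for very large ones) are all assumptions.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

public class FileSplitter {
    // Split `source` into fragments of at most `fragmentSize` bytes,
    // written as source.part0, source.part1, ... inside `targetDir`.
    static int split(Path source, Path targetDir, int fragmentSize) throws IOException {
        byte[] data = Files.readAllBytes(source);
        int parts = 0;
        for (int off = 0; off < data.length; off += fragmentSize) {
            int len = Math.min(fragmentSize, data.length - off);
            Path part = targetDir.resolve(source.getFileName() + ".part" + parts++);
            Files.write(part, Arrays.copyOfRange(data, off, off + len));
        }
        return parts;  // number of fragments created
    }

    // Rejoin the fragments back into `dest`, in part order.
    static void rejoin(Path targetDir, String baseName, int parts, Path dest) throws IOException {
        try (OutputStream out = Files.newOutputStream(dest)) {
            for (int i = 0; i < parts; i++) {
                out.write(Files.readAllBytes(targetDir.resolve(baseName + ".part" + i)));
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("split-demo");
        Path src = dir.resolve("Sample.java");
        Files.write(src, "class Sample { int x = 0; }".getBytes());  // 27 bytes
        int parts = split(src, dir, 16);                  // 16-byte fragments
        Path joined = dir.resolve("Sample.joined");
        rejoin(dir, "Sample.java", parts, joined);
        System.out.println(parts + " " + Arrays.equals(
            Files.readAllBytes(src), Files.readAllBytes(joined)));  // prints "2 true"
    }
}
```

A Stop button in the real application would simply interrupt the split loop between fragments, leaving the already-written parts on disk.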
Solution: Fragmented files should be identified and defragmented at a destination location. Files from the source location have to be moved to the target location so that they occupy contiguous blocks. The application has to provide a utility to select a file from the source location and save it at the target location. The application should provide a facility to stop the defragmentation of files, and should allow the user to exit from the application.
This application can be used by programmers, technical writers and other document writers who want to keep their files in a secure state. Disk Defragmenter is a system utility for analyzing local volumes and for locating and consolidating fragmented files and folders. A hard disk is organized into blocks, which are the storage units for files. A file whose blocks are not contiguous is called a fragmented file. File Fragmentation should be a stand-alone application to split large files into smaller constituents and to combine them back into large files. Rearranging the blocks of a file so that they are contiguous is known as defragmentation.
System Analysis:
Problem Statement:
As advanced as hard drives have become, one item they are not
very good at is housekeeping, or maybe that should be drive keeping.
When files are created, deleted, or modified it's almost a certainty they
will become fragmented. Fragmented simply means the file is not
stored in one place in its entirety, or what computer folks like to call a
contiguous location. Different parts of the file are scattered across the
hard disk in noncontiguous pieces. The more fragmented files there are
on a drive, the more performance and reliability suffer as the drive
heads have to search for all the pieces in different locations.
Suppose a new disk has had five files saved on it, named A, B, C, D and E, each using 10 blocks of space (here the block size is unimportant). As the free space is contiguous, the files are located one after the other (Example (1)).
Now suppose file B is deleted, leaving a 10-block hole in the middle of the disk (Example (2)). If a new file F requires 7 blocks of space, it can be placed into the first 7 blocks of the space formerly holding file B, and the 3 blocks following it will remain available (Example (3)). If another new file G is added and needs only 3 blocks, it could then occupy the space after F and before C (Example (4)).
If subsequently F needs to be expanded, since the space immediately
following it is occupied, there are three options: (1) add a new block
somewhere else and indicate that F has a second extent, (2) move files
in the way of the expansion elsewhere, to allow F to remain
contiguous; or (3) move file F so it can be one contiguous file of the
new, larger size. The second option is probably impractical for
performance reasons, as is the third when the file is very large. Indeed
the third option is impossible when there is no single contiguous free
space large enough to hold the new file. Thus the usual practice is
simply to create an extent somewhere else and chain the new extent
onto the old one (Example (5).)
Material added to the end of file F would be part of the same extent.
But if there is so much material that no room is available after the last
extent, then another extent would have to be created, and so on, and
so on. Eventually the file system has free segments in many places
and some files may be spread over many extents. Access time for
those files (or for all files) may become excessively long.
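The allocation story above (files A through G, Examples (1) through (5)) can be simulated in a few lines of Java. The first-fit, block-by-block allocator below is a hypothetical model of a file system, used only to show how file F ends up with two extents:

```java
import java.util.Arrays;

public class FragmentationDemo {
    // The disk as an array of block owners; '.' marks a free block.
    static char[] disk = new char[60];

    // First-fit allocation: take free blocks one at a time, left to right.
    static void allocate(char file, int blocks) {
        int placed = 0;
        for (int i = 0; i < disk.length && placed < blocks; i++) {
            if (disk[i] == '.') { disk[i] = file; placed++; }
        }
    }

    static void free(char file) {
        for (int i = 0; i < disk.length; i++) if (disk[i] == file) disk[i] = '.';
    }

    // Count a file's extents: a file with more than one extent is fragmented.
    static int extents(char file) {
        int extents = 0;
        boolean inside = false;
        for (char b : disk) {
            boolean mine = (b == file);
            if (mine && !inside) extents++;
            inside = mine;
        }
        return extents;
    }

    public static void main(String[] args) {
        Arrays.fill(disk, '.');
        for (char f : new char[]{'A', 'B', 'C', 'D', 'E'}) allocate(f, 10); // Example (1)
        free('B');        // Example (2): B deleted, a 10-block hole remains
        allocate('F', 7); // Example (3): F fills the first 7 of B's blocks
        allocate('G', 3); // Example (4): G takes the remaining 3 blocks
        allocate('F', 5); // Example (5): F grows; the new blocks land after E
        System.out.println("F extents: " + extents('F'));  // prints "F extents: 2"
    }
}
```

Running the simulation shows F split into two extents (blocks 10-16 and 50-54), exactly the chained-extent situation described in the text.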
File fragmentation
Individual file fragmentation occurs when a single file has been broken
into multiple pieces (called extents on extent-based file systems).
While disk file systems attempt to keep individual files contiguous, this
is not often possible without significant performance penalties. File
system check and defragmentation tools typically only account for file
fragmentation in their "fragmentation percentage" statistic.
Free space fragmentation
Free (unallocated) space fragmentation occurs when there are several
unused areas of the file system where new files or metadata can be
written to. Unwanted free space fragmentation is generally caused by
deletion or truncation of files, but file systems may also intentionally
insert fragments ("bubbles") of free space in order to facilitate extending nearby files (see preemptive techniques below).
File scattering
File segmentation, also called related-file fragmentation, or application-
level (file) fragmentation, refers to the lack of locality of reference
(within the storing medium) between related files (see file sequences
for more detail). Unlike the previous two types of fragmentation, file
scattering is a much more vague concept, as it heavily depends on the
access pattern of specific applications. This also makes objectively
measuring or estimating it very difficult. However, arguably, it is the
most critical type of fragmentation, as studies have found that the
most frequently accessed files tend to be small compared to available
disk throughput per second.[6]
To avoid related file fragmentation and improve locality of reference (in
this case called file contiguity), assumptions about the operation of
applications have to be made. A very frequent assumption made is
that it is worthwhile to keep smaller files within a single directory
together, and lay them out in the natural file system order. While it is
often a reasonable assumption, it does not always hold. For example,
an application might read several different files, perhaps in different
directories, in the exact same order they were written. Thus, a file
system that simply orders all writes successively, might work faster for
the given application.
Purpose:
Preemptive techniques
The HFS Plus file system transparently defragments files that are less
than 20 MiB in size and are broken into 8 or more fragments, when the
file is being opened.[10]
Stateless techniques
Objective:
Existing System:
There is an inbuilt Disk Defragmenter utility provided by the Windows operating system. The Disk Defragmenter utility is designed to reorganize noncontiguous files into contiguous files and optimize their placement on the hard drive for increased reliability and performance.
Disk Defragmenter can be opened a number of different ways. The
most common methods are listed below.
• Start | All Programs | Accessories | System Tools | Disk Defragmenter
• Start | Run | and type dfrg.msc in the Open line. Click OK
• Start | Administrative Tools | Computer Management. Expand Storage
and select Disk Defragmenter.
Proposed System:
This system is a java application which identifies the fragmented files
(non contiguous) in the hard disk and reorganizes memory contiguously for
them. It also collects all the free space available in between the fragmented
files and integrates them together to make free memory blocks. As the files
are defragmented, it speeds up the file access to the user. Therefore
improves the performance of the system.
Analyzer: Analyzes a particular drive and sends the report to the view-report module, allowing you to defragment selected files and directories. Regular defragmentation increases the overall performance of your system dramatically. The key advantage of the Rapid File Defragmenter is the ability to group files and folders into profiles and defragment only the selected ones.
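Java's standard library cannot see the physical block layout of a volume, so a true per-file fragment count would need native calls; still, an analyzer can report each file's size, the number of blocks it needs, and the volume's free space through the NIO API. The sketch below is a minimal, hypothetical version of the Analyzer module; the class name and the assumed 4 KiB block size are not from the original design.

```java
import java.io.IOException;
import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class DriveAnalyzer {
    static final long BLOCK = 4096;  // assumed block size in bytes

    // Walk `root`, print each regular file's size and block count,
    // then report the volume's free space. Returns the file count.
    static long analyze(Path root) throws IOException {
        long count = 0;
        try (Stream<Path> stream = Files.walk(root)) {
            for (Path p : (Iterable<Path>) stream.filter(Files::isRegularFile)::iterator) {
                long size = Files.size(p);
                long blocks = (size + BLOCK - 1) / BLOCK;  // round up to whole blocks
                System.out.println(p.getFileName() + ": " + size + " bytes, " + blocks + " block(s)");
                count++;
            }
        }
        FileStore store = Files.getFileStore(root);
        System.out.println("Free space on volume: " + store.getUsableSpace() + " bytes");
        return count;
    }

    public static void main(String[] args) throws IOException {
        analyze(Paths.get(args.length > 0 ? args[0] : "."));
    }
}
```

The report this produces would then be handed to the view-report module for display.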
Feasibility Study:
This application is feasible to deploy in any environment, and is easy to deploy and run on any Windows system. The project is feasible for any organization because its cost is less than the benefit gained from using the application. It can be used on all client systems and servers. The feasibility study of the defragmented file system emphasizes the healthy status of the systems.
• Technical Feasibility:
• Economic Feasibility:
• Operational Feasibility:
• Schedule feasibility
A project will fail if it takes so long to complete that it is no longer useful. Typically this means estimating how long the system will take to develop and whether it can be completed in a given time period, using methods such as the payback period. Schedule feasibility is a measure of how reasonable the
project timetable is. Given our technical expertise, are the project deadlines
reasonable? Some projects are initiated with specific deadlines. You need to
determine whether the deadlines are mandatory or desirable.
Functional Requirements:
The application should not only reveal the hardware defects of the client's system but also report whether the system is running or shut down.
Process
1. Business modeling:
2. Data modeling:
3. Process modeling:
4. Application generation:
The RAD model assumes the use of RAD tools like VB, VC++, Delphi etc., rather than creating software using conventional third-generation programming languages. The RAD model works to reuse existing program
components (when possible) or create reusable components (when
necessary). In all cases, automated tools are used to facilitate construction of
the software.
RAD ARCHITECTURE (fig. 4)
• Software Requirements
• Hardware Requirements
SOFTWARE REQUIREMENTS
This project is a stand-alone application; hence we used JSP, Servlets and JavaScript.
Processor : Pentium 4
RAM : 512 MB
Hard Disk : 40 GB
Use Cases
• Packages
- one primary kind of grouping.
- Can be nested.
2.1 Dependency
2.2 Association
2.3 Generalization
2.4 Realization
2.1 Dependency:
A semantic relationship between two things in which a change to one
thing (independent) may affect the semantics of the other thing (dependent).
Graphically, a dependency is rendered as a dashed line, possibly directed,
and occasionally including a label.
2.2 Association:
An association is a structural relationship that describes a set of links,
a link being a connection among objects. Aggregation is a special kind of
association, representing a structural relationship between a whole and its
parts. Graphically, an association is rendered as a solid line, possibly directed,
occasionally including a label, and often containing other adornments.
(Figure: an Employer-Employee association, with multiplicities 0..1 and *.)
2.3 Generalization:
A specialization/generalization relationship in which objects of the
specialized element (the child) are more specific than the objects of the
generalized element. Graphically, a generalization relationship is rendered as
a solid line with a hollow arrowhead pointing to the parent.
2.4 Realization:
A semantic relationship between two elements, wherein one element
guarantees to carry out what is expected by the other element. Graphically, a
realization is rendered as a cross between a generalization and a dependency
relationship.
System
Draw your system's boundaries using a rectangle that contains use cases.
Use Case
Draw use cases using ovals. Label the ovals with verbs that represent the system's functions.
Actors
Actors are the users of a system. When one system is the actor of another
system, label the actor system with the actor stereotype.
Relationships
Illustrate relationships between an actor and a use case with a simple line.
For relationships among use cases, use arrows labeled either "uses" or
"extends." A "uses" relationship indicates that one use case is needed by
another in order to perform a task. An "extends" relationship indicates
alternative options under a certain use case.
Testing
Testing is the process of detecting errors. It plays a very critical role in quality assurance and in ensuring the reliability of software. The results of testing are also used later on during maintenance. In the test phase, various test cases intended to find the bugs and loopholes that exist in the software are designed. During testing, the program under test is executed with a set of test cases and its output is checked to see whether the program performs as expected.
Often when we test our programs, test cases are treated as "throw away" cases: after testing is complete, the test cases and their outcomes are discarded. The main objective of testing is to find errors, especially errors not uncovered until that moment. Testing cannot show the absence of defects; it can only show that defects are present. It is therefore worthwhile to retain a set of interesting test cases, along with their expected output, for future use.
Software testing is a crucial element and represents the ultimate review of specification, design and coding. There are black box testing and glass (white) box testing. Black box testing checks the software's external behavior against its specification, while white box testing is predicated on a close examination of procedural detail.
The software is tested using the control structure testing method under white box testing techniques. Two tests are done under this approach: condition testing, to check for Boolean operator errors, Boolean variable errors, Boolean parenthesis errors and so on; and loop testing, to check simple loops and nested loops.
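Condition testing, mentioned above, drives each Boolean sub-expression to true and false independently so that operator errors (for example `&&` written where `||` was meant) are exposed. A minimal sketch follows; the `accepts` predicate is hypothetical, invented only to have a compound condition to test.

```java
public class ConditionTestingDemo {
    // Hypothetical unit under test: a fragment is acceptable when it is
    // non-empty AND fits in the free space of the target volume.
    static boolean accepts(long fragmentBytes, long freeBytes) {
        return fragmentBytes > 0 && fragmentBytes <= freeBytes;
    }

    static void check(boolean ok, String label) {
        if (!ok) throw new AssertionError(label);
    }

    public static void main(String[] args) {
        // Drive each sub-condition to true and false independently.
        check(accepts(10, 100),  "both conditions true");
        check(!accepts(0, 100),  "first condition false (empty fragment)");
        check(!accepts(200, 100), "second condition false (does not fit)");
        check(!accepts(0, -1),   "both conditions false");
        System.out.println("condition tests passed");  // prints "condition tests passed"
    }
}
```

Had the `&&` been mistyped as `||`, the second and third cases would fail, which is exactly the class of defect condition testing is designed to catch.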
Faults can occur during any phase of the software development cycle. Verification is performed on the output of each phase, but some faults are likely to remain undetected by these methods and will eventually be reflected in the code. Testing is usually relied upon to detect these faults, in addition to the faults introduced during the coding phase itself. For this, different levels of testing are used, which perform different tasks and aim to test different aspects of the system.
Psychology of Testing
The aim of testing is often to demonstrate that a program works by showing
that it has no errors. The basic purpose of testing phase is to detect the
errors that may be present in the program. Hence one should not start testing
with the intent of showing that a program works, but the intent should be to
show that a program doesn’t work. Testing is the process of executing a
program with the intent of finding errors.
Testing Objectives
The main objective of testing is to uncover a host of errors,
systematically and with minimum effort and time. Stating formally, we
can say,
➢ Testing is a process of executing a program with the intent of
finding an error.
➢ A successful test is one that uncovers an as yet undiscovered error.
➢ A good test case is one that has a high probability of finding error, if
it exists.
➢ If the tests are inadequate, possibly present errors will go undetected.
➢ The software more or less conforms to the quality and reliability standards.
Levels of Testing
In order to uncover the errors present in different phases we have the
concept of levels of testing. The basic levels of testing are as shown
below…
(Figure: levels of testing. Client needs, requirements, design and code are verified by acceptance testing, system testing, integration testing and unit testing respectively.)
System Testing
The philosophy behind testing is to find errors. Test cases are devised
with this in mind. A strategy employed for system testing is code
testing.
Code Testing:
This strategy examines the logic of the program. To follow this method
we developed some test data that resulted in executing every
instruction in the program and module i.e. every path is tested.
Systems are not designed as entire units, nor are they tested as single systems. To ensure that the coding is correct, two types of testing are performed on all systems.
Types Of Testing
➢ Unit Testing
➢ Link Testing
Unit Testing
Unit testing focuses verification effort on the smallest unit of software
i.e. the module. Using the detailed design and the process
specifications testing is done to uncover errors within the boundary of
the module. All modules must be successful in the unit test before the
start of the integration testing begins.
In this project each service can be thought of as a module. There are modules such as Login, HWAdmin, MasterAdmin, Normal User, and PManager. Each module has been tested by giving different sets of inputs, both while developing the module and after finishing its development, so that each module works without any error. The inputs are validated when accepted from the user.
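The input validation described above can be unit-tested in isolation by giving the module different sets of inputs. The sketch below tests a hypothetical `toBytes` helper, converting the user's fragment size and the unit chosen from the drop-down list (MB, KB, Bytes) into a byte count; it uses plain assertions in `main` rather than a test framework, and the class and method names are assumptions.

```java
public class FragmentUnitTest {
    // Hypothetical module under test: convert a fragment size and its
    // unit (from the MB/KB/Bytes drop-down) into a byte count.
    static long toBytes(long value, String unit) {
        switch (unit) {
            case "MB":    return value << 20;
            case "KB":    return value << 10;
            case "Bytes": return value;
            default: throw new IllegalArgumentException("unknown unit: " + unit);
        }
    }

    static void check(boolean ok, String label) {
        if (!ok) throw new AssertionError(label);
    }

    public static void main(String[] args) {
        // Different sets of inputs, one per supported unit.
        check(toBytes(2, "MB") == 2_097_152, "2 MB");
        check(toBytes(512, "KB") == 524_288, "512 KB");
        check(toBytes(100, "Bytes") == 100, "100 Bytes");
        // Invalid input must be rejected, not silently accepted.
        boolean rejected = false;
        try { toBytes(1, "GB"); } catch (IllegalArgumentException e) { rejected = true; }
        check(rejected, "invalid unit must be rejected");
        System.out.println("unit tests passed");  // prints "unit tests passed"
    }
}
```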
Link Testing
Link testing does not test software but rather the integration of each
module in system. The primary concern is the compatibility of each
module. The Programmer tests where modules are designed with
different parameters, length, type etc.
Integration Testing
After the unit testing we have to perform integration testing. The goal here is to see if the modules can be integrated properly; the emphasis is on testing the interfaces between modules. This testing activity can be considered as testing the design, and hence the emphasis is on testing module interactions.
In this project, integrating all the modules forms the main system. When integrating the modules I checked whether the integration affects the working of any of the services, by giving different combinations of inputs with which the services ran perfectly before integration.
System Testing
Here the entire software system is tested. The reference document for this process is the requirements document, and the goal is to see if the software meets its requirements.
The entire application has been tested against the requirements of the project, and it has been checked whether all the requirements have been satisfied.
Acceptance Testing
Acceptance Test is performed with realistic data of the client to
demonstrate that the software is working satisfactorily. Testing here is
focused on external behavior of the system; the internal logic of
program is not emphasized.
In this project I collected some realistic data and tested whether the application works correctly.
Test cases should be selected so that the largest number of attributes
of an equivalence class is exercised at once. The testing phase is an
important part of software development. It is the process of finding
errors and missing operations and also a complete verification to
determine whether the objectives are met and the user requirements
are satisfied.
1) A good test case is one that reduces, by a count greater than one, the number of additional test cases that must be designed to achieve reasonable testing.
Implementation
A product software implementation method is a systematically
structured approach to effectively integrate a software based service
or component into the workflow of an organizational structure or an
individual end-user.
This entry focuses on the process modeling (Process Modeling) side of
the implementation of “large” (explained in complexity differences)
product software, using the implementation of Enterprise Resource
Planning systems as the main example to elaborate on.
Overview
A product software implementation method is a blueprint to get users
and/or organizations running with a specific software product. The
method is a set of rules and views to cope with the most common
issues that occur when implementing a software product: business
alignment from the organizational view and acceptance from the
human view.
The implementation of product software, as the final link in the deployment chain of software production, is a major issue from a financial perspective. It is stated that the implementation of (product) software consumes up to one third of the budget of a software purchase.
Implementation complexity differences
The complexity of implementing product software differs on several
issues. Examples are: the number of end users that will use the
product software, the effects that the implementation has on changes
of tasks and responsibilities for the end user, the culture and the
integrity of the organization where the software is going to be used
and the budget available for acquiring product software.
In general, differences are identified on a scale of size (bigger, smaller,
more, less). An example of the “smaller” product software is the
implementation of an office package. However there could be a lot of
end users in an organization, the impact on the tasks and
responsibilities of the end users will not be too intense, as the daily
workflow of the end user is not changing significantly. An example of
“larger” product software is the implementation of an Enterprise
Resource Planning system. The implementation requires in-depth
insights on the architecture of the organization as well as of the
product itself, before it can be aligned. Next, the usage of an ERP system involves much more dedication from the end users, as new tasks and responsibilities will be created or shifted.
Software customization and Business Process Redesign
Process modeling, used to align product software and organizational
structures, involves a major issue, when the conclusion is drawn that
the product software and the organizational structure do not align well
enough for the software to be implemented. In this case, two
alternatives are possible: the customization of the software or the
redesign of the organizational structure, thus the business processes.
Implementation Frameworks
The guiding principle versus the profession
Another issue on the implementation process of product software is the
choice, or actually the question, to what extent an implementation
method should be used.
Implementation methods can on the one hand be used as a guiding
principle, indicating that the method serves as a global idea about how
the implementation phase of any project should run. This choice leaves
more room for situational factors that are not taken into account in the
chosen method, but will result in ambiguity when questions arise in the
execution of the implementation process.
On the other hand methods can be used as a profession, meaning that
the method should be taken strict and the usage of the method should
be a profession, instead of a guiding principle. This view is very useful
if the implementation process is very complex and is very dependent
on exact and precise acting. Organizational and quality management
will embrace this view, as a strict usage of any method results in more
clarity on organizational level. Change management however might
indicate that more flexibility in an implementation method leaves more
room for the soft side of implementation processes.
Implementation frameworks
Apart from implementation methods serving as the set of rules to
implement a specific product or service, implementation frameworks
serve as the project managed structure to define the implementation
phase in time, budget and quality.
Several project management methods can serve as a basis to perform
the implementation method. Since this entry focuses on the
implementation of product software, the best project management
methods suitable for supporting the implementation phase are project
management methods that focus on software and information systems
itself as well. The applicability of using a framework for implementation methods is clarified by the examples of using DSDM and Prince2 as project management method frameworks.
DSDM
The power of DSDM is that the method uses the principles of iteration
and incremental value, meaning that projects are carried out in
repeating phases where each phase adds value to the project. In this
way implementation phases can be carried out incrementally, adding
value to for example the degree of acceptance, awareness and skills
within every increment [F. Von Meyenfeldt, Basiskennis projectmanagement, Academic Service 1999]. Besides the management of change scope, increments are also usable in the process modeling scope of implementation phases. Using increments
can align process models of business architectures and product
software as adding more detail in every increment of the phase draws
both models closer. The DSDM also has room for phased training,
documentation and reviewing.
The image below illustrates how implementation phases are supported
by the usage of DSDM, focusing on management of change, process
modeling and support.
Prince2
As DSDM does, the Prince2 method acknowledges implementation as a
phase within the method. Prince2 consists of a set of processes, of
which 3 processes are especially meant for implementation. The
processes of controlling a stage, managing product delivery and
managing stage boundaries enable an implementation process to be detailed with factors such as time and quality. The Prince2 method can be
carried out iteratively but is also suitable for a straight execution of the
processes.
The profits for any implementation process being framed in a project
management framework are:
Clarity
An implementation framework allows the process to be detailed with factors such as time, quality, budget and feasibility.
Iterative, incremental approach
As explained, the possibility to execute different phases of the
implementation process iteratively enables the process to be executed
by incrementally aligning the product to be implemented with the end-
user (organization).
Assessments
Using an embedded method brings the power that the method is
designed to implement the software product that the method comes
with. This suggests a less complicated usage of the method and more
support possibilities. The negative aspect of an embedded method
obviously is that it can only be used for specific product software.
Engineers and consultants, operating with several software products,
could have more use of a general method, to have just one way of
working.
Using a generic method like ERP modeling has the power that the
method can be used for several ERP systems. Unlike embedded
methods, the usage of generic methods enables engineers and
consultants that operate in a company where several ERP systems are
implemented in customer organizations, to adapt to one specific
working method, instead of having to acquire skills for several
embedded models. Generic methods have however the lack that
implementation projects could become too situational, resulting in
difficulties and complexity in the execution of the modeling process, as
less support will be available.
Managing project delivery is essential to avoid the common problems
of the software solution not working as expected or crashing out due to
multiple users accessing the system at the same time. The keys to
project delivery are: successful implementation of the software,
managing the business change and scaling up the business use
quickly.
Successful Implementation
Successful implementation of the software must be planned carefully.
In short there are two key options for delivering the software: big bang or phased release.
• A "big bang" deployment releases the software to all users at the same time.
• A phased deployment releases the software to users over a period of time, for example by department or by geographical location.
The project needs to make a considered decision on the best way to release a software solution to the business. Businesses will often choose a phased deployment, as it reduces project risk: if there is a problem, the business impact is reduced. In addition, the project deployment of software includes:
• Cleanup of the "test" environment following successful completion of testing
• Preparation of project deployment to the business, such as setting up user accounts to access the system and ensuring any lists of values have valid values
• Deploying the software to the "production" environment ready for normal business use
• A plan and mechanism to back out of the production software deployment if the process goes wrong for some unexpected reason, restoring the business to its pre-deployment state
Some of these ideas have developed from IT Service Management and its discipline of Release Management; for more background read: Release Management: Where to Start? Project management should borrow and evolve good ideas whenever needed.
Managing the Business Change of Project Delivery
Project deployment of the software to the business units such that they
are able to use it from a specified date/time is not enough by itself.
Managing the business change is an essential part of project delivery
and that needs to include:
• Building awareness within the business of the software solution through communication
• Developing business support and momentum to use the solution through stakeholder engagement
• Planning and executing the training plan for business users and administrators
• A business plan to exploit the use of the solution and to scale up the number of users
• Setting up and operating a customer board to manage the evolution of the solution
Software Maintenance
Software maintenance in software engineering is the modification of a software product after delivery to correct faults, to improve performance or other attributes, or to adapt the product to a modified environment.
Overview
The problem and modification analysis process is executed once the application has become the responsibility of the maintenance group. The
maintenance programmer must analyze each request, confirm it (by
reproducing the situation) and check its validity, investigate it and propose a
solution, document the request and the solution proposal, and, finally, obtain
all the required authorizations to apply the modifications.
There are a number of processes, activities and practices that are unique to maintainers.
The key software maintenance issues are both managerial and technical. Key
management issues are: alignment with customer priorities, staffing, which
organization does maintenance, estimating costs. Key technical issues are:
limited understanding, impact analysis, testing, maintainability
measurement.
Preventive Maintenance
Preventive maintenance is a schedule of planned maintenance actions aimed at the prevention of
breakdowns and failures. The primary goal of preventive maintenance is to prevent the failure of
equipment before it actually occurs. It is designed to preserve and enhance equipment reliability
by replacing worn components before they actually fail. Preventive maintenance activities include
equipment checks, partial or complete overhauls at specified periods, oil changes, lubrication and
so on. In addition, workers can record equipment deterioration so they know to replace or repair
worn parts before they cause system failure. Recent technological advances in tools for
inspection and diagnosis have enabled even more accurate and effective equipment
maintenance. The ideal preventive maintenance program would prevent all equipment failure
before it occurs.
Conclusion:
BIBLIOGRAPHY:
JAVA:
JAVA Complete Reference by Subrahmanyam Allamaraju and Cedric Buest
JAVA Script Programming by Yehuda Shiran
J2EE by Shadab Siddiqui
JAVA Server Pages by Larne Pekowsky
JAVA Server Pages by Nick Todd