
INTRODUCTION
This software is a prototype model intended as a tool for the various banks of India. Banks routinely handle account information, withdrawals (by cash or cheque), and deposits. The objective of a single window based banking system is to prepare software that maintains data and provides a user-friendly interface for retrieving customer-related details within seconds and with complete accuracy. The software is fully computerized, so it is not a time-consuming process, requires no paper work, and can be extended further. The application facilitates the addition of new customer accounts, the deletion of accounts, and the modification of existing accounts, and every account must be opened with a minimum balance of Rs. 500. The single window handles dealings such as withdrawing cash from a savings or current account, purchasing drafts or pay orders, and making fixed deposits, all at a single counter, so that all customer needs are attended to at a single point of delivery. Earlier, for these types of transactions, a customer had to approach different staff at different counters; with this software, all facilities are available at one counter or window.

PURPOSE
This application software lets its user access or create accounts whenever required. The administrator has the privilege to update, modify, and generate the transaction table, customer table, and user table. User information, login details, and employee details are stored accordingly. The application reduces paper work and lets its users perform all tasks automatically. It is user friendly and easily configurable.

OBJECTIVE
In this project, we are developing software for handling the financial transactions of a customer in a banking environment, in order to serve the needs of the end banking user by providing various ways to perform banking tasks. It also aims to give the user's workspace additional functionality that is not provided by conventional banking software. The system should allow only authorized users to access the various functions and processes available in it, locate any account wanted by an employee, reduce clerical work by having most of it done by the computer, and provide greater speed with reduced time consumption.

SYSTEM SPECIFICATION

HARDWARE REQUIREMENT
RAM              : 512 MB minimum
PROCESSOR        : 1.8 GHz or higher
HARD DISK        : 5 GB minimum
OTHERS           : Monitor, keyboard, mouse

SOFTWARE REQUIREMENT

FRONT END        : Java 1.6
BACK END         : MySQL 5.1
OPERATING SYSTEM : Windows XP or later


TECHNOLOGY USED
This project is an application developed in Java 1.6, with MySQL 5.1 as the back end.


JAVA
Java is a programming language originally developed by James Gosling at Sun Microsystems (now a subsidiary of Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++ but has a simpler object model and fewer low-level facilities. Java is a concurrent, class-based, object-oriented language specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere" (WORA), meaning that code that runs on one platform does not need to be recompiled to run on another: Java applications are typically compiled to byte code (class files) that can run on any Java virtual machine (JVM) regardless of computer architecture. Java is currently one of the most popular programming languages in use, and is widely used for everything from application software to web applications.
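As a minimal illustration of the compile-once, run-anywhere idea described above, consider this toy program (a generic example, not code from this project). Compiling it with javac produces a HelloWorld.class byte code file, and that same file can be executed unchanged by the JVM on Windows, Linux, or any other platform:

    // HelloWorld.java -- compile once:  javac HelloWorld.java
    // run anywhere a JVM exists:        java HelloWorld
    public class HelloWorld {
        public static void main(String[] args) {
            System.out.println("Hello from any JVM, on any architecture");
        }
    }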

Advantages of Java

Java is simple: it is easier to design, write, compile, debug, and learn than many other programming languages.

Java is object-oriented, which helps in building modular programs and code that can be reused in other applications.

Java is platform-independent and flexible in nature. The most significant feature of Java is the ability to move a program easily from one computer system to another.

Java works in distributed environments. It is designed for distributed computing: network programming in Java is as simple as sending and receiving data to and from a file.

Java is secure. The Java language, compiler, interpreter, and runtime environment were each designed with security in mind.

Java is robust. Robust means reliable. Java emphasises checking for possible errors: Java compilers are able to detect many problems in a program before and during the execution of the program code.

Java supports multithreading. A thread is a path of execution that allows a program to perform several tasks simultaneously. Java comes with the concept of multithreaded programming built in, whereas in other languages operating-system-specific procedures have to be called in order to enable multithreading.


JAVA FEATURES
Platform Independence
o The write-once-run-anywhere ideal has not been fully achieved (tuning for different platforms is usually required), but Java comes closer to it than other languages.

Object oriented
o Object oriented throughout: no coding outside of class definitions, including main().
o An extensive class library is available in the core language packages.

Compiler/interpreter Combination
o Code is compiled to byte codes that are interpreted by a Java virtual machine (JVM).
o This provides portability to any machine for which a virtual machine has been written.
o The two steps of compilation and interpretation allow for extensive code checking and improved security.

Robust
o Exception handling is built in, type checking is strong (that is, all data must be declared with an explicit type), and local variables must be initialized.
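A minimal sketch of the robustness features just listed (illustrative only, not code from this project): the local variable is explicitly initialized, and a possible run-time error is caught through the built-in exception mechanism.

    public class RobustDemo {
        public static void main(String[] args) {
            int balance = 0; // local variables must be initialized before use
            String input = (args.length > 0) ? args[0] : "500";
            try {
                // parseInt throws NumberFormatException on bad input;
                // the try/catch handles the error explicitly
                balance = Integer.parseInt(input);
            } catch (NumberFormatException e) {
                System.err.println("Invalid amount: " + input);
            }
            System.out.println("Balance: " + balance);
        }
    }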

Several dangerous features of C & C++ eliminated
o No memory pointers
o No pre-processor
o Array index limit checking

Automatic memory management
o Automatic garbage collection: memory management is handled by the JVM.

Security
o No memory pointers
o Programs run inside the virtual machine sandbox
o Array index limit checking
o Code pathologies are reduced: the byte code verifier checks classes after loading; the class loader confines objects to unique namespaces, which prevents loading a hacked class (e.g. a substituted java.lang.SecurityManager); and the security manager determines what resources a class can access, such as reading and writing to the local disk.

Dynamic binding
o The linking of data and methods to where they are located is done at run-time.
o New classes can be loaded while a program is running; linking is done on the fly.
o Even if libraries are recompiled, there is no need to recompile code that uses classes in those libraries. This differs from C++, which uses static binding; static binding can result in fragile classes for cases where linked code is changed and memory pointers then point to the wrong addresses.
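To make run-time linking concrete, here is a small sketch (illustrative; the default class name is arbitrary) in which a class is located, loaded, and linked on the fly from a name that is only a string until the program runs:

    public class DynamicLoadDemo {
        public static void main(String[] args) throws Exception {
            // The class to load is chosen at run time, not compile time
            String className = (args.length > 0) ? args[0] : "java.util.ArrayList";
            Class<?> loaded = Class.forName(className);
            Object instance = loaded.getDeclaredConstructor().newInstance();
            System.out.println("Loaded and instantiated: "
                    + instance.getClass().getName());
        }
    }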


Good performance
o Interpretation of byte codes slowed performance in early versions, but advanced virtual machines with adaptive and just-in-time compilation and other techniques now typically provide performance between 50% and 100% of the speed of C++ programs.

Threading
o Lightweight processes, called threads, can easily be spun off to perform multiprocessing.
o Threads can take advantage of multiple processors where available.
o Great for multimedia displays.
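A minimal multithreading sketch (illustrative only; the thread names are hypothetical): two threads are spun off from the same task and run concurrently.

    public class ThreadDemo {
        public static void main(String[] args) throws InterruptedException {
            Runnable task = new Runnable() {
                public void run() {
                    String name = Thread.currentThread().getName();
                    for (int i = 1; i <= 3; i++) {
                        System.out.println(name + " step " + i);
                    }
                }
            };
            Thread t1 = new Thread(task, "teller-1");
            Thread t2 = new Thread(task, "teller-2");
            t1.start();  // both threads now execute concurrently
            t2.start();
            t1.join();   // wait for both to finish before exiting
            t2.join();
        }
    }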


MySQL
MySQL, pronounced either "My S-Q-L" or "My Sequel", is an open source relational database management system. It is based on the Structured Query Language (SQL), which is used for adding, removing, and modifying information in the database. Standard SQL commands, such as SELECT, INSERT, UPDATE, and DROP, can be used with MySQL. MySQL can be used for a variety of applications, but is most commonly found on web servers. A website that uses MySQL may include web pages that access information from a database. These pages are often referred to as dynamic, meaning the content of each page is generated from a database as the page loads, and websites that use dynamic web pages are often referred to as database-driven websites. Many database-driven websites that use MySQL also use a web scripting language like PHP to access information from the database. MySQL commands can be incorporated into the PHP code, allowing part or all of a web page to be generated from database information. Because both MySQL and PHP are open source (meaning they are free to download and use), the PHP/MySQL combination has become a popular choice for database-driven websites.
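Since this project uses Java rather than PHP as the front end, the bridge to MySQL is JDBC. The sketch below is a hedged illustration, not the project's actual code: the database name, credentials, and queried column are assumptions, and it presumes the MySQL Connector/J driver is on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MySqlDemo {
        public static void main(String[] args) throws Exception {
            // Load the Connector/J driver (needed explicitly on Java 1.6-era JDBC)
            Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/bankdb", "user", "password");
            try {
                // A parameterized query guards against SQL injection
                PreparedStatement ps = con.prepareStatement(
                        "SELECT Name FROM customer WHERE AC = ?");
                ps.setInt(1, 123456789); // hypothetical account number
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    System.out.println("Customer: " + rs.getString("Name"));
                }
            } finally {
                con.close();
            }
        }
    }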

ADVANTAGES
MySQL has a few advantages over some other DBMSs:


1. It is available on many different operating systems. You can use it on one type of system and it will work the same way on a different platform, should you ever decide to change your OS or platform. Some other DBMSs, such as SQL Server, are only available on specific operating systems (although being available only on specific systems may allow a DBMS to integrate better with that platform and its tools).

2. MySQL is very widely used. This means that if you have a problem, you will usually be able to find a solution on the net, or go to a forum and have your question answered relatively quickly.

3. MySQL is open source under the GNU licensing agreement. This means it is cheap, and because it is so popular you will often find a lot of other open source tools, which are free, available to use with it.

DISADVANTAGES
1. MySQL does not handle very large databases as efficiently as some other DBMSs.
2. MySQL versions earlier than 5.0 do not support ROLLBACK, COMMIT, or stored procedures.
3. Transactions are not handled very efficiently.

FEATURES
It allows only authorized users to access the software. It allows bank employees to create accounts instantly. Withdrawal and deposit tasks are simplified, as shown in the sketch below. Reports are generated automatically. The design is user friendly and easily configurable.
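As an illustration of the simplified withdrawal and deposit handling mentioned above, here is a hedged sketch of what the core transaction logic might look like. The class and method names are hypothetical, not taken from the project's source; the Rs. 500 minimum balance comes from the introduction.

    public class Account {
        private static final double MIN_BALANCE = 500.00; // Rs. 500, per the introduction
        private final int accountNo;
        private double balance;

        public Account(int accountNo, double openingBalance) {
            if (openingBalance < MIN_BALANCE) {
                throw new IllegalArgumentException("Account must open with at least Rs. 500");
            }
            this.accountNo = accountNo;
            this.balance = openingBalance;
        }

        public synchronized void deposit(double amount) {
            if (amount <= 0) throw new IllegalArgumentException("Deposit must be positive");
            balance += amount;
        }

        public synchronized void withdraw(double amount) {
            if (amount <= 0) throw new IllegalArgumentException("Withdrawal must be positive");
            if (balance - amount < MIN_BALANCE) {
                throw new IllegalStateException("Balance may not fall below Rs. 500");
            }
            balance -= amount;
        }

        public synchronized double getBalance() { return balance; }
        public int getAccountNo() { return accountNo; }
    }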

FEASIBILITY STUDY
A feasibility study aims to objectively and rationally uncover the strengths and weaknesses of the existing business or proposed venture, the opportunities and threats presented by the environment, the resources required to carry it through, and ultimately the prospects for success. In its simplest terms, the two criteria for judging feasibility are the cost required and the value to be attained. As such, a well-designed feasibility study should provide a historical background of the business or project, a description of the product or service, accounting statements, details of the operations and management, marketing research and policies, financial data, legal requirements, and tax obligations. Generally, feasibility studies precede technical development and project implementation. Five common factors (TELOS) are considered:

Technology and system feasibility
The assessment is based on an outline design of system requirements in terms of input, processes, output, fields, programs, and procedures. This can be quantified in terms of volumes of data, trends, frequency of updating, etc., in order to estimate whether the new system will perform adequately. Technological feasibility is carried out to determine whether the company has the capability, in terms of software, hardware, personnel, and expertise, to handle the completion of the project.


Economic feasibility
Economic analysis is the most frequently used method for evaluating the effectiveness of a new system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with the costs. If the benefits outweigh the costs, then the decision is made to design and implement the system. An entrepreneur must accurately weigh the costs against the benefits before taking action.

Cost-based study: It is important to identify cost and benefit factors, which can be categorized as development costs and operating costs. This is an analysis of the costs to be incurred in the system and the benefits derivable from it.

Time-based study: This is an analysis of the time required to achieve a return on investment. The future value of the project is also a factor. In our project, the software used for development is completely free, whether Java or MySQL, so there is no economic cost.

Legal feasibility
This determines whether the proposed system conflicts with legal requirements; for example, a data processing system must comply with the local data protection acts.


Since all the software we are using is free, we are not violating any anti-piracy law.

Operational feasibility
Operational feasibility is a measure of how well a proposed system solves the problems and takes advantage of the opportunities identified during scope definition, and how well it satisfies the requirements identified in the requirement analysis phase of system development. This software fulfils all the needs of the bank's single-window operations.

Schedule feasibility
A project will fail if it takes too long to complete and so is no longer useful when finished. Typically this means estimating how long the system will take to develop and whether it can be completed in a given time period, using methods like the payback period. Schedule feasibility is a measure of how reasonable the project timetable is. Given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines, and you need to determine whether the deadlines are mandatory or desirable. Since we are using a rapid development tool (NetBeans), the software will easily be completed on schedule.


WHAT ARE THE USER'S DEMONSTRABLE NEEDS?


The user needs application software that provides protection from security threats, flexibility, and fast access to results according to his requirements.

HOW CAN THE PROBLEM BE REDEFINED?


We proposed our perception of the system, in accordance with the problems of the existing system, by making a full layout, and we further updated the layout on the basis of the redefined problems. In the feasibility study phase we went through various steps, described as follows. How feasible is the proposed system? This was analyzed by comparing the following factors: cost, effort, time, and labour.

COST
The cost required by the proposed system is comparatively less than that of the existing system.

EFFORT

Compared to the existing system, the proposed system will provide a better working environment with greater ease of work, and the effort required will be comparatively less than in the existing system.

TIME
The time required to generate a report and cache files will also be comparatively much less than in the existing system.

LABOUR
The labour required for any other work will also be comparatively much less, and record updating will take less time.

SDLC
The systems development life cycle (SDLC), also referred to as the application development life cycle, is a term used in systems engineering, information systems, and software engineering to describe a process for planning, creating, testing, and deploying an information system. The concept applies to a range of hardware and software configurations, as a system can be composed of hardware only, software only, or a combination of both.

A systems development life cycle is composed of a number of clearly defined and distinct work phases which are used by systems engineers and systems developers to plan for, design, build, test, and deliver information systems. Like anything manufactured on an assembly line, an SDLC aims to produce high quality systems that meet or exceed customer expectations, based on customer requirements, by delivering systems which move through each clearly defined phase within scheduled time frames and cost estimates.

Computer systems are complex and often (especially with the recent rise of service-oriented architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of SDLC models or methodologies have been created, such as "waterfall", "spiral", "agile software development", "rapid prototyping", "incremental", and "synchronize and stabilize". ISO/IEC 12207 is an international standard for software life-cycle processes; it aims to be the standard that defines all the tasks required for developing and maintaining software.

SOFTWARE DEVELOPMENT ACTIVITIES


Planning
Implementation, testing and documentation
Deployment and maintenance

PLANNING
Planning is an objective of each and every activity; here we want to discover things that belong to the project. An important task in creating a software program is extracting the requirements, known as requirements analysis. Customers typically have an abstract idea of what they want as an end result, but do not know what the software should do. Skilled and experienced software engineers recognize incomplete, ambiguous, or even contradictory requirements at this point; frequently demonstrating live code may help reduce the risk that the requirements are incorrect. Once the general requirements are gathered from the client, an analysis of the scope of the development should be determined and clearly stated, in what is often called a scope document. Certain functionality may be out of scope of the project as a function of cost or as a result of unclear requirements at the start of development. If the development is done externally, this document can be considered a legal document, so that if there are ever disputes, any ambiguity about what was promised to the client can be clarified.

IMPLEMENTATION, TESTING, DOCUMENTATION


Implementation is the part of the process where software engineers actually program the code for the project. Software testing is an integral and important phase of the software development process; this part of the process ensures that defects are recognized as soon as possible. Documenting the internal design of the software for the purpose of future maintenance and enhancement is done throughout development, and may also include the writing of an API, be it external or internal. The software engineering process chosen by the development team will determine how much internal documentation (if any) is necessary. Plan-driven models (e.g., waterfall) generally produce more documentation than agile models.

DEPLOYMENT AND MAINTENANCE


Deployment starts directly after the code is appropriately tested, approved for release, and sold or otherwise distributed into a production environment. This may involve installation, customization (such as setting parameters to the customer's values), testing, and possibly an extended period of evaluation. Software training and support are important, as software is only effective if it is used correctly.

Maintaining and enhancing software to cope with newly discovered faults or requirements can take substantial time and effort, as missed requirements may force redesign of the software.

WATERFALL MODEL
The waterfall model is a sequential design process, often used in software development processes, in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of conception, initiation, analysis, design, construction, testing, production/implementation, and maintenance.

The waterfall development model originates in the manufacturing and construction industries: highly structured physical environments in which after-the-fact changes are prohibitively costly, if not impossible. Since no formal software development methodologies existed at the time, this hardware-oriented model was simply adapted for software development.

The first known presentation describing the use of similar phases in software engineering was given by Herbert D. Benington at the Symposium on Advanced Programming Methods for Digital Computers on 29 June 1956. This presentation was about the development of software for SAGE. In 1983 the paper was republished with a foreword by Benington pointing out that the process was not in fact performed in a strict top-down fashion, but depended on a prototype.

The first formal description of the waterfall model is often cited as a 1970 article by Winston W. Royce, although Royce did not use the term "waterfall" in that article. Royce presented this model as an example of a flawed, non-working model; this, in fact, is how the term is generally used in writing about software development: to describe a critical view of a commonly used development practice.

[Figure: Waterfall model]

PHASES OF WATERFALL MODEL


Feasibility study
Requirement analysis and specification
Design
Implementation and unit testing
Integration and system testing
Operation and maintenance

FEASIBILITY STUDY
The feasibility study activity involves the analysis of the problem and the collection of information relevant to the product. The main aim of the feasibility study is to determine whether it would be financially and technically feasible to develop the product.

REQUIREMENT AND SPECIFICATION


The goal of this phase is to understand the exact requirements of the customer and to document them properly. This activity is usually executed together with the customer, as the goal is to document all functional, performance, and interfacing requirements for the software. The requirements describe the "what" of a system, not the "how". This phase produces a large document, written in natural language, known as the software requirements specification (SRS) document.


DESIGN
The goal of this phase is to transform the requirement specification into a structure that is suitable for implementation in some programming language. In this activity, the functional specifications are used to translate the model into a design of the desired system. It includes defining the modules and their relationships to one another, recorded in what is called a structure chart.

IMPLEMENTATION AND UNIT TESTING


During this phase the design is implemented. Initially, small modules are tested in isolation from the rest of the software product. Unit testing is the testing of each individual module; its purpose is to determine the correct working of the individual modules, and it involves a precise definition of test cases. Every company formulates its own coding standards, such as the layout of programs and the content and format of headers.
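As a small illustration of unit testing an individual module, here is a hedged sketch using JUnit 4 (an assumption; the project does not state its testing framework). It exercises the hypothetical Account class sketched in the FEATURES section.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AccountTest {
        @Test
        public void depositIncreasesBalance() {
            Account a = new Account(1001, 500.00);
            a.deposit(250.00);
            assertEquals(750.00, a.getBalance(), 0.001);
        }

        @Test(expected = IllegalStateException.class)
        public void withdrawalMayNotBreachMinimumBalance() {
            Account a = new Account(1001, 600.00);
            a.withdraw(200.00); // would leave Rs. 400, below the Rs. 500 minimum
        }
    }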

INTEGRATION AND SYSTEM TESTING


This is a very important phase: effective testing will contribute to the delivery of higher quality software products, more satisfied users, lower maintenance costs, and more accurate, reliable results. During this phase, individual program units or programs are integrated and tested as a complete system to ensure that the software requirements have been met.


OPERATION & MAINTENANCE


Release of the software inaugurates the operation and maintenance phase of the life cycle. Software maintenance is a task that every development group has to face once the software has been delivered to the customer's site, installed, and made operational. The time spent and effort required to keep the software operational after release include error correction and the enhancement or removal of obsolete capabilities.

ADVANTAGES
It replaces the traditional way of banking by shifting it to the computational side. This software eliminates the counter problem: the customer does not have to think about which counter to approach for cash withdrawal or deposit. It reduces the time consumed in one transaction. Employees' productive time at the workplace increases. It can also be used for keeping logs, i.e. user records, employee records, etc.

DISADVANTAGES

Since banks become wholly dependent on the software, and software can crash at any time, a crash could lead to chaos.

Since there is no further level of security beyond the login, the software could be breached easily.

LIMITATION
The new system has been designed to meet all of the user requirements, but it too has certain limitations. This project assumes that the program is used only by registered employees and the administrator: the employee can only view data, while the administrator updates all data. Some of these limitations can be addressed in future enhancements or updates:

NO ONLINE BANKING SUPPORT


The existing system supports only direct interaction between customers and bank employees. A customer cannot access his account while sitting at home.

MULTI-BRANCH NOT SUPPORTED


Currently this application supports only a single branch of a bank. A future enhancement will remove this limitation.

FUTURE ENHANCEMENTS
Enhancements are a prerequisite for the development of a system. Every existing system has proposed enhancements which make it better, easier to use, and more secure. The enhancements proposed for this system are listed here:

FACILITY FOR MULTI-BRANCHES


We will enhance this application software from a single-branch system to a multi-branch system, meaning multiple branches can use it at the same time, with the system scalable enough to handle multiple requests.

ONLINE BANKING SUPPORT FOR USERS


The new system will allow users to access their own accounts from anywhere, at any time. A user will need only a user ID and password to maintain his account and to request the various services made available by the bank.

DATA FLOW DIAGRAM


A data flow diagram (DFD) is a graphical representation of the "flow" of data through an information system, modeling its process aspects. Often a DFD is a preliminary step used to create an overview of the system, which can later be elaborated. DFDs can also be used for the visualization of data processing (structured design). A DFD shows what kinds of information will be input to and output from the system, where the data will come from and go to, and where the data will be stored. It does not show information about the timing of processes, or about whether processes will operate in sequence or in parallel (which is shown on a flowchart).

DFDs are an important technique for modeling a system's high-level detail by showing how input data is transformed into output results through a sequence of functional transformations. DFDs consist of four major components: entities, processes, data stores, and data flows. The symbols used to depict how these components interact in a system are simple and easy to understand; however, there are several DFD models to work from, each with its own symbology. DFD syntax does remain constant, using simple verb and noun constructs. This syntactical regularity makes DFDs ideal for object-oriented analysis and for parsing functional specifications into precise DFDs for the systems analyst.


When it comes to conveying how information flows through systems (and how the data is transformed in the process), data flow diagrams (DFDs) are the method of choice over technical descriptions, for three principal reasons: DFDs are easy for both technical and non-technical audiences to understand; DFDs can provide a high-level system overview, complete with boundaries and connections to other systems; and DFDs can provide a detailed representation of system components. DFDs help system designers and others during the initial analysis stages to visualize a current system, or one that may be needed to meet new requirements. Systems analysts prefer working with DFDs, particularly when they require a clear understanding of the boundary between existing systems and postulated systems. DFDs represent the following:

External devices sending and receiving data
Processes that change that data
The data flows themselves
Data storage locations


The hierarchical DFD typically consists of a top-level diagram (Level 0) underlain by cascading lower level diagrams (Level 1, Level 2) that represent different parts of the system.

VALID AND NON-VALID DATA FLOWS


Before embarking on developing your own data flow diagram, there are some general guidelines you should be aware of. Data stores are storage areas and are static or passive; having data flow directly from one data store to another therefore does not make sense, because neither initiates the communication. Data stores maintain data in an internal format, while entities represent people or systems external to the system. Data flowing directly between two entities is not modeled either, because it would be impossible for the system to know about any communication between them; the only communication that can be modeled is that which the system is expected to know about or react to. Processes on DFDs have no memory, so it would not make sense to show data flows between two asynchronous processes (two processes that may or may not be active simultaneously), because they may be responding to different external events. Therefore, data flow should only occur in the following scenarios:

Between a process and an entity (in either direction)
Between a process and a data store (in either direction)
Between two processes that can run simultaneously

Data flow diagramming is a highly effective technique for showing the flow of information through a system. DFDs are used in the preliminary stages of systems analysis to help understand the current system and to represent a required system. The DFDs themselves represent the external entities sending and receiving information (entities), the processes that change information (processes), the information flows themselves (data flows), and where information is stored (data stores). Hierarchical DFDs consist of a single top layer (Level 0, or the context diagram) that can be decomposed into many lower level diagrams (Level 1, Level 2, ..., Level N), each representing different areas of the system.

DFDs are extremely useful in systems analysis, as they help structure the steps in object-oriented design and analysis. Because DFDs and object technology share similar syntax constructs, DFDs lend themselves well to the OO domain. DFDs are a form of information development, and as such provide key insight into how information is transformed as it passes through a system. Having the skills to develop DFDs from functional specs, and being able to interpret them, is a value-add skill set that is well within the domain of technical communications.

DFD NOTATIONS
The DFD may be partitioned into levels that represent increasing information flow and functional detail. Five simple notations are used to complete a DFD. These notations are given below:

Data flow
Process
External entity
Data store
Output

DATA FLOW
The data flow symbol describes the movement of information from one part of the system to another. Flows represent data in motion; a flow is a pipeline through which information flows.

PROCESS
A circle or bubble represents a process that transforms incoming data into outgoing data. A process shows a part of the system that transforms inputs into outputs.


EXTERNAL ENTITY
A square defines a source or destination of system data. External entities represent any entity that supplies information to, or receives information from, the system but is not a part of the system.

DATA STORE
The data store represents a logical file, which can be either a data structure or a physical file on disk. The data store is used to hold data at rest, or as a temporary repository of data. It is represented by an open rectangle.


OUTPUT
The output symbol is used when a hard copy is produced and the user of the copy cannot be clearly specified, or when there are several users of the output.

0-LEVEL DFD

[Figure: 0-level DFD]

1-LEVEL DFD

[Figure: 1-level DFD]

2-LEVEL DFD

[Figure: 2-level DFD]

DATABASE DESIGN
Database design is the process of producing a detailed data model of a database. This logical data model contains all the logical and physical design choices and physical storage parameters needed to generate a design in a data definition language, which can then be used to create a database. A fully attributed data model contains detailed attributes for each entity.

The term database design can describe many different parts of the design of an overall database system. Principally, and most correctly, it can be thought of as the logical design of the base data structures used to store the data: in the relational model these are the tables and views, while in an object database the entities and relationships map directly to object classes and named relationships. However, the term may also be applied to the overall process of designing not just the base data structures, but also the forms and queries used as part of the overall database application within the database management system (DBMS). The process of database design generally consists of a number of steps which are carried out by the database designer. Usually, the designer must:

Determine the relationships between the different data elements.
Superimpose a logical structure upon the data on the basis of these relationships.


THE DESIGN PROCESS


1. Determine the purpose of the database. This helps prepare for the remaining steps.

2. Find and organize the information required. Gather all of the types of information to record in the database, such as product name and order number.

3. Divide the information into tables. Divide information items into major entities or subjects, such as Products or Orders. Each subject then becomes a table.

4. Turn information items into columns. Decide what information needs to be stored in each table. Each item becomes a field, and is displayed as a column in the table.

5. Specify primary keys. Choose each table's primary key. The primary key is a column, or a set of columns, that is used to uniquely identify each row. An example might be User ID or Registration ID.

6. Set up the table relationships. Look at each table and decide how the data in one table is related to the data in other tables. Add fields to tables, or create new tables, to clarify the relationships as necessary.

7. Refine the design. Analyze the design for errors. Create the tables and add a few records of sample data. Check whether the expected results come from the tables. Make adjustments to the design as needed.

8. Apply the normalization rules. Apply the data normalization rules to see if the tables are structured correctly, and make adjustments to the tables as needed.

DATABASE FORMAT
A properly designed database provides access to up-to-date, accurate information. Because a correct design is essential to achieving your goals in working with a database, investing the time required to learn the principles of good design makes sense: in the end, you are much more likely to end up with a database that meets your needs and can easily accommodate change. This section provides guidelines for planning a database: deciding what information you need, dividing that information into the appropriate tables and columns, and determining how those tables relate to each other.

In this project, we have database tables wherever they are required. The tables are required for the entities (as shown in the DFD) present in the project. The entities are:

USER TABLE
ADMIN TABLE
CUSTOMER TABLE
TRANSACTION TABLE

USER TABLE
The required fields are: UserID, Pwd, User Name, and EmpID.

Field       Type     Size(Bytes)  Description
UserID      varchar  15           Primary Key
Pwd         varchar  20           Password
User Name   varchar  30           User Name
EmpID       varchar  30           Employee ID


ADMIN TABLE
The required fields are: AdminID and Password.

Field     Type     Size(Bytes)  Description
AdminID   varchar  15           Primary Key
Pwd       varchar  20           Password

CUSTOMER TABLE
The required fields are: Name, PName, Address, Gender, MobNo., A/C, Photo, Sign, AType, and Nname.

Field     Type        Size(Bytes)  Description
Name      varchar     30           Name (Customer)
PName     varchar     30           Parent Name
Address   varchar     50           Address
Gender    varchar     6            Gender (Customer)
MobNo.    number      10           Mobile Number
A/C       int         12           Account No. (Primary Key)
Photo     mediumblob  1            Photo
Sign      mediumblob  1            Signature
AType     varchar     10           Account Type
Nname     varchar     30           Nominee Name


TRANSACTION TABLE
The required fields are: A/C No., Name, Amount, and Ttype.

Field     Type     Size(Bytes)  Description
A/C No.   int      12           Account No.
Name      varchar  30           Name (Customer)
Amount    number   15           Amount
Ttype     varchar  10           Transaction Type
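As a hedged illustration of how these schemas might be created in MySQL 5.1 from the Java side: the DDL below is an interpretation of the tables above, not the project's actual script. Column names containing spaces or slashes (A/C, A/C No.) are renamed to legal identifiers, the generic "number" type is rendered as DECIMAL, and the foreign key is enforced only if the tables use the InnoDB engine.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateTables {
        public static void main(String[] args) throws Exception {
            Class.forName("com.mysql.jdbc.Driver");
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/bankdb", "user", "password");
            Statement st = con.createStatement();

            // Customer table: AC (account number) is the primary key
            st.executeUpdate(
                "CREATE TABLE IF NOT EXISTS customer (" +
                " Name    VARCHAR(30)," +
                " PName   VARCHAR(30)," +
                " Address VARCHAR(50)," +
                " Gender  VARCHAR(6)," +
                " MobNo   DECIMAL(10)," +
                " AC      INT PRIMARY KEY," +
                " Photo   MEDIUMBLOB," +
                " Sign    MEDIUMBLOB," +
                " AType   VARCHAR(10)," +
                " Nname   VARCHAR(30))");

            // Transaction table: each row references a customer account
            st.executeUpdate(
                "CREATE TABLE IF NOT EXISTS `transaction` (" +
                " ACNo   INT," +
                " Name   VARCHAR(30)," +
                " Amount DECIMAL(15,2)," +
                " Ttype  VARCHAR(10)," +
                " FOREIGN KEY (ACNo) REFERENCES customer(AC))");

            st.close();
            con.close();
            System.out.println("Tables created.");
        }
    }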


E-R DIAGRAM
The Entity-Relationship model (ER model) in software engineering is a data model for describing a database in an abstract way. This section refers to the techniques proposed in Peter Chen's 1976 paper, although variants of the idea existed previously and have been devised subsequently, such as supertype and subtype data entities and commonality relationships.

An ER model is an abstract way of describing a database. In the case of a relational database, which stores data in tables, some of the data in these tables point to data in other tables; for instance, your entry in the database could point to several entries for each of your phone numbers. The ER model would say that you are an entity, each phone number is an entity, and the relationship between you and the phone numbers is 'has a phone number'. Diagrams created to design these entities and relationships are called entity-relationship diagrams, or ER diagrams. Using the three-schema approach to software engineering, there are three levels of ER models that may be developed.


LEVELS OF E-R MODEL


THE CONCEPTUAL DATA MODEL:
This is the highest level ER model in that it contains the least granular detail but establishes the overall scope of what is to be included within the model set. The conceptual ER model normally defines master reference data entities that are commonly used by the organization. Developing an enterprise-wide conceptual ER model is useful to support documenting the data architecture for an organization. A conceptual ER model may be used as the foundation for one or more logical data models. The purpose of the conceptual ER model is then to establish structural metadata commonality for the master data entities between the set of logical ER models. The conceptual data model may be used to form commonality relationships between ER models as a basis for data model integration.

THE LOGICAL DATA MODEL:


A logical ER model does not require a conceptual ER model, especially if the scope of the logical ER model includes only the development of a distinct information system. The logical ER model contains more detail than the conceptual ER model.


In addition to master data entities, operational and transactional data entities are now defined. The details of each data entity are developed, and the relationships between these data entities are established. The logical ER model is, however, developed independently of the technology in which it will be implemented.

An entity may be a physical object such as a house or a car, an event such as a house sale or a car service, or a concept such as a customer transaction or order. Although the term entity is the one most commonly used, following Chen we should really distinguish between an entity and an entity-type. An entity-type is a category; an entity, strictly speaking, is an instance of a given entity-type, and there are usually many instances of an entity-type. Because the term entity-type is somewhat cumbersome, most people tend to use entity as a synonym for it. Entities can be thought of as nouns: for example, a computer, an employee, a song, a mathematical theorem. A relationship captures how entities are related to one another; relationships can be thought of as verbs, linking two or more nouns.

Logical data models represent the abstract structure of a domain of information. They are often diagrammatic in nature and are most typically used in business processes that seek to capture things of importance to an organization and how they relate to one another. Once validated and approved, the logical data model can become the basis of a physical data model and inform the design of a database.


THE PHYSICAL MODEL:


One or more physical ER models may be developed from each logical ER model. The physical ER model is normally developed to be instantiated as a database. Therefore, each physical ER model must contain enough detail to produce a database and each physical ER model is technology dependent since each database management system is somewhat different. The physical model is normally forward engineered to instantiate the structural metadata into a database management system as relational database objects such as database tables, database indexes such as unique key indexes, and database constraints such as a foreign key constraint or a commonality constraint. The ER model is also normally used to design modifications to the relational database objects and to maintain the structural metadata of the database. The first stage of information system design uses these models during the requirements analysis to describe information needs or the type of information that is to be stored in a database. The data modeling technique can be used to describe any ontology (i.e. an overview and classifications of used terms and their relationships) for a certain area of interest. In the case of the design of an information system that is based on a database, the conceptual data model is, at a later stage (usually called logical design), mapped to a logical data model, such as the relational model; this in turn is mapped to a physical model during physical design. Note that sometimes, both of these phases are referred to as "physical design".


E-R MODELLING
The building blocks are:

Entities
Attributes
Relationships

ENTITIES
In a database model, each object that you wish to track in the database is known as an entity. Normally, each entity is stored in a database table and every instance of an entity corresponds to a row in that table. In an ER diagram, each entity is depicted as a rectangular box with the name of the entity contained within it. For example, a database containing information about individual people would likely have an entity called Person. This would correspond to a table with the same name in the database and every person tracked in the database would be an instance of that Person entity and have a corresponding row in the Person table. Database designers creating an E-R diagram would draw the Person entity using a shape similar to this:

[Figure: the Person entity drawn as a rectangular box labelled "Person"]
They would then repeat the process to create a rectangular box for each entity in the data model.

Here is a depiction of the Person entity in that format:

Person
------
PK  PersonID
    FirstName
    LastName
    BirthDate

That covers the Entity part of Entity-Relationship diagrams. Now let's take a look at displaying data relationships.

RELATIONSHIP
The power of the E-R diagram lies in its ability to accurately display information about the relationships between entities. For example, we might track information in our database about the city where each person lives. Information about the city itself is tracked within a City entity, and a relationship is used to tie together Person and City instances. Relationships are normally given names that are verbs, while attributes and entities are named after nouns. This convention makes it easy to express relationships: for example, if we name our Person/City relationship Lives In, we can string them together to say "A person lives in a city." We express relationships in E-R diagrams by drawing a line between the related entities and placing a diamond shape that contains the relationship name in the middle of the line. Here is how our Person/City relationship would look:

[Figure: Person --<Lives In>-- City]

In data modeling, collections of data elements are grouped into data tables. The data tables contain groups of data field names (known as database attributes). Tables are linked by key fields. A primary key gives each row of a table a unique identity: for example, a DoctorID field might be assigned as the primary key of the Doctor table (a field like DoctorLastName would be a poor choice, because a primary key must be unique and people can share a last name). A table can also have a foreign key, which indicates that the field is linked to the primary key of another table. A complex data model can involve hundreds of related tables.

A renowned computer scientist, C. J. Date, created a systematic method to organize database models. Date's steps for organizing database tables and their keys are called database normalization. Database normalization avoids certain hidden database design errors (delete anomalies or update anomalies). In real life the process of database normalization ends up breaking tables into a larger number of smaller tables, so there are common-sense data modeling tactics, called de-normalization, which combine tables in practical ways. In real-world data models, careful design is critical because, as the data grows voluminous, tables linked by keys must be used to speed up programmed retrieval of data. If data modeling is poor, even a computer application system with just a million records will give end users unacceptable response-time delays. For this reason data modeling is a keystone of the skills needed by a modern software developer.

ATTRIBUTES
Of course, tracking entities alone is not sufficient to develop a data model; databases contain information about each entity. This information is tracked in individual fields known as attributes, which normally correspond to the columns of a database table. For example, the Person entity might have attributes corresponding to the person's first and last name, date of birth, and a unique person identifier. Each of these attributes appears in an E-R diagram as an oval, as shown in the figure below:

[Figure: the PERSON entity with attribute ovals FirstName, LastName, BirthDate, and PersonID; PersonID is underlined as the primary key]

Notice that the text in the attribute ovals is formatted slightly differently. Two text-formatting features are used to convey additional information about an entity's attributes. First, some fields are displayed in a boldface font; these are required fields, similar to the NOT NULL database constraint. Each instance of an entity must contain information in the FirstName, LastName, and PersonID attributes. Also, one attribute is underlined, indicating that it serves as the database's primary key.

In this example, PersonID serves as the primary key. Using this format can be somewhat cumbersome in a diagram containing entities with many attributes, so many database designers prefer an alternate format that lists an entity's attributes in tabular form under the name of the entity.

CARDINALITY IN E-R MODEL


In data modeling, the cardinality of one data table with respect to another is a critical aspect of database design. Relationships between data tables define cardinality by explaining how each table links to another. In the relational model, tables can be related as many-to-many, many-to-one (reversed: one-to-many), or one-to-one; this is said to be the cardinality of a given table in relation to another.

For example, consider a database designed to keep track of college records. Such a database could have many tables, such as:

A Teacher table full of teacher information
A Student table with student information
A Department table with an entry for each department of the college

In that model, there is a many-to-many relationship between the records in the teacher table and the records in the student table (a teacher has many students, and a student could have several teachers), and a one-to-many relationship between the department table and the teacher table (each teacher works for one department, but one department could have many teachers). A one-to-one relationship is mostly used to split a table in two in order to optimize access or limit the visibility of some information; in the college example, such a relationship could be used to keep teachers' personal or administrative information apart.

E-R DIAGRAM

[Figure: E-R diagram of the system]

CONCLUSION
This application software (Single Window Based Banking System) facilitates its users in creating customer accounts and performing several banking transactions. The project helps users in several ways, i.e. by allowing money to be deposited and withdrawn in an easy manner, along with several similar features. The administrator has the privilege to create, modify, and delete the log history and the various contents available on the cache server, while users can register, log in, and operate the software safely. Since the development of software involves many people, such as the system developer, the users of the system, and the management, it is important to identify the system requirements by properly collecting the required data and interacting with the users of the system. Proper design builds upon this foundation to give a blueprint, which is actually implemented by the developers. Realizing the importance of systematic documentation, all the processes were implemented using a software engineering approach. Working in a live environment enables one to appreciate the intricacies involved in the System Development Life Cycle (SDLC). We have gained a lot of practical knowledge from this project which, we think, shall stand us in good stead in the future.

BIBLIOGRAPHY
WEBSITE REFERENCE
http://www.wikipedia.com
http://www.codeproject.com
http://www.logicatwork.com

BOOK REFERENCE
The Complete Reference: Java (Herbert Schildt)
The Complete Reference: MySQL (Vikram Vaswani)
