
Rajesh Kumar

Faculty, MCA Course

Bihar University, Muzaffarpur

A database administrator (DBA) is a person who is responsible for the environmental aspects of a database. The role of a database administrator has changed with the technology of database management systems (DBMSs) as well as with the needs of the owners of the databases. For example, although logical and physical database design are traditionally the duties of a database analyst or database designer, a DBA may be tasked to perform those duties.

The duties of a database administrator vary and depend on the job description,
corporate and Information Technology (IT) policies and the technical features and
capabilities of the DBMS being administered. They nearly always include disaster recovery
(backups and testing of backups), performance analysis and tuning, data dictionary
maintenance, and some database design.

Some of the roles of the DBA may include:

• Installation of new software — It is primarily the job of the DBA to install new
versions of DBMS software, application software, and other software related to
DBMS administration. It is important that the DBA or other IS staff members test this
new software before it is moved into a production environment.

• Configuration of hardware and software with the system administrator — In many cases the system software can only be accessed by the system administrator. In this case, the DBA must work closely with the system administrator to perform software installations, and to configure hardware and software so that it functions optimally with the DBMS.

• Security administration — One of the main duties of the DBA is to monitor and administer DBMS security. This involves adding and removing users, administering quotas, auditing, and checking for security problems (a sketch of typical statements appears after this list).

• Data analysis — The DBA will frequently be called on to analyze the data stored in the database and to make recommendations relating to the performance and efficiency of that data storage. This might involve more effective use of indexes, enabling "Parallel Query" execution, or other DBMS-specific features.

• Database design (preliminary) — The DBA is often involved at the preliminary database-design stages. Through the involvement of the DBA, many problems that might occur can be eliminated. The DBA knows the DBMS and system, can point out potential problems, and can help the development team with special performance considerations.

• Data modeling and optimization — By modeling the data, it is possible to optimize the system layouts to take the most advantage of the I/O subsystem.
• Responsible for the administration of existing enterprise databases and the analysis,
design, and creation of new databases.
o Data modeling, database optimization, understanding and implementation of
schemas, and the ability to interpret and write complex SQL queries
o Proactively monitor systems for optimum performance and capacity
constraints
o Establish standards and best practices for SQL
o Interact with and coach developers in Structured Query Language (SQL) scripting
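
As an illustration of the security-administration and data-analysis duties above, here is a minimal sketch of the kind of statements a DBA might issue. The role, user, table, and index names are invented for the example, and the exact syntax (especially for passwords and quotas) varies between DBMSs.

    -- Group the privileges needed by the application team into a role
    CREATE ROLE app_readwrite;
    GRANT SELECT, INSERT, UPDATE ON orders TO app_readwrite;

    -- Add a new user and give them the role; revoke it when they leave
    CREATE USER jsmith WITH PASSWORD 'change_me';
    GRANT app_readwrite TO jsmith;
    REVOKE app_readwrite FROM jsmith;

    -- Data analysis: support frequent lookups by customer with an index
    CREATE INDEX idx_orders_customer ON orders (customer_id);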

Database Manager:
A database manager, also referred to as a database administrator, is responsible for
working with database management systems software in order to determine the best possible
way to organize and to store data. In order to properly perform this duty, a database manager
must identify the requirements of the user, create a computer database, and test the
modifications made to the database systems.

The database manager must also monitor the system in order to guarantee proper
performance. To guarantee the proper performance, the database manager needs to
understand the platform used to run the database and must be able to add new users to the
system. As such, the database manager is often also responsible for designing and
implementing system security and other security measures.

Three Level Database Architecture


Data and Related Structures

Data are actually stored as bits, or numbers and strings, but it is difficult to work with data at
this level.

It is necessary to view data at different levels of abstraction.

Schema:

• Description of data at some level. Each level has its own schema.

We will be concerned with three forms of schemas:

• physical,
• conceptual, and
• external.

Physical Data Level

The physical schema describes details of how data is stored: files, indices, etc. on the
random access disk system. It also typically describes the record layout of files and type of
files (hash, b-tree, flat).

Early applications worked at this level and dealt explicitly with such details, e.g., minimizing physical distances between related data and organizing the data structures within the file (blocked records, linked lists of blocks, etc.).

Problem:

• Routines are hardcoded to deal with physical representation.
• Changes to data structures are difficult to make.
• Application code becomes complex since it must deal with details.
• Rapid implementation of new features very difficult.

Conceptual Data Level

Hides details of the physical level.

• In the relational model, the conceptual schema presents data as a set of tables.

The DBMS maps data access between the conceptual to physical schemas automatically.

• Physical schema can be changed without changing application:
• DBMS must change mapping from conceptual to physical.
• Referred to as physical data independence.
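
As an illustration of physical data independence (the table, column, and index names are invented for the example): the application query below needs no change when the DBA later adds an index or reorganizes the underlying files; only the DBMS's mapping from the conceptual to the physical schema changes.

    -- Application query written against the conceptual schema (a table)
    SELECT name, salary FROM employee WHERE department = 'Sales';

    -- Later physical tuning by the DBA; the query above is unaffected
    CREATE INDEX idx_employee_department ON employee (department);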

External Data Level


In the relational model, the external schema also presents data as a set of relations. An
external schema specifies a view of the data in terms of the conceptual level. It is tailored to
the needs of a particular category of users. Portions of the stored data should not be seen by some users; hiding them at the external level begins to implement a level of security and simplifies the view for these users.

Examples:

• Students should not see faculty salaries.
• Faculty should not see billing or payment data.

Information that can be derived from stored data might be viewed as if it were stored.

• GPA not stored, calculated when needed.
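
The examples above can be expressed as SQL views over the conceptual schema. This is only a sketch: the faculty and enrollment tables, their columns, and the grade-point averaging are invented for illustration.

    -- A student-facing view of faculty that omits salary information
    CREATE VIEW faculty_public AS
    SELECT faculty_id, name, department
    FROM   faculty;

    -- GPA is not stored; it is computed from enrollment rows whenever the view is read
    CREATE VIEW student_gpa AS
    SELECT student_id, AVG(grade_points) AS gpa
    FROM   enrollment
    GROUP BY student_id;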

Applications are written in terms of an external schema. The external view is computed when
accessed. It is not stored. Different external schemas can be provided to different categories
of users. Translation from external level to conceptual level is done automatically by DBMS
at run time. The conceptual schema can be changed without changing application:

• Mapping from external to conceptual must be changed.
• Referred to as conceptual data independence.

Data Model

Schema: description of data at some level (e.g., tables, attributes, constraints, domains)

Model: tools and languages for describing:

• Conceptual/logical and external schemas described by the data definition language (DDL)
• Integrity constraints, domains described by DDL
• Operations on data described by the data manipulation language (DML)
• Directives that influence the physical schema (affects performance, not semantics) described by the storage definition language (SDL)
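
As a small illustration of DDL and DML (the student table and its columns are invented for the example):

    -- DDL: define a table with a domain restriction and an integrity constraint
    CREATE TABLE student (
        student_id  INTEGER PRIMARY KEY,
        name        VARCHAR(100) NOT NULL,
        gpa         NUMERIC(3,2) CHECK (gpa BETWEEN 0.00 AND 4.00)
    );

    -- DML: insert and query data
    INSERT INTO student (student_id, name, gpa) VALUES (1, 'A. Kumar', 3.40);
    SELECT name, gpa FROM student WHERE gpa >= 3.00;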

Entity-Relationship Model

A semantic model, captures meanings

E-R modeling is a conceptual level model

Proposed by P. P. Chen in the 1970s

• Entities are real-world objects about which we collect data
• Attributes describe the entities
• Relationships are associations among entities
• Entity set – set of entities of the same type
• Relationship set – set of relationships of same type

Relationship sets may have descriptive attributes

Represented by E-R diagrams


Relational Model

Record- and table-based model

Relational database modeling is a logical-level model

Proposed by E.F. Codd

• Based on mathematical relations
• Uses relations, represented as tables
• Columns of tables represent attributes
• Tables represent relationships as well as entities

Successor to earlier record-based models—network and hierarchical

Object-oriented Model

Uses E-R modeling as a basis but extends it to include encapsulation and inheritance

Objects have both state and behavior

• State is defined by attributes
• Behavior is defined by methods (functions or procedures)

Designer defines classes with attributes, methods, and relationships

Class constructor method creates object instances

• Each object has a unique object ID
• Classes related by class hierarchies
• Database objects have persistence

Object-relational model

Adds new complex datatypes to relational model

Adds objects with attributes and methods

Adds inheritance

SQL extended to handle objects in SQL:1999
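
A rough sketch of these SQL:1999-style object-relational features follows. The type and table names are invented, and both the exact syntax and the level of support vary considerably between DBMSs.

    -- A new complex (structured) datatype
    CREATE TYPE address_t AS (
        street  VARCHAR(100),
        city    VARCHAR(50),
        pincode CHAR(6)
    ) NOT FINAL;

    -- A type with attributes, including one of the new complex type
    CREATE TYPE person_t AS (
        name  VARCHAR(100),
        addr  address_t
    ) NOT FINAL;

    -- Inheritance: employee_t is a subtype of person_t
    CREATE TYPE employee_t UNDER person_t AS (
        salary  NUMERIC(10,2)
    ) NOT FINAL;

    -- A typed table whose rows are instances of employee_t
    CREATE TABLE employee OF employee_t;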

Semi-structured Model

Collection of nodes, each with data, and with different schemas

Each node contains a description of its own contents

Can be used for integrating existing databases

XML tags added to documents to describe structure


XML tags identify elements, sub-elements, attributes in documents

XML DTD (Document Type Definition) or XML Schema used to define structure

(Discussed later in the course in greater detail)

Hierarchical Model
The hierarchical data model organizes data in a tree structure. There is a hierarchy of parent
and child data segments. This structure implies that a record can have repeating information,
generally in the child data segments. Data is stored in a series of records, each of which has a set of field values attached to it. All the instances of a specific record are collected together as a record type. These record types are the equivalent of tables in the relational model, with the individual records being the equivalent of rows. To create links between these record types, the hierarchical model uses Parent Child Relationships, which are a 1:N mapping between record types. This is done by using trees, a structure "borrowed" from mathematics much as the relational model borrows set theory. For example, an organization might store information about an employee, such
as name, employee number, department, salary. The organization might also store
information about an employee's children, such as name and date of birth. The employee and
children data forms a hierarchy, where the employee data represents the parent segment and
the children data represents the child segment. If an employee has three children, then there
would be three child segments associated with one employee segment. In a hierarchical
database the parent-child relationship is one to many. This restricts a child segment to having
only one parent segment. Hierarchical DBMSs were popular from the late 1960s, with the
introduction of IBM's Information Management System (IMS) DBMS, through the 1970s.

Network Model
The popularity of the network data model coincided with the popularity of the hierarchical
data model. Some data were more naturally modeled with more than one parent per child. So,
the network model permitted the modeling of many-to-many relationships in data. In 1971,
the Conference on Data Systems Languages (CODASYL) formally defined the network
model. The basic data modeling construct in the network model is the set construct. A set
consists of an owner record type, a set name, and a member record type. A member record
type can have that role in more than one set, hence the multiparent concept is supported. An
owner record type can also be a member or owner in another set. The data model is a simple
network, and link and intersection record types (called junction records by IDMS) may exist,
as well as sets between them. Thus, the complete network of relationships is represented by several pairwise sets; in each set one record type is the owner (at the tail of the relationship arrow) and one or more record types are members (at the head of the relationship arrow).
Usually, a set defines a 1:M relationship, although 1:1 is permitted. The CODASYL network
model is based on mathematical set theory.

Relational Model
(RDBMS - relational database management system) A database based on the relational model
developed by E.F. Codd. A relational database allows the definition of data structures,
storage and retrieval operations and integrity constraints. In such a database the data and
relations between them are organised in tables. A table is a collection of records and each
record in a table contains the same fields.
Properties of Relational Tables:
• Values Are Atomic
• Each Row is Unique
• Column Values Are of the Same Kind
• The Sequence of Columns is Insignificant
• The Sequence of Rows is Insignificant
• Each Column Has a Unique Name
Certain fields may be designated as keys, which means that searches for specific values of
that field will use indexing to speed them up. Where fields in two different tables take values
from the same set, a join operation can be performed to select related records in the two
tables by matching values in those fields. Often, but not always, the fields will have the same
name in both tables. For example, an "orders" table might contain (customer-ID, product-
code) pairs and a "products" table might contain (product-code, price) pairs so to calculate a
given customer's bill you would sum the prices of all products ordered by that customer by
joining on the product-code fields of the two tables. This can be extended to joining multiple
tables on multiple fields. Because these relationships are only specified at retrieval time, relational databases are classed as dynamic database management systems. The relational database model is based on relational algebra.
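
As a sketch of the join described above, assuming hypothetical orders and products tables (the column names are invented for the example), a given customer's bill could be computed like this:

    -- Sum the prices of all products ordered by customer 42,
    -- joining the two tables on their product_code fields
    SELECT o.customer_id, SUM(p.price) AS total_bill
    FROM   orders o
    JOIN   products p ON p.product_code = o.product_code
    WHERE  o.customer_id = 42
    GROUP BY o.customer_id;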

Sequential file organization

A sequential file contains records organized by the order in which they were entered. The
order of the records is fixed.

Records in sequential files can be read or written only sequentially.

After you have placed a record into a sequential file, you cannot shorten, lengthen, or delete
the record. However, you can update (REWRITE) a record if the length does not change. New
records are added at the end of the file.

If the order in which you keep records in a file is not important, sequential organization is a
good choice whether there are many records or only a few. Sequential output is also useful
for printing reports.

Indexed sequential

An index file can be used to overcome the slowness of key searches in a purely sequential file. The simplest indexing structure is the single-level one: a file whose records are (key, pointer) pairs, where the pointer gives the position in the data file of the record with that key. Only a subset of the data records, evenly spaced along the data file, are indexed, so as to mark intervals of data records.

A key search then proceeds as follows: the search key is compared with the index keys to find the largest index key not exceeding the search key, and a linear search is performed from the record that index entry points to onward, until the search key is matched or until the record pointed to by the next index entry is reached. In spite of the double file access (index + data) needed by this kind of search, the decrease in access time with respect to a sequential file is significant.

Consider, for example, the case of simple linear search on a file with 1,000 records. With the sequential organization, an average of 500 key comparisons is necessary (assuming the search keys are uniformly distributed among the data keys). Using an evenly spaced index with 100 entries (one entry for every 10 data records), a search needs on average about 50 comparisons in the index file plus 5 in the data file, roughly a 9:1 reduction in the number of operations.
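
More generally, as a rough estimate, if the file has N records and the index holds one entry for every s data records (so N/s index entries), the average search cost in key comparisons is approximately

$$\frac{N}{2s} + \frac{s}{2},$$

which is minimized when s is about $\sqrt{N}$; for N = 1,000 this gives a minimum of roughly 32 comparisons.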

This scheme can obviously be extended hierarchically: an index is a sequential file in itself, which can in turn be indexed by a second-level index, and so on, exploiting more and more the hierarchical decomposition of the searches to decrease the access time. Obviously, if the layering of indexes is pushed too far, a point is reached where the advantages of indexing are outweighed by the increased storage costs, and by the index access times as well.
