
Linked list

From Wikipedia, the free encyclopedia


In computer science, a linked list is a data structure consisting of a group of nodes which
together represent a sequence. In its simplest form, each node is composed of data and
a reference (in other words, a link) to the next node in the sequence; more complex variants add
additional links. This structure allows for efficient insertion or removal of elements from any
position in the sequence.

A linked list whose nodes contain two fields: an integer value and a link to the next node. The
last node is linked to a terminator used to signify the end of the list.
Linked lists are among the simplest and most common data structures. They can be used to
implement several other common abstract data types, including lists (the abstract data
type), stacks, queues, associative arrays, and S-expressions, though it is not uncommon to
implement the other data structures directly without using a list as the basis of implementation.
The principal benefit of a linked list over a conventional array is that list elements can easily
be inserted or removed without reallocating or reorganizing the entire structure, because the
data items need not be stored contiguously in memory or on disk, whereas an array usually has
to be declared with a fixed size before the program runs. Linked lists allow
insertion and removal of nodes at any point in the list, and can do so with a constant number of
operations if the link preceding the link being added or removed is maintained during list
traversal.
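
A minimal sketch (not from the article) of the constant-time insertion described above: given a
pointer to the node after which the new node should go, no other nodes need to be touched.

struct node {
    int data;
    struct node *next;
};

void insert_after(struct node *prev, struct node *newnode)
{
    newnode->next = prev->next;   /* new node takes over prev's successor */
    prev->next = newnode;         /* prev now links to the new node */
}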
On the other hand, simple linked lists by themselves do not allow random access to the data or
any form of efficient indexing. Thus, many basic operations, such as obtaining the last node of
the list (assuming that the last node is not maintained as a separate node reference in the list
structure), finding a node that contains a given datum, or locating the place where a new node
should be inserted, may require sequential scanning of most or all of the list elements. The
advantages and disadvantages of using linked lists are as follows:

Advantages:

Linked lists are a dynamic data structure, allocating the needed memory while the program is
running.

Insertion and deletion node operations are easily implemented in a linked list.

Linear data structures such as stacks and queues are easily implemented with a linked list.

They can grow and shrink at run time without the overhead of reallocating and copying the entire structure.

Disadvantages:

They have a tendency to waste memory due to pointers requiring extra storage space.

Nodes in a linked list must be read in order from the beginning as linked lists are
inherently sequential access.

Nodes are stored non-contiguously, greatly increasing the time required to access individual
elements within the list.

Difficulties arise in linked lists when it comes to reverse traversing. Singly linked lists are
extremely difficult to navigate backwards, and while doubly linked lists are somewhat easier
to read, memory is wasted in allocating space for a back pointer.

Linked Lists
Introduction
One disadvantage of using arrays to store data is that arrays are static structures and therefore
cannot easily be extended or reduced to fit the data set. Arrays also make new insertions and
deletions expensive to carry out. In this chapter we consider another data structure, called a
linked list, that addresses some of the limitations of arrays.
A linked list is a linear data structure where each element is a separate object.

Each element (we will call it a node) of a list comprises two items - the data and a
reference to the next node. The last node has a reference to null. The entry point into a linked list
is called the head of the list. It should be noted that the head is not a separate node, but a
reference to the first node. If the list is empty then the head is a null reference.
A linked list is a dynamic data structure. The number of nodes in a list is not fixed and can grow
and shrink on demand. Any application which has to deal with an unknown number of objects
can benefit from using a linked list.
One disadvantage of a linked list against an array is that it does not allow direct access to the
individual elements. If you want to access a particular item then you have to start at the head and
follow the references until you get to that item.
Another disadvantage is that a linked list uses more memory compared with an array - we need an
extra 4 bytes (on a 32-bit CPU) in each node to store a reference to the next node.

Types of Linked Lists


A singly linked list is described above.
A doubly linked list is a list that has two references, one to the next node and another to
the previous node.

Another important type of linked list is the circular linked list, where the last node of the list
points back to the first node (or the head) of the list.
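
As a minimal sketch (in C, for illustration; the Java Node class below is the singly linked
analogue), the node layouts for these variants differ only in their link fields:

struct dnode {                 /* doubly linked list node */
    int data;
    struct dnode *next;        /* link to the next node */
    struct dnode *prev;        /* link to the previous node */
};

/* In a circular linked list the node layout is the same as in a singly
   linked list, but the last node's next field points back to the first
   node instead of being NULL. */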

The Node class


In Java you are allowed to define a class (say, B) inside of another class (say, A). The class A is
called the outer class, and the class B is called the inner class. The purpose of inner classes is
purely to be used internally as helper classes. Here is the LinkedList class with the inner Node
class:
private static class Node<AnyType>
{
    private AnyType data;
    private Node<AnyType> next;

    public Node(AnyType data, Node<AnyType> next)
    {
        this.data = data;
        this.next = next;
    }
}
An inner class is a member of its enclosing class and has access to the other members (including
private ones) of the outer class; vice versa, the outer class has direct access to all members
of the inner class. An inner class can be declared private, public, protected, or package private.
There are two kinds of inner classes: static and non-static. A static inner class cannot refer directly
to instance variables or methods defined in its outer class: it can use them only through an object
reference.
We implement the LinkedList class with two inner classes: a static Node class and a non-static
LinkedListIterator class. See LinkedList.java for a complete implementation.

Stack (abstract data type)


From Wikipedia, the free encyclopedia
"Pushdown" redirects here. For the strength training exercise, see Pushdown (exercise).
It has been suggested that this article be merged with LIFO (computing).
(Discuss)Proposed since August 2014.


Simple representation of a stack


In computer science, a stack is a particular kind of abstract data type or collection in which the
principal (or only) operations on the collection are the addition of an entity to the collection,
known as push, and removal of an entity, known as pop.[1] The relation between the push and pop
operations is such that the stack is a Last-In-First-Out (LIFO) data structure. In a LIFO data
structure, the last element added to the structure must be the first one to be removed. This is
equivalent to the requirement that, considered as a linear data structure, or more abstractly a
sequential collection, the push and pop operations occur only at one end of the structure, referred
to as the top of the stack. Often a peek or top operation is also implemented, returning the value
of the top element without removing it.
A stack may be implemented to have a bounded capacity. If the stack is full and does not contain
enough space to accept an entity to be pushed, the stack is then considered to be in
an overflow state. The pop operation removes an item from the top of the stack. A pop either
reveals previously concealed items or results in an empty stack, but if the stack is empty it goes
into an underflow state, which means no items are present in the stack to be removed.
A stack is a restricted data structure, because only a small number of operations are performed
on it. The nature of the pop and push operations also means that stack elements have a natural
order. Elements are removed from the stack in the reverse order to the order of their addition.
Therefore, the lower elements are those that have been on the stack the longest.[2]
Contents

1 History
2 Abstract definition
3 Inessential operations
4 Software stacks
  4.1 Implementation
    4.1.1 Array
    4.1.2 Linked list
  4.2 Stacks and programming languages
5 Hardware stacks
  5.1 Basic architecture of a stack
  5.2 Hardware support
    5.2.1 Stack in main memory
    5.2.2 Stack in registers or dedicated memory
6 Applications
  6.1 Expression evaluation and syntax parsing
  6.2 Backtracking
  6.3 Runtime memory management
7 Security
8 See also
9 References
10 Further reading
11 External links
History
The stack was first proposed in 1946, in the computer design of Alan M. Turing (who used the
terms "bury" and "unbury") as a means of calling and returning from subroutines.
The Germans Klaus Samelson and Friedrich L. Bauer of Technical University
Munich proposed the idea in 1955 and filed a patent in 1957.[3] The same concept was
developed, independently, by the Australian Charles Leonard Hamblin in the first half of 1957.[4]
Abstract definition
A stack is a basic computer science data structure and can be defined in an abstract,
implementation-free manner, or it can be generally defined as a linear list of items in which all
additions and deletions are restricted to one end, the top.
This is a VDM (Vienna Development Method) description of a stack:[5]
Function signatures:
init: -> Stack
push: N x Stack -> Stack
top: Stack -> (N U ERROR)
pop: Stack -> Stack
isempty: Stack -> Boolean
(where N indicates an element (natural numbers in this case), and U indicates set union)
Semantics:
top(init()) = ERROR
top(push(i,s)) = i
pop(init()) = init()
pop(push(i, s)) = s
isempty(init()) = true

isempty(push(i, s)) = false
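
For example (a short worked application of these rules, not part of the original VDM text),
pushing 5 and then 3 onto an empty stack gives:
top(push(3, push(5, init()))) = 3
pop(push(3, push(5, init()))) = push(5, init())
top(pop(push(3, push(5, init())))) = 5
isempty(pop(pop(push(3, push(5, init()))))) = true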

Inessential operations
In many implementations, a stack has more operations than "push" and "pop". An example is
"top of stack", or "peek", which observes the top-most element without removing it from the
stack.[6] Since this can be done with a "pop" and a "push" with the same data, it is not essential.
An underflow condition can occur in the "stack top" operation if the stack is empty, the same as
"pop". Also, implementations often have a function which just returns whether the stack is
empty.
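
As a rough illustration (assuming the array-based STACK, push() and pop() defined in the
Implementation section below), a "peek" can be written in terms of a pop followed by a push of
the same value, exactly as described above:

int peek(STACK *ps)
{
    int top = pop(ps);   /* observe the top element... */
    push(ps, top);       /* ...and immediately restore it */
    return top;
}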
Software stacks

Implementation
In most high-level languages, a stack can be easily implemented either through an array or
a linked list. What identifies the data structure as a stack in either case is not the implementation
but the interface: the user is only allowed to push items onto, or pop items off, the array or linked
list, with few other helper operations. The following will demonstrate both implementations, using C.

Array
The array implementation aims to create an array where the first element (usually at the zero
offset) is the bottom. That is, array[0] is the first element pushed onto the stack and the last
element popped off. The program must keep track of the size, or length, of the stack. The
stack itself can therefore be effectively implemented as a two-element structure in C:
#include <stdio.h>
#include <stdlib.h>
#define STACKSIZE 100 /* capacity; the exact value is illustrative */

typedef struct {
    size_t size;
    int items[STACKSIZE];
} STACK;
The push() operation is used both to initialize the stack, and to store values to it. It is responsible
for inserting (copying) the value into the ps->items[] array and for incrementing the element
counter (ps->size). In a responsible C implementation, it is also necessary to check whether the
array is already full, to prevent an overrun.
void push(STACK *ps, int x)
{
    if (ps->size == STACKSIZE) {
        fputs("Error: stack overflow\n", stderr);
        abort();
    } else
        ps->items[ps->size++] = x;
}
The pop() operation is responsible for removing a value from the stack, and decrementing the
value of ps->size . A responsible C implementation will also need to check that the array is not
already empty.
int pop(STACK *ps)
{
    if (ps->size == 0) {
        fputs("Error: stack underflow\n", stderr);
        abort();
    } else
        return ps->items[--ps->size];
}
If we use a dynamic array, then we can implement a stack that can grow or shrink as much as
needed. The size of the stack is simply the size of the dynamic array. A dynamic array is a very
efficient implementation of a stack, since adding items to or removing items from the end of a
dynamic array is amortized O(1) time.
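
A minimal sketch of such a growable stack (the names and growth policy are illustrative, not
from the original article) might double the capacity of the array whenever it fills:

#include <stdlib.h>

typedef struct {
    size_t size;       /* number of items currently on the stack */
    size_t capacity;   /* number of allocated slots in items[] */
    int *items;
} DYNSTACK;            /* a DYNSTACK initialised to all zeros starts empty */

void dynpush(DYNSTACK *ps, int x)
{
    if (ps->size == ps->capacity) {    /* grow the array when it is full */
        ps->capacity = ps->capacity ? 2 * ps->capacity : 4;
        ps->items = realloc(ps->items, ps->capacity * sizeof *ps->items);
        /* error checking of realloc omitted for brevity */
    }
    ps->items[ps->size++] = x;
}

int dynpop(DYNSTACK *ps)
{
    return ps->items[--ps->size];      /* caller must check for underflow */
}

Because the array only grows occasionally, the occasional O(n) copy performed by realloc
averages out to the amortized O(1) push mentioned above.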
Linked list
The linked-list implementation is equally simple and straightforward. In fact, a simple singly
linked list is sufficient to implement a stack: it only requires that the head node or element can
be removed, or popped, and that a node can only be inserted by becoming the new head node.

Unlike the array implementation, our structure typedef corresponds not to the entire stack
structure, but to a single node:
typedef struct stack {
    int data;
    struct stack *next;
} STACK;
Such a node is identical to a typical singly linked list node, at least to those that are implemented
in C.
The push() operation both initializes an empty stack, and adds a new node to a non-empty one.
It works by receiving a data value to push onto the stack, along with a target stack, creating a
new node by allocating memory for it, and then inserting it into a linked list as the new head:
void push(STACK **head, int value)
{
    STACK *node = malloc(sizeof(STACK));  /* create a new node */

    if (node == NULL) {
        fputs("Error: no space available for node\n", stderr);
        abort();
    } else {
        /* initialize node */
        node->data = value;
        node->next = empty(*head) ? NULL : *head;  /* insert new head if any */
        *head = node;
    }
}
A pop() operation removes the head from the linked list, and reassigns the head pointer to what
was previously the second node. It checks whether the list is empty before popping from it:
int pop(STACK **head)
{
    if (empty(*head)) {
        /* stack is empty */
        fputs("Error: stack underflow\n", stderr);
        abort();
    } else {
        /* pop a node */
        STACK *top = *head;
        int value = top->data;
        *head = top->next;
        free(top);
        return value;
    }
}
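
The empty() helper used by push() and pop() above is not shown in the article; a plausible
definition (an assumption, consistent with how the head pointer is used here) is simply a
null-pointer test:

int empty(STACK *head)
{
    return head == NULL;   /* the stack is empty when there is no head node */
}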

Tree structure
From Wikipedia, the free encyclopedia

A tree structure showing the possible hierarchical organization of an encyclopedia.

The original Encyclopédie used a tree diagram to show the way in which its subjects were
ordered.

A tree structure or tree diagram is a way of representing the hierarchical nature of a structure in
a graphical form. It is named a "tree structure" because the classic representation resembles a tree,
even though the chart is generally upside down compared to an actual tree, with the "root" at the
top and the "leaves" at the bottom.

A tree structure is conceptual, and appears in several forms. For a discussion of tree structures in
specific fields, see Tree (data structure) for computer science; insofar as it relates to graph
theory, see tree (graph theory) or tree (set theory). Other related pages are listed below.
Contents

1 Terminology and properties
2 Examples of tree structures
3 Representing trees
  3.1 Classical node-link diagrams
  3.2 Nested sets
  3.3 Layered "icicle" diagrams
  3.4 Outlines and tree views
  3.5 Nested parentheses
  3.6 Radial trees
4 See also
5 References
6 Further reading
7 External links
Terminology and properties
The tree elements are called "nodes". The lines connecting elements are called "branches".
Nodes without children are called leaf nodes, "end-nodes", or "leaves".
Every finite tree structure has a member that has no superior. This member is called the "root"
or root node. The root is the starting node. But the converse is not true: infinite tree structures
may or may not have a root node.
The names of relationships between nodes are modeled after family relations. The gender-neutral
names "parent" and "child" have largely displaced the older "father" and "son" terminology,
although the term "uncle" is still used for other nodes at the same level as the parent.

A node's "parent" is a node one step higher in the hierarchy (i.e. closer to the root node) and
lying on the same branch.

"Sibling" ("brother" or "sister") nodes share the same parent node.

A node's "uncles" are siblings of that node's parent.

A node that is connected to all lower-level nodes is called an "ancestor". The connected
lower-level nodes are "descendants" of the ancestor node.

In the example, "encyclopedia" is the parent of "science" and "culture", its children. "Art" and
"craft" are siblings, and children of "culture", which is their parent and thus one of their
ancestors. Also, "encyclopedia", being the root of the tree, is the ancestor of "science", "culture",
"art" and "craft". Finally, "science", "art" and "craft", being leaves, are ancestors of no other
node.
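
As an illustrative sketch (not part of the original article), such a hierarchy could be represented
in C by nodes that store a label together with links to their children:

#define MAX_CHILDREN 8   /* illustrative fixed limit */

struct tree_node {
    const char *label;                          /* e.g. "encyclopedia", "culture" */
    struct tree_node *children[MAX_CHILDREN];   /* links to child nodes */
    int n_children;                             /* leaves have n_children == 0 */
};

/* e.g. the "culture" node would have two children, "art" and "craft",
   while "encyclopedia" (the root) has the children "science" and "culture". */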
Tree structures are used to depict all kinds of taxonomic knowledge, such as family trees, the
biological evolutionary tree, the evolutionary tree of a language family, the grammatical
structure of a language (a key example being S → NP VP, meaning a sentence is a noun phrase
and a verb phrase, with each in turn having other components which have other components), the
way web pages are logically ordered in a web site, mathematical trees of integer sets, et cetera.
In a tree structure there is one and only one path from any point to any other point.
Tree structures are used extensively in computer science (see Tree (data structure)) and in
telecommunications.
For a formal definition see set theory, and for a generalization in which children are not
necessarily successors, see prefix order.
Examples of tree structures

A tree map used to represent a directory structure as a nested set.

Internet:

usenet hierarchy

Document Object Model's logical structure,[1] Yahoo! subject index, DMOZ

Operating system: directory structure

Information management: Dewey Decimal System, PSH, this hierarchical bulleted list

Management: hierarchical organizational structures

Computer Science:

binary search tree

Red-Black Tree

AVL tree

R-tree

Biology: evolutionary tree

Business: pyramid selling scheme

Project management: work breakdown structure

Linguistics (syntax): Phrase structure trees

Sports: business chess, playoffs brackets

Mathematics: Von Neumann universe

Group theory: descendant trees

Node Representations
Usually the first step in representing a graph is to map the nodes to a set of contiguous integers.
(0,|V|-1) is the most convenient in C programs - other, more flexible, languages allow you
greater choice! The mapping can be performed using any type of search structure: binary
trees, m-way trees, hash tables, etc.
Adjacency Matrix
Having mapped the vertices to integers, one simple representation for the graph uses
an adjacency matrix. Using a |V| x |V| matrix of booleans, we set a[i][j] = true if an edge
connects i and j. Edges can be undirected, in which case if a[i][j] = true, then a[j][i] = true also,
or directed, in which case a[i][j] != a[j][i] unless there are two edges, one in each direction,
between i and j. The diagonal elements, a[i][i], may be either ignored or, in cases such as state
machines, where the presence or absence of a connection from a node to itself is relevant, set
to true or false as required.
When space is a problem, bit maps can be used for the adjacency matrix. In this case, an ADT
for the adjacency matrix improves the clarity of your code immensely by hiding the bit twiddling
that this space saving requires! In undirected graphs, only one half of the matrix needs to be
stored, but you will need to calculate the element addresses explicitly yourself. Again, an ADT
can hide this complexity from a user! If the graph is dense, i.e. most of the nodes are connected by
edges, then the O(|V|^2) cost of initialising an adjacency matrix is matched by the cost of
inputting and setting the edges. However, if the graph is sparse, i.e. |E| is closer to |V|, then an
adjacency list representation may be more efficient.
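
As a rough sketch of such an ADT (the names here are illustrative and are not the adj_matrix
type used by the traversal code below), a plain boolean adjacency matrix for an undirected graph
might look like this:

#include <stdlib.h>

typedef struct {
    int n;      /* number of vertices */
    char *a;    /* n*n flags: a[i*n + j] != 0 iff an edge connects i and j */
} adjmat;

adjmat *adjmat_create(int n)
{
    adjmat *m = malloc(sizeof *m);
    m->n = n;
    m->a = calloc((size_t)n * n, 1);   /* the O(|V|^2) initialisation */
    return m;
}

void adjmat_set(adjmat *m, int i, int j)   /* add an undirected edge */
{
    m->a[i * m->n + j] = 1;
    m->a[j * m->n + i] = 1;
}

int adjmat_adjacent(const adjmat *m, int i, int j)
{
    return m->a[i * m->n + j];
}

A bit-map or triangular version would keep this same interface and only change the storage
behind these functions, which is exactly the clarity benefit described above.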
Adjacency List Representation
Adjacency lists are lists of the nodes that are connected to a given node. For each node, a linked list
of the nodes connected to it can be set up. Adding an edge to a graph will generate two entries in the
adjacency lists - one in the list of each of its two endpoints, as sketched below.
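
A minimal sketch of this (the entry layout and function name are illustrative, not taken from the
notes): each list entry records the index of one neighbouring vertex, and adding an undirected
edge (i, j) prepends one entry to each of the two lists.

#include <stdlib.h>

typedef struct adj_entry {
    int vertex;               /* index of the neighbouring node */
    struct adj_entry *next;
} adj_entry;

void add_edge(adj_entry *adj_list[], int i, int j)
{
    adj_entry *e = malloc(sizeof *e);   /* entry in i's list */
    e->vertex = j;
    e->next = adj_list[i];
    adj_list[i] = e;

    e = malloc(sizeof *e);              /* entry in j's list */
    e->vertex = i;
    e->next = adj_list[j];
    adj_list[j] = e;
}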
Traversing a graph
Depth-first Traversal
A depth-first traverse of a graph uses an additional array to flag nodes that it has visited already.
Using the adjacency matrix structure:
struct t_graph {
    int n_nodes;
    graph_node *nodes;
    int *visited;
    adj_matrix am;
};

static int search_index = 0;

void search( graph g ) {
    int k;
    for(k=0;k<g->n_nodes;k++) g->visited[k] = FALSE;
    search_index = 0;
    for(k=0;k<g->n_nodes;k++) {
        if ( !g->visited[k] ) visit( g, k );
    }
}
The visit function is called recursively:
void visit( graph g, int k ) {
    int j;
    g->visited[k] = ++search_index;
    for(j=0;j<g->n_nodes;j++) {
        if ( adjacent( g->am, k, j ) ) {
            if ( !g->visited[j] ) visit( g, j );
        }
    }
}
This procedure checks each of the |V|^2 entries of the adjacency matrix, so it is clearly O(|V|^2).
Using an adjacency list representation, the visit function changes slightly:
struct t_graph {
    int n_nodes;
    graph_node *nodes;
    AdjListNode *adj_list;
    int *visited;
    adj_matrix am;
};

void search( graph g ) {
    ... /* As adjacency matrix version */
}

void visit( graph g, int k ) {
    AdjListNode al_node;
    int j;
    g->visited[k] = ++search_index;
    al_node = ListHead( g->adj_list[k] );
    while( al_node != NULL ) {
        j = ANodeIndex( ListItem( al_node ) );
        if ( !g->visited[j] ) visit( g, j );
        al_node = ListNext( al_node );
    }
}
Note that I've assumed the existence of a List ADT with the methods ListHead, ListItem and
ListNext, and also an AdjListNode ADT with an ANodeIndex method.
The complexity of this traversal can be readily seen to be O(|V|+|E|), because it sets visited for
each node and then visits each edge twice (each edge appears in two adjacency lists).
Breadth-first Traversal
To scan a graph breadth-first, we use a FIFO queue.
static queue q;

void search( graph g ) {
    int k;
    q = ConsQueue( g->n_nodes );
    for(k=0;k<g->n_nodes;k++) g->visited[k] = 0;
    search_index = 0;
    for(k=0;k<g->n_nodes;k++) {
        if ( !g->visited[k] ) visit( g, k );
    }
}

void visit( graph g, int k ) {
    AdjListNode al_node;
    int j;
    AddIntToQueue( q, k );
    while( !Empty( q ) ) {
        k = QueueHead( q );   /* assumed to remove the front item from q */
        g->visited[k] = ++search_index;
        al_node = ListHead( g->adj_list[k] );
        while( al_node != NULL ) {
            j = ANodeIndex( al_node );
            if ( !g->visited[j] ) {
                AddIntToQueue( q, j );
                g->visited[j] = -1; /* C hack, 0 = false! */
            }
            al_node = ListNext( al_node );
        }
    }
}

Matrix Representation
The matrix representation uses the same basics as introduced already, and is a convenient method
of working with vectors. Superposition: a complete set of vectors can be used to express any other
vector, and a complete set of N vectors can form other complete sets of N vectors. For a Hermitian
operator one can find a set of vectors satisfying its eigenvalue equation - its eigenvectors and
eigenvalues. The matrix method finds the superposition of basis states that are eigenstates of a
particular operator and obtains the eigenvalues, working in an orthonormal basis set in an
N-dimensional vector space.
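
Written out explicitly (this relation is implied but not stated in the fragment above), the set of
vectors for a Hermitian operator A satisfies the eigenvalue equation A v_n = a_n v_n, with real
eigenvalues a_n; the eigenvectors v_n can be chosen to form an orthonormal basis of the
N-dimensional vector space.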

S-ar putea să vă placă și