
EACT 633 NEURAL NETWORKS - QUESTION BANK FOR MID SEM EXAM

1. What is the 100-step rule? Mention the neural network characteristics adapted from biology.
Ans: A human can recognize a familiar picture within about 0.1 second, although individual neurons need on the order of a millisecond per switching operation; the whole recognition task must therefore be completed within roughly 100 sequential, but massively parallel, processing steps. This observation is called the 100-step rule.
The main characteristics we try to adapt from biology are:
 Self-organization and learning capability,
 Generalization capability and
 Fault tolerance.
2. Mention the different components of a neuron and the work done by them.
Ans: Dendrites: receive electrical signals from many different sources and transfer them into the cell body.
Cell body (soma): contains the cell nucleus and accumulates (sums) the incoming signals.
Axon: once the accumulated signal exceeds the threshold, the resulting pulse is transferred to other neurons by means of the axon.
3. How is a biological neuron activated? Explain.
Ans: Signals collected by the dendrites are summed in the soma. If the accumulated membrane potential exceeds a certain threshold value, the neuron fires, i.e. it sends an electrical pulse along its axon to the connected neurons. From the biological point of view, the threshold value thus represents the stimulus level at which a neuron starts firing.
4. Define the concept of time and the activation function of a neuron. Mention various activation functions used in ANN.
Ans: Time is divided into discrete steps: the current time (present time) is referred to as (t), the next time step as (t+1), and the preceding one as (t−1). The activation function determines a neuron's new activation from its network input, its previous activation state and its threshold value.
Common activation functions:
 binary threshold (Heaviside) function,
 Fermi function or logistic function,
 hyperbolic tangent.
The output function (often the identity) then maps the activation to the value that is passed on to other neurons.
5. What is a feedforward network? Explain with an example.
Ans: A feedforward network is a network whose connections only run from each layer towards the next layer in the direction of the output, so that there are no cycles.
The neurons are grouped in the following layers:
 Input layer,
 Hidden layer(s) &
 Output layer.
Example: a 2-3-1 network with two input neurons, three hidden neurons and one output neuron; its weight matrix can be visualized in a Hinton diagram.
6. What are recurrent networks?
Ans: Recurrence is defined as the process of a neuron influencing itself by any means or connection; recurrent networks are networks containing such recurrences, whether direct (a neuron connected to itself), indirect (via other neurons) or lateral (within one layer).
7. Define bias neuron.
Ans: A bias neuron is a neuron whose output value is always 1 and which is represented by BIAS; it allows the threshold values of other neurons to be treated as ordinary connection weights.
8. Would it be useful (from your point of view) to insert one bias neuron in each layer of a layer-based network, such as a feedforward network? Discuss this in relation to the representation and implementation of the network. Will the result of the network change?
Ans: A single bias neuron is sufficient for the whole network, since its output is always 1 and it can be connected to neurons in every layer. In drawings the bias neuron is omitted for clarity, but we know that it exists and that the threshold values can simply be treated as weights because of it. In an implementation, one bias neuron (conventionally given the neuron index 0) is implemented instead of neuron-individual biases. Inserting one bias neuron per layer would only change the representation and implementation, not the function computed: the result of the network does not change.
9. Show for the Fermi function f(x) as well as for the hyperbolic tangent tanh(x) that their derivatives can be expressed by the respective functions themselves, so that the two statements
1. f'(x) = f(x) · (1 − f(x)) and
2. tanh'(x) = 1 − tanh²(x)
are true.
Ans: 1. With f(x) = 1/(1 + e^(−x)) we get
f'(x) = e^(−x)/(1 + e^(−x))² = [1/(1 + e^(−x))] · [e^(−x)/(1 + e^(−x))] = f(x) · (1 − f(x)),
since 1 − f(x) = e^(−x)/(1 + e^(−x)).
2. With tanh(x) = (e^x − e^(−x))/(e^x + e^(−x)), the quotient rule gives
tanh'(x) = [(e^x + e^(−x))² − (e^x − e^(−x))²]/(e^x + e^(−x))² = 1 − tanh²(x).
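Both identities can be checked numerically against central-difference derivatives; a minimal sketch using NumPy:

```python
import numpy as np

def fermi(x):
    # Fermi (logistic) function f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

xs = np.linspace(-4, 4, 9)
h = 1e-6

# Numerical derivatives via central differences
fermi_num = (fermi(xs + h) - fermi(xs - h)) / (2 * h)
tanh_num = (np.tanh(xs + h) - np.tanh(xs - h)) / (2 * h)

# Identities: f'(x) = f(x)(1 - f(x)),  tanh'(x) = 1 - tanh^2(x)
assert np.allclose(fermi_num, fermi(xs) * (1 - fermi(xs)), atol=1e-6)
assert np.allclose(tanh_num, 1 - np.tanh(xs) ** 2, atol=1e-6)
print("both identities hold")
```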
10. Mention the different paradigms of learning.
Ans: 1. developing new connections,
2. deleting existing connections,
3. changing connection weights,
4. changing the threshold values of neurons,
5. varying one or more of the three neuron functions,
6. developing new neurons,
7. deleting existing neurons.
11. What is unsupervised learning and supervised learning?
Ans: Unsupervised learning: the training set only consists of input patterns; the network tries by itself to detect similarities and to generate pattern classes.
Supervised learning: the training set additionally contains the correct results (teaching inputs) for each pattern, so that the network's output can be compared with the desired output.
12. Define reinforcement learning.
Ans: The training set consists of input patterns; after completion of a sequence a value is
returned to the network indicating whether the result was right or wrong.
13. What is offline and online learning? How can the learned patterns be stored in the network?
Ans: Offline learning: several training patterns are entered into the network at once, the errors are accumulated and it learns for all patterns at the same time.
Online learning: the network learns directly from the errors of each training sample.
In both cases the learned patterns are stored in the connection weights (and, where used, the threshold values) of the network.
14. Define training set, training pattern and teaching input.
Ans: Training set: a set P of training patterns, which we use to train our neural network.
Training pattern: an input vector p with the components p1, p2, ..., pn.
Teaching input: tj is the desired and correct value that neuron j should output after the input of a certain training pattern; the vector of all teaching inputs is t = (t1, t2, ..., tn).
15. What is an error vector? Define specific error, Euclidean distance, root mean square and total error.
Ans: Error vector: for several output neurons Ω1, Ω2, ..., Ωn the error vector Ep = (t1 − y1, ..., tn − yn) is the difference between the teaching input t and the actual output y under a training pattern p.
Specific error: Errp = 1/2 · Σi (ti − yi)² is based on a single training sample, which means it is generated online.
Euclidean distance: the Euclidean distance between two vectors t and y is defined as sqrt( Σi (ti − yi)² ).
Root mean square: the root mean square of two vectors t and y is defined as sqrt( ( Σi (ti − yi)² ) / n ).
Total error: the total error Err = Σ_{p∈P} Errp is based on all training samples, which means it is generated offline.
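These quantities can be sketched in a few lines (the vectors t and y below are made-up example values):

```python
import numpy as np

def specific_error(t, y):
    # Errp = 1/2 * sum((t_i - y_i)^2) for a single training sample
    return 0.5 * np.sum((t - y) ** 2)

def euclidean(t, y):
    return np.sqrt(np.sum((t - y) ** 2))

def rms(t, y):
    return np.sqrt(np.mean((t - y) ** 2))

t = np.array([1.0, 0.0, 1.0])   # teaching input
y = np.array([0.8, 0.1, 0.6])   # actual network output
print(specific_error(t, y))      # 0.105
print(euclidean(t, y))           # ~0.458
print(rms(t, y))                 # ~0.265
```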

16. Define gradient and gradient descent. Explain the gradient descent learning algorithm.
Ans: The gradient g of an n-dimensional differentiable function f(x1, x2, ..., xn) is defined for any point; it is the vector of partial derivatives, written g(x1, x2, ..., xn) = ∇f(x1, x2, ..., xn), and it points in the direction of the steepest ascent of f.
Gradient descent means going from a point s against the direction of g, i.e. towards −g, with steps proportional to |g|, towards smaller and smaller values of f. As a learning algorithm: repeat s ← s − η · ∇f(s) until the gradient (or the improvement per step) becomes sufficiently small.
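A minimal gradient-descent sketch for an assumed quadratic function f(x, y) = (x − 1)² + (y + 2)², whose minimum is at (1, −2):

```python
import numpy as np

def grad_descent(f_grad, s, eta=0.1, steps=1000):
    # Repeatedly step against the gradient: s <- s - eta * grad f(s)
    for _ in range(steps):
        s = s - eta * f_grad(s)
    return s

# gradient of f(x, y) = (x - 1)^2 + (y + 2)^2
grad = lambda s: 2 * (s - np.array([1.0, -2.0]))
s_min = grad_descent(grad, np.array([5.0, 5.0]))
print(s_min)   # close to [1, -2]
```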
17. Define hebbian learning rule.
Ans: "If neuron j receives an input from neuron i and if both neurons are strongly active at the
same time, then increase the weight wi,j (i.e. the strength of the connection between i and j)."
∆wi,j ∼ ηoiaj
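The rule can be sketched as a single vectorized update (the outputs o and the activation a below are made-up example values):

```python
import numpy as np

eta = 0.5
o = np.array([1.0, 0.0, 1.0])   # outputs o_i of the presynaptic neurons
a = 1.0                          # activation a_j of the postsynaptic neuron
w = np.zeros(3)                  # weights w_{i,j}

# Hebbian update: delta w_{i,j} = eta * o_i * a_j
w += eta * o * a
print(w)   # [0.5, 0.0, 0.5] -- only weights from active neurons grow
```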
18. Calculate the average value µ and the standard deviation σ for the following data points.
p1 = (2,2,2) p2 = (3,3,3) p3 = (4,4,4) p4 = (6,0,0) p5 = (0,6,0) p6 = (0,0,6)
Ans:
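A possible computation, assuming σ is taken as the root mean square distance of the points from µ (if the exercise intends a per-component standard deviation instead, the code would need to change accordingly):

```python
import numpy as np

P = np.array([[2, 2, 2], [3, 3, 3], [4, 4, 4],
              [6, 0, 0], [0, 6, 0], [0, 0, 6]], dtype=float)

mu = P.mean(axis=0)                         # average value, component-wise
dists_sq = np.sum((P - mu) ** 2, axis=1)    # squared distances to the mean
sigma = np.sqrt(dists_sq.mean())            # rms distance from the mean

print(mu)      # [2.5 2.5 2.5]
print(sigma)   # ~3.71
```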
19. Draw the architecture of a single layer.

20. Define information processing neuron.
Ans: Information processing neurons somehow process the input information, i.e. they do not represent the identity function (as pure input neurons do).
21. Define perceptron. Write the perceptron learning algorithm.
Ans: The perceptron is a feedforward network containing a retina that is used only for data acquisition and which has fixed-weighted connections to the first neuron layer (input layer); behind it lies at least one layer of trainable weights towards the output neurons.
Perceptron learning algorithm: present a training pattern; if an output neuron returns 1 instead of 0, reduce the weights from its active inputs; in the inverse case (0 instead of 1), increase them; if the output is correct, change nothing. Repeat over all patterns until no errors remain.
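A sketch of the perceptron learning algorithm, shown here learning the linearly separable AND function (the bias is handled as an extra input fixed to 1, an assumed but standard setup):

```python
import numpy as np

def train_perceptron(X, t, eta=1.0, epochs=20):
    # Binary threshold perceptron with a bias weight.
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append bias input 1
    for _ in range(epochs):
        for x, target in zip(Xb, t):
            y = 1 if w @ x > 0 else 0
            # wrong output 1 -> decrease weights; wrong 0 -> increase
            w += eta * (target - y) * x
    return w

# Learn the (linearly separable) AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
t = np.array([0, 0, 0, 1])
w = train_perceptron(X, t)
pred = [1 if w @ np.append(x, 1) > 0 else 0 for x in X]
print(pred)   # [0, 0, 0, 1]
```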
22. Define error function and delta rule.
Ans: Error function: the error function Err: W → R regards the total error Err as a function of the weight vector W; learning searches for a minimum of this function.
Delta rule (also known as Widrow-Hoff rule): the weight change is proportional to the output of the predecessor neuron and to the difference between desired and actual output,
∆wi,Ω = η · oi · (tΩ − oΩ) = η · oi · δΩ.
23. Draw the architecture of multilayer perceptron


24. Explain the concept of backpropagation of error.
Ans: Backpropagation of error generalizes the delta rule so that it can be used to train multi-stage perceptrons with semilinear activation functions. The error is computed at the output layer and propagated backwards through the network: for an output neuron, δΩ is obtained directly from (tΩ − oΩ) and the derivative of the activation function, while for an inner neuron h, δh is computed from the weighted δ values of the following layer; every weight is then changed by ∆wk,h = η · ok · δh.
25. Draw a neural network realizing XOR function

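One possible such network, sketched in code: a 2-2-1 net of binary threshold neurons where h1 computes OR, h2 computes AND, and the output neuron fires for "OR but not AND" (the weights and thresholds are one hand-chosen solution, not the only one):

```python
step = lambda x: 1 if x > 0 else 0    # binary threshold activation

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # h1 = x1 OR x2
    h2 = step(x1 + x2 - 1.5)          # h2 = x1 AND x2
    return step(h1 - h2 - 0.5)        # fires iff OR but not AND

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [0, 1, 1, 0]
```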
26. Define learning rate.
Ans: The speed and accuracy of a learning procedure are controlled by a learning rate, written as η: the weight changes are proportional to η. A large η gives fast but unstable learning; a small η gives slow but more precise learning.
27. A simple 2-1 network shall be trained with one single pattern by means of backpropagation of error and η = 0.1. Verify if the error Err = Errp = 1/2 (t − y)² converges and, if so, at what value. What does the error curve look like? Let the pattern (p, t) be defined by p = (p1, p2) = (0.3, 0.7) and tΩ = 0.4. Randomly initialize the weights in the interval [−1; 1].
Ans: The error converges to 0: with a single pattern there is no conflict between samples, and the output y can approach the target tΩ = 0.4 arbitrarily closely, so the error curve decreases monotonically towards 0 (quickly at first, then ever more slowly).
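A simulation sketch of this exercise, assuming a Fermi output neuron and no bias (the weight initialization uses a fixed seed for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.3, 0.7])
t = 0.4
eta = 0.1
w = rng.uniform(-1, 1, size=2)        # random init in [-1, 1]

fermi = lambda x: 1 / (1 + np.exp(-x))

errors = []
for _ in range(100_000):
    y = fermi(w @ p)                  # forward pass
    errors.append(0.5 * (t - y) ** 2)
    # gradient descent on Err = 1/2 (t - y)^2; f'(net) = y (1 - y)
    w += eta * (t - y) * y * (1 - y) * p

print(errors[0], errors[-1])          # the error falls towards 0
```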
28. Calculate in a comprehensible way one vector ∆W of all changes in weight by means of the backpropagation of error procedure with η = 1. Let a 2-2-1 MLP with bias neuron be given and let the pattern be defined by p = (p1, p2, tΩ) = (2, 0, 0.1). For all weights with the target Ω the initial value of the weights should be 1. For all other weights the initial value should be 0.5. What is conspicuous about the changes?
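A sketch of the computation, assuming Fermi activations in both the hidden and the output layer and the δ-based backpropagation updates from question 24:

```python
import numpy as np

fermi = lambda x: 1 / (1 + np.exp(-x))

eta = 1.0
p = np.array([2.0, 0.0])
t_omega = 0.1

# Weights including the bias neuron (output value always 1).
# Rows: [input1, input2, bias] -> hidden h1, h2 (initial value 0.5)
W_hidden = np.full((3, 2), 0.5)
# [h1, h2, bias] -> output Omega (initial value 1)
w_out = np.ones(3)

x = np.append(p, 1.0)                 # inputs plus bias
o_h = fermi(x @ W_hidden)             # hidden outputs
o = np.append(o_h, 1.0)               # hidden outputs plus bias
y = fermi(o @ w_out)                  # network output

delta_out = y * (1 - y) * (t_omega - y)
delta_h = o_h * (1 - o_h) * delta_out * w_out[:2]

dW_out = eta * o * delta_out          # changes of weights towards Omega
dW_hidden = eta * np.outer(x, delta_h)

print(dW_out)
print(dW_hidden)
# Conspicuous: all weights from input neuron 2 stay unchanged (its
# output is 0), and both hidden neurons receive identical weight
# changes because of the symmetric initialization.
```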

29. Write the training algorithm for ADALINE.
Ans: 1. Initialize the weights (e.g. to small random values).
2. Apply an input pattern x and compute the linear output y = Σi wi xi.
3. Change every weight by ∆wi = η · xi · (t − y), where t is the teaching input.
4. Repeat steps 2-3 over all patterns until the total error is small enough.
30. Derive delta rule used in ADALINE.
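A sketch of the derivation for a linear output neuron y = Σi wi xi (as in ADALINE), descending the gradient of the specific error:

```latex
\mathrm{Err}_p = \tfrac{1}{2}\,(t - y)^2, \qquad y = \sum_i w_i x_i
\frac{\partial \mathrm{Err}_p}{\partial w_i}
  = -(t - y)\,\frac{\partial y}{\partial w_i}
  = -(t - y)\,x_i
\Delta w_i = -\eta\,\frac{\partial \mathrm{Err}_p}{\partial w_i}
  = \eta\,(t - y)\,x_i
```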
31. Construct ADALINE network to implement AND NOT gate.
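A training sketch for this exercise, using bipolar coding (−1/1) and the Widrow-Hoff update on the linear output; the function x1 AND (NOT x2) is linearly separable, so a single ADALINE suffices:

```python
import numpy as np

def train_adaline(X, t, eta=0.01, epochs=1000):
    # Widrow-Hoff (delta rule) training on the linear output; bias folded in.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for x, target in zip(Xb, t):
            y = w @ x                       # ADALINE adapts on the linear output
            w += eta * (target - y) * x
    return w

# AND NOT: x1 AND (NOT x2), bipolar coding
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
t = np.array([-1, -1, 1, -1], dtype=float)
w = train_adaline(X, t)
pred = [1 if w @ np.append(x, 1.0) > 0 else -1 for x in X]
print(pred)   # [-1, -1, 1, -1]
```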

32. Mention the properties of ANN.
Ans: Self-organization and learning capability, generalization capability, fault tolerance, and massively parallel (distributed) processing; an ANN learns from examples instead of being explicitly programmed.
33. Draw different network topologies used in ANN.
(Diagrams: feedforward topology, completely linked network, recurrent networks.)
34. What is a Hopfield network?
Ans: A Hopfield network consists of a set K of completely linked neurons without direct recurrences (no neuron is connected to itself); the weights are symmetric, and the network evolves towards energy minima, so stored patterns can be recalled from noisy inputs.
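A minimal Hopfield sketch with Hebbian pattern storage; Hopfield networks are usually updated asynchronously, but a synchronous update is used here only to keep the sketch short:

```python
import numpy as np

def hopfield_store(patterns):
    # Hebbian storage: W = sum of outer products, diagonal forced to zero
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)            # no direct recurrences
    return W

def hopfield_recall(W, x, steps=10):
    x = x.copy()
    for _ in range(steps):            # synchronous threshold update
        x = np.where(W @ x >= 0, 1, -1)
    return x

stored = np.array([[1, 1, -1, -1, 1, -1]])
W = hopfield_store(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])    # one flipped component
recalled = hopfield_recall(W, noisy)
print(recalled)                              # recovers the stored pattern
```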
35. Explain how a MADALINE is trained for the XOR function.