
European Journal of Scientific Research ISSN 1450-216X Vol. 27 No. 2 (2009), pp. 199-216 EuroJournals Publishing, Inc. 2009 http://www.eurojournals.com/ejsr.htm

Design and Analog VLSI Implementation of Neural Network Architecture for Signal Processing
Cyril Prasanna Raj P
VLSI System Design Centre, MSR School of Advanced Studies, Bangalore
E-mail: cyril@msrsas.org
Tel: +91-80-23605539; Fax: +91-80-23601983

S.L. Pinjare
VLSI System Design Centre, MSR School of Advanced Studies, Bangalore
E-mail: sl_pinjare@yahoo.com
Tel: +91-80-23605539; Fax: +91-80-23601983

Abstract

Biological systems process analog signals such as images and sound efficiently. To process information the way biological systems do, we make use of Artificial Neural Networks (ANN). The focus of this paper is the implementation of a Neural Network Architecture (NNA) with on-chip learning in analog VLSI for generic signal processing applications. The artificial neural network architecture comprises analog components such as multipliers and adders along with a tan-sigmoid function circuit. The proposed neural architecture is trained using the Back Propagation (BP) algorithm in the analog domain, and a new technique for weight storage with refresh is proposed; the neural architecture is thus a complete analog structure. The multiplier block is implemented using a Gilbert cell and the tansig function is realized using MOS transistors. The functionality of the designed neural architecture is verified for analog operations such as amplification and frequency multiplication, and for digital operations such as AND, OR, NOT and XOR; the network realizes its functionality for the trained targets, which is verified using simulation results. The output level swing achieved for the designed neural architecture is 2.8 Vpp maximum for a 3 V supply, and the circuit converges for a 10 MHz signal within 200 ns. The network designed is extended to image compression in the analog domain, where 50% image compression is achieved using the proposed neural network architecture. Layout design and verification of the proposed design are carried out using Cadence Virtuoso and Synopsys HSPICE. The chip dimensions are 150 µm2.

Keywords: Neural Architecture (NA), Back Propagation Algorithm (BPA), Neural Network

1. Introduction
When we speak of intelligence, it is something that is acquired, learned from past experiences. This intelligence, though a biological term, can be realized through mathematical equations, giving rise to the science of Artificial Intelligence (AI). To implement this intelligence, artificial neurons are used.


These artificial neurons are, in this paper, realized using analog components such as multipliers, adders, differentiators and memories.
Figure 1: 2 input to 1 output Neuron

The neural network shown in Figure 1 can be classified in terms of its implementation into three categories: digital, analog or hybrid. The implementation of the neural architecture (NA) in all these categories requires learning capability to be integrated in the design. This learning capability, or learning rule, is based on mathematical algorithms representing specific applications. It is this implementation of the neuron and the learning algorithm that makes it digital, analog or hybrid. The focus of this paper is to implement the neural architecture with a back propagation learning/training algorithm for data compression. The neuron selected in this paper comprises a multiplier and an adder along with the tan-sigmoid function [1]. The back propagation training algorithm is performed in the analog domain; this neural architecture is thus a complete analog structure. Figure 1 can be expressed mathematically as

n = i_1 w_1 + i_2 w_2    (1)

a = \mathrm{tansig}(n + bias)    (2)

where a is the output of the neuron and n is the intermediate output for the inputs i and neuron weights w. The bias is optional and user defined. Training of the network to realize a functionality is achieved through a known input and a known target for that input, with the initial weights and bias of the network assumed to be some constants. The learning algorithm computes the difference between the neuron output obtained as per equations 1 and 2 and the target. This error is back propagated to update the weight and the bias elements, and the process is continued until the error reaches a permissible set limit. To design and implement a neural architecture in analog VLSI, equations 1 and 2 are to be realized using analog circuits. Analog implementation of the neural network reduces circuit complexity and also avoids the conversion of real-time analog samples to digital signals. The neural architecture (NA) designed in this paper is a 2 input, 1 output neuron with three hidden layer neurons. The NA is a feed-forward network and the learning algorithm used is back propagation realized in the analog domain. The neuron designed in this paper has learning capabilities for both digital and analog applications. To validate the digital learning capabilities of the NA, logic functions like AND, OR, XOR and NOT gates are implemented using the proposed neural architecture. The analog capabilities of the neural network, such as sine wave learning, amplification and frequency multiplication, are also demonstrated. The NA is also used for image compression and decompression considering a small input pixel intensity matrix. This neural architecture can be used for many analog signal processing activities.
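The forward pass of equations 1 and 2 can be summarised by a short behavioural sketch. This is only an idealised software model for illustration; the weight, bias and input values used are arbitrary assumptions, not values from the design.

```python
import numpy as np

def tansig(x):
    # Tan-sigmoid activation; behaviourally equivalent to tanh(x)
    return np.tanh(x)

def neuron_output(i1, i2, w1, w2, bias=0.0):
    n = i1 * w1 + i2 * w2       # equation 1: weighted sum of the two inputs
    return tansig(n + bias)     # equation 2: activation of the sum plus optional bias

# Example with arbitrary values
print(neuron_output(0.3, -0.2, w1=0.5, w2=0.8))
```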

2. Multiple Layers of Neurons


A set of single-layer neurons connected to one another forms a multiple layer network, as shown in figure 2.



Figure 2: Multiple Layers Neural Network

Two inputs v1 and v2 are connected to the neurons in the hidden layer through weights w11 to w16. The outputs of the hidden layer are connected to the output layer through weights w21 to w23. The final output is a21.

2.1. Back Propagation Algorithm
The essence of neural networks lies in the way the weights are identified and used in the network through a definite algorithm to realize a functionality. In this paper the Back Propagation (BP) algorithm is adopted and implemented [1][2][3][4] as a supervised learning method. The target is represented as d_i (desired output) for the i-th output unit, see figure 2. The actual output of the layer is given by a_{2i}. Thus the error or cost function is given by

E = \frac{1}{2}(a_{2i} - d_i)^2    (3)

This process of computing the error is called a forward pass. How the output unit affects the error in the i-th layer is obtained by differentiating equation 3 with respect to a_{2i}:

\frac{\partial E}{\partial a_{2i}} = (a_{2i} - d_i)    (4)

Equation 4 can be written in the other form as

\delta_i = (a_{2i} - d_i)\, d'(a_{2i})    (5)

where d'(a_{2i}) is the derivative of a_{2i}. The weight update is given by

\Delta w_{ij} = \eta\, \delta_i\, a_{1i}    (6)

where a_{1i} is the output of the hidden layer (the input to the output neuron) and \eta is the learning rate [1]. This error has to propagate backwards from the output to the input. The \delta for the hidden layer is calculated as

\delta_{hidden,i} = d'(a_{1i}) \sum_j w_{ij}\, \delta_j    (7)

Weight updates for the hidden layer, using the new \delta, are then done with equation 6. Equations 3-7 depend on the number of neurons present in each layer and the number of layers present in the network.
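Equations 3-7 can be checked with an idealised software model of the 2:3:1 network of figure 2. The sketch below is a behavioural illustration only, not the analog circuit: it assumes a tanh activation (so d'(a) = 1 - a^2), and the learning rate, epoch count and weight initialisation are arbitrary choices.

```python
import numpy as np

def train_231(samples, targets, eta=0.1, epochs=1000, seed=0):
    """Idealised back propagation for a 2:3:1 network (equations 3-7)."""
    rng = np.random.default_rng(seed)
    w1 = rng.uniform(-0.1, 0.1, (3, 2))   # hidden-layer weights (w11..w16)
    w2 = rng.uniform(-0.1, 0.1, (1, 3))   # output-layer weights (w21..w23)
    for _ in range(epochs):
        for x, d in zip(samples, targets):
            a1 = np.tanh(w1 @ x)           # hidden-layer outputs
            a2 = np.tanh(w2 @ a1)          # network output (equations 1-2)
            # Equation 5: output delta, with d'(a) = 1 - a^2 for tanh
            delta_o = (a2 - d) * (1.0 - a2 ** 2)
            # Equation 7: hidden delta propagated back through w2
            delta_h = (1.0 - a1 ** 2) * (w2.T @ delta_o)
            # Equation 6: weight updates (gradient-descent sign convention)
            w2 -= eta * np.outer(delta_o, a1)
            w1 -= eta * np.outer(delta_h, x)
    return w1, w2

# Example: learn the AND function with inputs/targets in the -1/+1 range
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
T = np.array([[-1], [-1], [-1], [1]], dtype=float)
w1, w2 = train_231(X, T)
print([np.tanh(w2 @ np.tanh(w1 @ x)).item() for x in X])
```

With the AND patterns (inputs and targets in the -1 to +1 range, as used later in section 5.3) the outputs move towards the targets over the training epochs, mirroring the supervised learning behaviour described above.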

3. Analog Components for Neural Architecture


The inputs to the neuron shown in figure 2 are multiplied by the weight matrix; the resultant outputs are summed up and passed through a neuron activation function (NAF). The output obtained from the activation function is taken to the next layer for further processing. The multiplier block, adder


block and the activation function model the artificial neural network. The NAF block also provides the derivative of its output, needed to implement equation 5. The blocks used are as follows:
1. Multiplication block
2. Adders
3. NAF block with derivative output

3.1. Multiplier Block (mult)
The Gilbert cell is used as the multiplier block. The schematic of the Gilbert cell is shown in figure 3.
Figure 3: Gilbert cell schematic

The Gilbert cell works in the subthreshold region. The current for an NMOS transistor operating in the subthreshold region is given by [12]

I_{ds} = I_o \, e^{\frac{q[V_g - nV_s]}{nKT}} \left(1 - e^{-\frac{qV_{ds}}{KT}}\right)    (8)

where all the voltages V_g, V_s and V_d are taken with respect to the bulk voltage V_b, KT/q = 25 mV at room temperature, and n = 1.2 to 1.6 is the slope factor. Expanding the pre-exponential factor, the drain current can also be written as

I_{ds} = 2 n \mu C_{ox} \frac{W}{L} \left(\frac{KT}{q}\right)^2 e^{\frac{q[V_g - nV_s]}{nKT}} \, e^{-\frac{q V_{t0}}{nKT}}    (9)

The current equation for a PMOS transistor is the same as equation 8 but with all the voltage polarities reversed:

I_{ds} = I_o \, e^{\frac{q[nV_s - V_g]}{nKT}} \left(1 - e^{\frac{qV_{ds}}{KT}}\right)    (10)
From equation 8, the current I_{ds} becomes independent of V_{ds} (saturates) when qV_{ds}/KT \geq 4, or in other words when V_{ds} is about 100 mV. In that case the term e^{-\frac{qV_{ds}}{KT}} in equation 8 is approximately zero, and

I_{ds} = I_o \, e^{\frac{q[V_g - nV_s]}{nKT}}    (11)

Equation 11 is the saturation current in the subthreshold region, and it shows that I_{ds} is theoretically independent of V_{ds}. Considering each transistor in this saturation region, one can design the Gilbert cell multiplier. The circuit is designed for an I_{ds} of 2 mA with (W/L)_9 = 44.5, (W/L)_{7-8} = 4, (W/L)_{3,4,5,6} = 2 and (W/L)_{1-2} = 120.7.
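The subthreshold bias point implied by equations 8 and 11 can be sanity-checked numerically. The sketch below is illustrative only: the pre-exponential current I_o, the slope factor n and the operating point are assumed values, not extracted device parameters.

```python
import numpy as np

Q = 1.602e-19      # electron charge (C)
K = 1.381e-23      # Boltzmann constant (J/K)
T = 300.0          # room temperature (K), so KT/q is about 25 mV
N = 1.4            # slope factor, assumed mid-range of 1.2-1.6
I0 = 1e-9          # illustrative pre-exponential current (A)

def ids_subthreshold(vg, vs, vds):
    """Equation 8: NMOS subthreshold drain current (voltages w.r.t. bulk)."""
    ut = K * T / Q
    return I0 * np.exp(Q * (vg - N * vs) / (N * K * T)) * (1.0 - np.exp(-vds / ut))

def ids_saturated(vg, vs):
    """Equation 11: saturation value, reached once Vds exceeds ~4KT/q (~100 mV)."""
    return I0 * np.exp(Q * (vg - N * vs) / (N * K * T))

# Beyond roughly 100 mV of Vds the two expressions agree closely
print(ids_subthreshold(0.3, 0.0, 0.2), ids_saturated(0.3, 0.0))
```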

3.2. Adders


The output of the Gilbert cell is in the form of a current (transconductance). The node connecting the respective outputs of the Gilbert cells therefore acts as the adder itself.

3.3. Neuron Activation Function (NAF)
The neuron activation function designed here is the tan sigmoid. The design is basically a variation of the differential amplifier, with a modification to provide the differentiation output. The same circuit should be able to output both the neuron activation function and the differentiation of the activation function. Two designs are considered for the NAF:
1. Differential amplifier as NAF
2. Modified differential amplifier as NAF with differentiation output

3.3.1 Differential Amplifier Design as a Neuron Activation Function (tan)
This block is named tan in the final schematics of the Neural Architecture. A differential amplifier, when designed to work in the subthreshold region, acts as a neuron activation function. To understand this, consider the simple differential pair shown in figure 4.
Figure 4: Simple differential amplifier

Now the currents in the subthreshold region are given by equation 11. Assuming source and base are shorted and both transistors have the same W/L, the output current is

I_{out} = I_b I_o \frac{1 - e^{-\frac{q[V_2 - V_1]}{nKT}}}{1 + e^{-\frac{q[V_2 - V_1]}{nKT}}}    (12)

Thus

I_{out} = I_b I_o \tanh\left(\frac{q[V_2 - V_1]}{2nKT}\right)    (13)

Equation 13 shows the functionality of the differential amplifier as a tan sigmoid function generator. As is evident from equation 13, I_{out} is a combination of the bias current and the voltage input, so the circuit can also be used as a multiplier when one input is a current and the other is a voltage. The design uses a bias current of 150 nA with (W/L)_5 = 3.3, (W/L)_{3-4} = 2 and (W/L)_{1-2} = 1.5.

3.3.2. Modified Differential Amplifier Design for Differentiation Output (fun)
The schematic of the design shown in figure 4 is used as the tan sigmoid function generator, with a modification for the differentiation output. This block is named fun in the final schematic. The NAF function can be derived from the same differential pair configuration, but the structure has to be modified for the differentiation output. The differentiation of the activation function of equation 13 is a sech^2(x) shape. The current for the differentiation output is

I_d = I_b I_o \, \mathrm{sech}^2\left(\frac{q[V_2 - V_1]}{2nKT}\right)    (14)


The schematic of the design is shown in figure 5. The circuit is designed to be functional in the subthreshold region, with (W/L)_{1-2} = 16, (W/L)_{3,6,7,10} = 4, (W/L)_{4,5,8,9} = 16 and (W/L) = 2.
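The behaviour expected from the tan and fun blocks (equations 13 and 14) can be modelled as below. This is a behavioural sketch only: the output is written simply as the bias current scaled by tanh or sech^2, the 150 nA bias follows section 3.3.1, and the slope factor and voltage sweep are assumptions.

```python
import numpy as np

Q, K, T, N = 1.602e-19, 1.381e-23, 300.0, 1.4
IB = 150e-9                     # bias current, 150 nA as in section 3.3.1

def naf(v_diff):
    """Equation 13 (tan block): tanh-shaped output current vs. differential input."""
    return IB * np.tanh(Q * v_diff / (2.0 * N * K * T))

def naf_derivative(v_diff):
    """Equation 14 (fun block): sech^2-shaped derivative output."""
    return IB / np.cosh(Q * v_diff / (2.0 * N * K * T)) ** 2

v = np.linspace(-0.3, 0.3, 7)   # differential input sweep (V)
print(np.round(naf(v) * 1e9, 1))             # saturating tanh curve, in nA
print(np.round(naf_derivative(v) * 1e9, 1))  # bell-shaped derivative, peak at 0 V
```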

4. Realisation of Neural Architecture using Analog Components


The components designed in the previous section are used to implement the neural architecture. The tan block is the differential amplifier block designed in section 3.3.1; this block is used as the neuron activation function as well as for multiplication. The mult block is the Gilbert cell multiplier designed in section 3.1. The fun block is the neuron activation function circuit with differentiation output designed in section 3.3.2.
Figure 5: Neuron Activation Function circuit

Figure 6: Implementation of the Neural Architecture using Analog Blocks

Figure 6 shows exactly how the neural architecture of figure 2 is implemented using analog components. The input layer is the input to the 2:3:1 neuron. The hidden layer is connected to the input layer by the first-layer weights, named w1i. The output layer is connected to the hidden layer through weights w2j. The op is the output of the 2:3:1 neuron.

4.1. Back Propagation Algorithm and Weight Updating
Training is an important part of the Neural Architecture. In equation 13, I_{out} is the multiplication of the input applied to the differential amplifier and the bias current of the amplifier. On the same basis, the differentiation current is multiplied by the difference between the target and the output, implementing equation 5, \delta_i = (a_{2i} - d_i)\, d'(a_{2i}).



Figure 7: Block diagram for weight update scheme for the output neuron

The next step is to calculate the weight update using equation 6, \Delta w_{ij} = \eta\, \delta_i\, a_{1i}. The \delta_i obtained is multiplied with the outputs of the hidden layer (the inputs to the output layer) as shown in figure 7. The output from the mult blocks is the weight update for the weights in the output layer.

4.1.1 Updating the Hidden Layer Weights
The hidden layer weights in the architecture are updated from the errors propagating back from the output layer.
Figure 8: Block diagram for weight update scheme for hidden layer neuron

This update requires the realization of equation 7, \delta_{hidden,i} = d'(a_{1i}) \sum_j w_{ij}\, \delta_j, which deals with the formation of \delta for the hidden layer. A \delta has to be formed for each neuron in the hidden layer. \delta_{1,2,3} is formed considering the weight that the output of the neuron is connected to, as shown in figure 8. The output of the multiplication is then given to the differential amplifier, with the bias current being the differentiation of the respective neuron output in the hidden layer. The \delta formed is then used to update the weights in the hidden layer as implied by equation 6, \Delta w_{ij} = \eta\, \delta_i\, a_{input}, where a_{input} is the input to the hidden layer, in this case the inputs v1 and v2.

4.2. Weight Storage and Update Mechanism
The weights for the proposed neural architecture are stored on capacitors. Figure 9 shows the update mechanism and the initialisation of the weights.

Figure 9: Weight Update and Initialisation Scheme


Cw is used to store the weight and Cwd is used to store the weight update. The clock signal ClkW is used for updating the weight: whenever this clock is high the weight is updated, otherwise there is no update and the previous value of the weight is maintained. The weight initialisation can also be done external to the chip using the clock ClkI; whatever voltage is applied to the weight initialisation line is transferred to Cw when ClkI is high. ClkI has to be made low before starting to train the chip.

4.3. Neuron Application - Image Compression and Decompression
The network architecture proposed and designed in the previous sections is used to compress images. Pixel intensities are fed to the network shown in figure 10 for compression and decompression. The 2:3:1 neuron proposed has an inherent capability of compressing its inputs, as there are two inputs and one output; the compression achieved is 50%. Since the inputs are fed to the network in analog form there is no need for analog-to-digital converters, which is one of the major advantages of this work. A 1:3:2 neural network is designed for the decompression purpose; it has 3 neurons in the hidden layer and two in the output layer. Figure 10 shows the compression and decompression scheme. The training algorithm used in this network is the Back Propagation algorithm, and the error propagates from the decompression block back to the compression block. Once the network is trained for different inputs, the two architectures are separated and can be used as compression and decompression blocks independently.
Figure 10: Image Compression and Decompression using proposed Neural architecture
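Before committing the scheme of figure 10 to silicon, the joint training of the 2:3:1 compressor and the 1:3:2 decompressor can be prototyped in software. The sketch below is an idealised model only; the learning rate, epoch count and example pixel pairs are assumptions, and tanh stands in for the analog tan-sigmoid blocks.

```python
import numpy as np

def train_codec(pixel_pairs, eta=0.05, epochs=2000, seed=1):
    """Joint training of a 2:3:1 compressor and a 1:3:2 decompressor.
    Errors from the decompressor output are back propagated through both halves."""
    rng = np.random.default_rng(seed)
    wc1, wc2 = rng.uniform(-0.1, 0.1, (3, 2)), rng.uniform(-0.1, 0.1, (1, 3))
    wd1, wd2 = rng.uniform(-0.1, 0.1, (3, 1)), rng.uniform(-0.1, 0.1, (2, 3))
    for _ in range(epochs):
        for x in pixel_pairs:                             # target is the input itself
            hc = np.tanh(wc1 @ x); c = np.tanh(wc2 @ hc)  # compressed value
            hd = np.tanh(wd1 @ c); y = np.tanh(wd2 @ hd)  # reconstructed pixel pair
            d_out = (y - x) * (1 - y ** 2)                # output delta (equation 5)
            d_hd = (1 - hd ** 2) * (wd2.T @ d_out)        # hidden deltas (equation 7)
            d_c = (1 - c ** 2) * (wd1.T @ d_hd)
            d_hc = (1 - hc ** 2) * (wc2.T @ d_c)
            wd2 -= eta * np.outer(d_out, hd); wd1 -= eta * np.outer(d_hd, c)
            wc2 -= eta * np.outer(d_c, hc);   wc1 -= eta * np.outer(d_hc, x)
    return (wc1, wc2), (wd1, wd2)

# Two pixel intensities per sample, scaled to roughly +/-1
pairs = np.array([[0.2, -0.4], [0.6, 0.1], [-0.3, 0.5], [0.8, -0.7]])
comp, dec = train_codec(pairs)
```

After training, the weight sets held in `comp` and `dec` can be used separately, mirroring the way the compression and decompression blocks are separated once the analog network has been trained.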

5. Results and Discussions


5.1. Simulation Result for Gilbert Cell Multiplier
The designed Gilbert cell is simulated using HSPICE. The simulation result shown in figure 11 is for the multiplication of two voltages v1 and v2, where v1 is 0.2 Vpp at 10 MHz and v2 is 0.2 Vpp at 1 MHz.



Figure 11: Multiplication operation of Gilbert cell multiplier (mult)

The waveform vout is the multiplication of the voltages v1 and v2 performed by the circuit. The output amplitude is 1.5 mVpp, and vout can be seen to match the theoretical output.
Figure 12: DC characteristics of Gilbert cell multiplier

The input voltages v1 and v2 are varied from -0.4 V to 0.4 V. The characteristic shows a maximum output of about 2 mV at the extremes of the multiplication range, which suits the neuron architecture.

5.2. Simulation Result for Neuron Activation Function
The neuron activation function is basically a tanh(x) function and its differentiation is sech^2(x).

Figure 13: (a) y=tanh(x) (b) Derivative of y


The theoretical values of y = tanh(x) and its derivative dy/dx = sech^2(x) are shown in figure 13.

Figure 14: Circuit output for Neuron Activation function block (tan)

The value of y varies from -1 to 1. The differential amplifier circuit designed to generate this function is designed for input values from -3 to 3. The result of the neuron activation block designed to generate the differentiation output is shown in figure 15.
Figure 15: Neuron activation function and its derivative-DC analysis (fun)


The neuron activation function output is a current variation from 0 to 140 mA. The designed circuit also gives the differentiation of the activation function, and one can see the similarity between figures 13, 14 and 15. The differentiation output is a bump going from 0 uA to 6 uA. The transient simulation for a sine wave input is shown in figure 16.
Figure 16: Neuron Activation Function and Derivative Transient analysis

Figure 16 shows the output of the neuron activation circuit for a sine wave of 0.5 V amplitude and 10 MHz frequency applied as the input. The NAF output and the corresponding differentiation output are shown. The derivative of a sine is a cosine, and the derivative output matches the theoretical derivative calculated by the HSPICE tool, thus validating the result.

5.3. Simulation of the 2:3:1 Neural Architecture
The Neural Architecture functionality was validated for both digital and analog operation. It was verified for logic gates like AND, OR, XOR and NOT. Figure 17 shows the AND operation learned by the 2:3:1 Neural Architecture. The input voltages v1 and v2
Figure 17: AND operation learned by 2:3:1 Neural Architecture


are given to the architecture along with the target. The input voltages swing from -1 V to 1 V, and the target given to the circuit also varies from -1 V to 1 V. The output generated by the neuron clearly follows the target. The output of the neural architecture swings from .726 V to 3.26 V (a 1.23 V swing). The weights are initialised to the value 1 V. Figure 18 shows the OR function learned by the Neural Architecture.
Figure 18: OR operation learned by 2:3:1 Neural Architecture

The weights for the OR operation are initialised to the value 1 V. The input voltages v1 and v2 swing from -1 V to 1 V for the OR operation, and the target given to the circuit also varies from -1 V to 1 V. The output generated by the neuron, as shown in figure 18, follows the target, producing an output swing of 2.4 V. Figure 19 shows the XOR operation of the Neural Architecture. The inputs v1 and v2 for this operation were chosen as 0.5 Vpp, and the target was also given as 0.5 Vpp. The XOR output of the Neural Architecture showed a voltage swing of 1.19 V, with the weights initialised to 0.02 V. The convergence of the output depends on the weight initialisation and on the output swings of the components designed to implement the Neural Architecture. The effect of the weight initialisation and of the output swing of the multiplier block on the convergence is shown in figure 20. All the weights were initialised to 2 V and the input voltage swing was 2 Vpp, from 0 V to 2 V; the output did not converge. The reason for this behaviour is the generation of a huge error, as the derivative function gives a high output. This error is the difference between the target input and the network output, and since the output of the multiplier is large the error will be large too. This leads the network to fall into local minima.
Figure 19: XOR operation learned by 2:3:1 Neural Architecture


The result is the output shown in figure 20. To avoid such behaviour of the neural architecture,
Figure 20: Effect of weight initialisation on convergence

the weights should be initialised to a lower value and the Gilbert cell should be designed for low output swings.
Figure 21: NOT operation learned by 2:3:1 Neural Architecture

Figure 21 shows the NOT operation learned by the Neural Architecture. The voltage was applied only to the input v1, swinging from 1 V to 0 V, while the second input was connected to ground. The target was applied with a 1 V swing, from 0 V to 1 V. The NOT operation output had a voltage swing of 0.546 V.

5.3.3. Validation for Analog Operation
The Neural Architecture was designed with analog components. The simulation result for sine wave learning is shown in figure 22.

Figure 22: 50 KHz Sine Wave Learning By Neural Architecture


The analog input given to the network was a sine wave, with v1 applied as 0.5 Vpp at 50 kHz and v2 connected to ground. The network was trained for a sine wave output with the same frequency and amplitude. As can be seen in figure 22,
Figure 23: 10 MHz Sine Wave Learning By Neural Architecture

the output clearly follows the target applied to the circuit for learning; in effect the network learned to replicate the input sine wave at its output. The network was then made to learn a 10 MHz sine wave with 0.5 Vpp amplitude, shown in figure 23. The network faithfully reproduced the desired 10 MHz target, with an output swing from +0.5 V to -0.5 V. The convergence time for 10 MHz was measured as 200 ns, with 1% error with respect to amplitude. The neural network was also tested for the generation of a sine wave of greater frequency and amplitude than the input signal. Figure 24 shows the result for a 100 kHz output generated from a 50 kHz input, where the input v1 was a sine wave of 50 kHz frequency and 0.5 Vpp amplitude.



Figure 24: Generation of 100 KHz from 50 KHz

The target for the output was set as a sine wave with 1.5 Vpp amplitude and 100 kHz frequency, testing the amplification and frequency multiplication capability of the Neural Architecture. The output shown in figure 24 is a sine wave of 100 kHz frequency and 1.51 Vpp amplitude, validating the capability of the network to work as an amplifier and frequency multiplier.

5.4. Image Compression and Decompression using the Neural Architecture
The Neural Architecture is extended to the application of image compression and decompression. The simulation results for image compression and decompression are shown in figure 25. The input v1 was a sine wave of 1 Vpp at 5 MHz and v2 was a sine wave of 0.5 Vpp at 10 MHz. The compressed output was a DC signal of 233.63 nV.
Figure 25: Image compression and Decompression Simulation

The decompressed output is shown in the same window in figure 25. The decompressed output for v1 was a 1.2 Vpp sine wave at 5 MHz and for v2 a 0.51 Vpp sine wave at 10 MHz. As there is one output for two inputs, a 50% compression is obtained.

5.5. Layout Drawn for the Design
The layout was drawn for the design in 0.35 micron technology using Virtuoso.
Figure 26: Layout for 2:3:1 Neural Architecture without IO pads


The layout was drawn keeping in mind the usage of the block in other neural networks. The layout used only two metal layers (Metal 1 and Metal 2), so that when the block is used in other architectures there are more usable layers for routing. The chip dimensions were 150 µm2.

5.6. Summary of Results Obtained
Table 1: Summary of Results

Parameter | Value
Power supply | 3 V
Input range for Gilbert cell | 3 V
Output swing for Gilbert cell | 1.4 V (max input)
Output range of NAF (tan) | 2.5 V (max input)
Output range of NAF (fun) | 2.5 V (max input)
Differentiation output of NAF (fun) | 1 µA (max input)
Input range for Neural Architecture | 3 V
Output swing of Neural Architecture | 2.8 V
Convergence time (10 MHz) | 200 ns (1% error)
Usability | Analog, Digital and Analog

Table 1 summarises the simulation results obtained for the different blocks designed. The Neural Architecture designed was able to learn both analog and digital inputs. The convergence was verified for an analog input sine wave of 10 MHz and amplitude of 0.5 V. The results were obtained using HSPICE.


The Gilbert cell multiplier was designed with an input range and maximum output swing of 1.4 V. The neuron activation function was designed for an input range of 3 V and an output range of 2.5 V, with a maximum differentiation current output of 1 µA. A neural architecture was proposed using these components. The Neural Architecture works on a 3 V supply voltage with an output swing of 2.8 V. The Back Propagation algorithm was used for training the network, and the designed neural architecture had a convergence time of 200 ns for an analog input with 1% error. The neural network was shown to be useful for digital and analog operations, and the architecture proposed can be used with other existing architectures for neural processing. 50% image compression was achieved using the proposed Neural Architecture.

6. Conclusion


References
[1] Bose N. K. and Liang P., Neural Network Fundamentals with Graphs, Algorithms and Applications, Tata McGraw-Hill, New Delhi, 2002, ISBN 0-07-463529-8.
[2] Razavi Behzad, Design of Analog CMOS Integrated Circuits, Tata McGraw-Hill, New Delhi, 2002, ISBN 0-07-052903-5.
[3] Bernabe Linares-Barranco et al., "A Modular T-Mode Design Approach for Analog Neural Network Hardware Implementations", IEEE Journal of Solid-State Circuits, Vol. 27, No. 5, May 1992, pp. 701-713.
[4] Hussein Chible, "Analysis and Design of Analog Microelectronic Neural Network Architectures with On-Chip Supervised Learning", Ph.D. Thesis in Microelectronics, University of Genoa, 1997.
[5] Isik Aybay et al., "Classification of Neural Network Hardware", Neural Network World, IDG Co., Vol. 6, No. 1, 1996, pp. 11-29.
[6] Vincent F. Koosh, "Analog Computation and Learning in VLSI", Ph.D. Thesis, California Institute of Technology, Pasadena, California, 2001.
[7] Roy Ludvig Sigvartsen, "An Analog Neural Network with On-Chip Learning", Thesis, Department of Informatics, University of Oslo, 1994.
[8] Chun Lu, Bing-xue Shi and Lu Chen, "Hardware Implementation of an Analog Accumulator for On-chip BP Learning Neural Networks", Institute of Microelectronics, Tsinghua University, Beijing, China, 2002.
[9] Arne Heittmann, "An Analog VLSI Pulsed Neural Network for Image Segmentation using Adaptive Connection Weights", Dresden University of Technology, Department of Electrical Engineering and Information Technology, Dresden, Germany, 2000.
[10] Shai Cai-Qin and Geiger Randy L., "A 5-V CMOS Analog Multiplier", IEEE Journal of Solid-State Circuits, Vol. SC-22, No. 6, December 1987, pp. 1143-1146.
[11] Andreas G. Andreou and Kwabena A. Boahen, "Translinear Circuits in Subthreshold MOS", Analog Integrated Circuits and Signal Processing, Vol. 9, 1996, pp. 141-166.
[12] Eric A. Vittoz, "Weak Inversion in Analog and Digital Circuits", CCCD Workshop 2003, Lund, Oct. 2-3, 2003.
[13] Wai-Chi Fang et al., "A VLSI Neural Processor for Image Data Compression using Self-Organization Networks", IEEE Transactions on Neural Networks, Vol. 3, No. 3, May 1992, pp. 506-517.
[14] Chung-Yu Wu and Chiu-Hung Cheng, "A Learnable Cellular Neural Network Structure with Ratio Memory for Image Processing", IEEE Transactions on Circuits and Systems-I, Vol. 49, No. 12, December 2002, pp. 1713-1723.
[15] Valeriu Beiu, Jose M. Quintana and Maria J. Avedillo, "VLSI Implementation of Threshold Logic - A Comprehensive Survey", IEEE Transactions on Neural Networks, Vol. 14, No. 5, September 2003, pp. 1217-1243.
