
Aravind Aakash

Introduction
LNS Basics
LNS & Power dissipation
Conclusion
Power dissipation has evolved into an instrumental design
optimization objective due to the growing demand for portable
electronic equipment as well as the excessive heat
generation in high-performance systems.
The dominant component of power dissipation for well-
designed CMOS circuits is dynamic power dissipation, given by

P = a · C_L · V_dd^2 · f    (a)

where a is the activity factor, C_L is the switching
capacitance, f is the clock frequency, and V_dd is the
supply voltage.
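As a quick numeric illustration, the product in eqn (a) can be evaluated directly; the operating-point values below are arbitrary placeholders, not figures from the text.

```python
def dynamic_power(a, c_load, vdd, f):
    """Dynamic CMOS power P = a * C_L * Vdd^2 * f, per eqn (a)."""
    return a * c_load * vdd ** 2 * f

# Placeholder operating point: activity 0.2, C_L = 10 pF, Vdd = 1.2 V, f = 100 MHz
p = dynamic_power(0.2, 10e-12, 1.2, 100e6)  # ~ 2.88e-4 W
```

The quadratic dependence on V_dd is why supply-voltage scaling dominates the other knobs: halving V_dd alone cuts this figure by a factor of four.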
A variety of design techniques are commonly
employed to reduce the factors of this product without
degrading system performance.
The reduction of the various factors that
determine power dissipation is sought at all levels of
design abstraction.
Higher design abstraction levels aim to reduce the
computational load and the number of memory
accesses required to perform a certain task, and to
introduce parallelism and pipelining in the system.
At the circuit and process levels, minimal-feature-size
circuits capable of operating at minimal supply
voltages are preferred, while leakage currents and
device threshold voltages are minimized.
The application of computer arithmetic techniques,
namely the Logarithmic Number System (LNS) and the
Residue Number System (RNS), can reduce power
dissipation by minimizing particular factors.
These two classes of transformation reduce the data
activity, the strength of the operators, or the number
of operations required to perform a certain
computational task.
The particular choice of number system, that
is, the way numbers are represented in a
digital system, can reduce power dissipation.
In particular, power dissipation reduction due
to an appropriate selection of the number
system stems from:
1) the reduction of the number of
operations;
2) the reduction of the strength of
operators;
3) the reduction of the activity of data.
Power dissipation can be reduced by using
low-power arithmetic circuit architectures.
The application of LNS aims at reducing the strength of
particular operations and at reducing the switching
activity.
LNS has been employed in the design of low-power
DSP devices, such as a digital hearing aid and to
reduce power dissipation in adaptive filtering
processors.
The LNS maps a linear number X to a triplet as follows:

X → (z, s, x = log_b |X|)    (1)

where z is a single-bit flag which, when asserted, denotes that
X is zero, s is the sign of X, and b is the base of the logarithmic
representation.
The organization of an LNS word is shown in figure.
The inverse mapping of a logarithmic triple (z, s, x) to a linear
number X is defined by

X = (1 - z) · (-1)^s · b^x    (2)

Mapping (1) is of practical interest because it can simplify
certain arithmetic operations, i.e., it can reduce the
implementation complexity of several operators.
For example, due to the properties of the logarithm function, the
multiplication of two linear numbers X = (-1)^s_x · b^x and Y = (-1)^s_y · b^y is
reduced to the addition of their logarithmic images, x and y.
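Mappings (1) and (2) can be transcribed directly into software; a minimal sketch (the function names are mine, not from the text):

```python
import math

def lns_forward(X, b=2):
    """Eq. (1): map linear X to the LNS triple (z, s, x = log_b |X|)."""
    if X == 0:
        return (1, 0, 0.0)  # zero flag asserted; s and x carry no information
    s = 0 if X > 0 else 1
    return (0, s, math.log(abs(X), b))

def lns_inverse(triple, b=2):
    """Eq. (2): recover X = (1 - z) * (-1)**s * b**x."""
    z, s, x = triple
    return (1 - z) * (-1) ** s * b ** x

# Round trip: multiplying linear numbers is adding their logarithmic images.
zx, sx, x = lns_forward(3.25)
zy, sy, y = lns_forward(6.72)
product = lns_inverse((0, sx ^ sy, x + y))  # ~ 21.84
```

Note how the zero flag is essential: log_b 0 is undefined, so zero must be handled outside the logarithmic representation.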
The basic arithmetic operations and their LNS counterparts are
summarized in the table.
The zero flag z and the sign flag s are omitted for simplicity.
The table reveals that while the complexity of most operations is reduced, the
complexity of LNS addition and subtraction is significant.
In particular, LNS addition requires the computation of the
nonlinear function

log_b(1 + b^v)    (3)

which substantially limits the data word lengths for which LNS
can offer efficient VLSI implementations.
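The function in (3) is trivial to evaluate in software, which is exactly why hardware implementations must fall back on look-up tables; a sketch (the function name is mine):

```python
import math

def lns_add_correction(v, b=2):
    """Eq. (3): the nonlinear correction log_b(1 + b**v) used by LNS addition."""
    return math.log(1 + b ** v, b)

# For v = min - max <= 0 the correction lies in (0, 1] and decays toward 0,
# which is what makes table compression feasible.
c0 = lns_add_correction(0.0)     # exactly 1.0
c10 = lns_add_correction(-10.0)  # ~ 0.0014, already nearly negligible
```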
LNS arithmetic example: Let X = 3.25, Y = 6.72, and b = 2. Perform
the operations X·Y, X + Y, √X, and X^2 using the LNS. Initially the data
are transferred to the logarithmic domain, as implied by (1):

(z_x, s_x, x) = (0, 0, log_2 3.25) = (0, 0, 1.70044)    (4)

(z_y, s_y, y) = (0, 0, log_2 6.72) = (0, 0, 2.74846)    (5)
Using the LNS images (4) and (5), the required arithmetic
operations are performed as follows. The logarithmic image z
of the product Z = X·Y is given by

z = x + y = 1.70044 + 2.74846 = 4.44890    (6)
As both operands are of the same sign, i.e., s_x = 0 and s_y = 0, the
sign of the product is s_z = 0. Also, since z_x ≠ 1 and z_y ≠ 1, the
result is non-zero, i.e., z_z = 0.
To retrieve the actual result Z from (6), inverse conversion (2) is
used as follows:

Z = (1 - z_z) · (-1)^s_z · b^z = 2^4.44890 = 21.83998    (7)
By directly multiplying X and Y it is found that Z = 21.84. The
difference is due to round-off error during the conversion from
linear to the LNS domain.
The calculation of the logarithmic image z of Z = √X is
performed as follows:

z = (1/2) · x = (1/2) · 1.70044 = 0.85022    (8)

The actual result is retrieved as follows:

Z = 2^0.85022 = 1.80278    (9)
The calculation of the logarithmic image z of Z = X^2 can be
done as:

z = 2 · x = 2 · 1.70044 = 3.40088    (10)

Again, the actual result is obtained as

Z = 2^3.40088 = 10.56520    (11)
The operation of logarithmic addition is rather awkward, and its
realization is usually based on a memory look-up table
operation. The logarithmic image z of the sum Z = X + Y is

z = max(x, y) + log_2(1 + 2^(min(x, y) - max(x, y)))    (12)
  = 2.74846 + log_2(1 + 2^(-1.04802))    (13)
  = 3.31759    (14)

The actual value of the sum Z = X + Y is obtained as

Z = 2^3.31759 = 9.96999    (15)
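The whole worked example can be replayed end to end in a few lines, confirming that each LNS result matches direct computation up to round-off:

```python
import math

b = 2
x = math.log(3.25, b)  # eq. (4): 1.70044 after rounding
y = math.log(6.72, b)  # eq. (5): 2.74846 after rounding

z_mul = x + y                   # eq. (6): product
z_sqrt = x / 2                  # eq. (8): square root
z_sq = 2 * x                    # eq. (10): square
v = min(x, y) - max(x, y)
z_add = max(x, y) + math.log(1 + b ** v, b)  # eq. (12): sum

results = {
    "X*Y": b ** z_mul,      # ~ 21.84, matching eq. (7) up to round-off
    "sqrt(X)": b ** z_sqrt,  # ~ 1.80278, eq. (9)
    "X^2": b ** z_sq,        # ~ 10.5625, eq. (11) up to round-off
    "X+Y": b ** z_add,       # ~ 9.97, eq. (15) up to round-off
}
```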
The organization of an LNS adder is shown in figure 2.
It is noted that in order to implement LNS subtraction, i.e., the
addition of two quantities of opposite sign, a different memory Look-
Up Table (LUT) is required.
The LNS subtraction LUT contains samples of the function
log_b(1 - b^v).
The main complexity of an LNS processor lies in the implementation of
the LUTs that store the values of the functions log_b(1 + b^v) and
log_b(1 - b^v). A straightforward implementation is only feasible for
small word lengths.
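A software model of the straightforward single-table approach makes the size problem concrete; the table range, resolution, and names below are assumptions for illustration, not parameters from the text:

```python
import math

V_MIN, STEP = -16.0, 1 / 256  # assumed table range and resolution

# Sample log2(1 + 2**v) for v in [V_MIN, 0]; below V_MIN the correction
# is smaller than the quantization step and is treated as zero.
ADD_LUT = [math.log2(1 + 2 ** (V_MIN + i * STEP))
           for i in range(int(-V_MIN / STEP) + 1)]

def lut_lns_add(xa, xb):
    """Approximate z = max + log2(1 + 2**(min - max)) via table look-up."""
    hi, lo = max(xa, xb), min(xa, xb)
    v = lo - hi
    if v < V_MIN:
        return hi  # correction negligible at this resolution
    return hi + ADD_LUT[round((v - V_MIN) / STEP)]

# Even this modest range/step pair needs 4097 entries; every extra bit of
# precision roughly doubles the table, which is why partitioning matters.
z = lut_lns_add(math.log2(3.25), math.log2(6.72))  # 2**z ~ 9.97
```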
A different technique can be used for larger word lengths,
based on the partitioning of a LUT into an assortment of smaller
LUTs.
The particular partitioning becomes possible due to the
nonlinear behavior of the functions log_b(1 + b^v) and
log_b(1 - b^v).
Figure 3 plots these two functions, whose approximation is
required for LNS addition and subtraction.
By exploiting the different minimal word length required by
groups of function samples, the overall size of the LUT is
compressed, leading to the organization of figure 4.
In order to utilize the benefits of LNS, a conversion overhead is
required in most cases to perform the forward LNS mapping
defined by eqn (1).
Since all arithmetic operations can be performed in the
logarithmic domain, only an initial conversion is imposed;
therefore, as the amount of processing implemented in LNS
grows, the contribution of the conversion overhead to power
dissipation and to area-time complexity becomes negligible,
since it remains constant.
In particular, the LNS forward and inverse mapping overhead
can be mitigated by employing logarithmic A/D and D/A
converters instead of linear converters, followed by digital
conversion circuitry.
LNS is applicable for low-power design because it
reduces the strength of certain arithmetic operators
and the bit activity.
The operator strength reduction by LNS reduces the
switching capacitance, i.e., it reduces the C_L factor of
eqn (a).
A performance comparison of the various
implementations reveals that LNS offers accuracy
comparable to floating point, but at only a fraction of
the switched capacitance per iteration of the algorithm.
LNS can affect power dissipation in an additional
way: through the bit activity, i.e., the a factor of eqn (a).
A design parameter that is often neglected, although it plays
a key role in LNS-based processor performance, is the base
of the logarithm b, as demonstrated in figure 5.
Fig. 5. Probability P01 per bit for two's-complement and LNS encoding, for
p = -0.99.
The choice of base has a substantial impact on the average bit
activity.
Figure 5 shows the activity per bit position, i.e., the
probability of a transition from low to high in a
particular bit position, for a two's-complement word
and several LNS words, each of a different base b.
Departing from the traditional choice b = 2 can
substantially reduce the signal activity in
comparison to the two's-complement representation.
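This effect can be reproduced in simulation. The sketch below is my own setup, not the text's exact experiment: the signal model (first-order autoregressive with coefficient -0.99, mirroring the figure's p = -0.99), the 12-bit word length, the scaling, and the sign-magnitude LNS encoding are all assumptions.

```python
import math
import random

BITS = 12

def p01_per_bit(words):
    """Empirical probability of a 0 -> 1 transition at each bit position."""
    n01 = [0] * BITS
    for prev, cur in zip(words, words[1:]):
        for i in range(BITS):
            if not (prev >> i) & 1 and (cur >> i) & 1:
                n01[i] += 1
    return [c / (len(words) - 1) for c in n01]

def tc_encode(value):
    """Two's-complement encoding of an integer, masked to BITS bits."""
    return value & ((1 << BITS) - 1)

def lns_encode(value, b=2.0, frac_bits=6):
    """Sign-magnitude LNS word: sign bit plus fixed-point log_b|value|."""
    sign = 1 if value < 0 else 0
    mag = max(abs(value), 1e-6)  # clamp to avoid log of zero
    code = int(round(math.log(mag, b) * (1 << frac_bits)))
    return (sign << (BITS - 1)) | (code & ((1 << (BITS - 1)) - 1))

random.seed(0)
samples, s = [], 0.0
for _ in range(5000):
    s = -0.99 * s + random.gauss(0.0, 1.0)  # highly anticorrelated signal
    samples.append(s * 64)                  # arbitrary fixed-point scaling

tc_activity = p01_per_bit([tc_encode(int(round(v))) for v in samples])
lns_activity = p01_per_bit([lns_encode(v) for v in samples])
```

Because the sign alternates on nearly every sample, the upper two's-complement bits toggle constantly through sign extension, while the LNS word tends to confine that alternation to the single sign bit; sweeping b in `lns_encode` explores the base dependence the figure describes.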
Since multiplication-additions are important in DSP
applications, the power requirements of an LNS and a
linear fixed-point adder-multiplier have been
compared.
The results show that a two-times reduction in power
dissipation is possible for operations with word sizes
of 8 to 14 bits.
Given a sufficient number of multiplication-
additions, the LNS implementation becomes more
efficient from the low-power dissipation viewpoint,
even when a constant conversion overhead is taken
into consideration.
