Introduction
LNS Basics
LNS & Power dissipation
Conclusion
Power dissipation has evolved into a key design
optimization objective, due to the growing demand for
portable electronic equipment as well as the excessive
heat generated in high-performance systems.
The dominant component of power dissipation in well-
designed CMOS circuits is dynamic power dissipation, given by
P = a · C · Vdd^2 · f
where a is the activity factor, C is the switching
capacitance, f is the clock frequency, and Vdd is the
supply voltage.
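As a quick numerical check of the product a · C · Vdd^2 · f, the formula can be evaluated directly; the activity factor, capacitance, voltage, and frequency values below are illustrative assumptions, not figures from the text.

```python
def dynamic_power(a, c_sw, vdd, f):
    """Dynamic CMOS power: activity factor * switching capacitance
    * supply voltage squared * clock frequency."""
    return a * c_sw * vdd ** 2 * f

# Assumed example values: a = 0.2, C = 1 nF, Vdd = 1.2 V, f = 100 MHz
p = dynamic_power(0.2, 1e-9, 1.2, 100e6)
print(f"{p:.4f} W")  # prints 0.0288 W
```

The quadratic dependence on Vdd is why supply-voltage reduction is the single most effective knob among the four factors.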
A variety of design techniques are commonly
employed to reduce the factors of this product without
degrading system performance.
The reduction of the various factors that
determine power dissipation is sought at all levels of
design abstraction.
Higher design abstraction levels aim to reduce the
computational load and the number of memory
accesses required to perform a certain task, and to
introduce parallelism and pipelining in the system.
At the circuit and process levels, minimal-feature-size
circuits capable of operating at minimal supply
voltages are preferred, while leakage currents and
device threshold voltages are minimized.
The application of computer arithmetic techniques,
namely the Logarithmic Number System (LNS) and the
Residue Number System (RNS), can reduce power
dissipation by minimizing particular factors.
The LNS and RNS techniques reduce the data
activity, the strength of the operators, or the number
of operations required to perform a certain
computational task.
The particular choice of number system, that
is, the way numbers are represented in a
digital system, can reduce power dissipation.
In particular, the power dissipation reduction
due to an appropriate selection of the number
system stems from
1) The reduction of the number of
operations.
2) The reduction of the strength of
operators.
3) The reduction of the activity of data.
Power dissipation can be reduced by using
low-power arithmetic circuit architectures.
The application of the LNS aims to reduce the strength of
particular operations and to reduce the switching
activity.
LNS has been employed in the design of low-power
DSP devices, such as a digital hearing aid and to
reduce power dissipation in adaptive filtering
processors.
The LNS maps a linear number X to a triplet as follows:
X -> (z, s, x = log_b |X|) (1)
where z is a single-bit flag which, when asserted, denotes that
X is zero, s is the sign of X, and b is the base of the logarithmic
representation.
The organization of an LNS word is shown in the figure.
The inverse mapping of a logarithmic triple (z, s, x) to a linear
number X is defined by
X = (1 - z) · (-1)^s · b^x (2)
Mapping (1) is of practical interest because it can simplify
certain arithmetic operations, i.e., it can reduce the
implementation complexity of several operators.
For example, due to the properties of the logarithm function, the
multiplication of two linear numbers X = (-1)^s_x · b^x and
Y = (-1)^s_y · b^y is reduced to the addition of their logarithmic
images, x and y.
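Mappings (1) and (2) can be sketched in Python; the function names `to_lns` and `from_lns` are my own labels, not taken from the text.

```python
import math

def to_lns(value, b=2.0):
    """Mapping (1): a linear number maps to the triple (z, s, x),
    where z = 1 iff the number is zero, s is the sign bit, and
    x = log_b |value|."""
    if value == 0:
        return (1, 0, 0.0)          # x field is a don't-care when z = 1
    s = 0 if value > 0 else 1
    return (0, s, math.log(abs(value), b))

def from_lns(triple, b=2.0):
    """Inverse mapping (2): X = (1 - z) * (-1)^s * b^x."""
    z, s, x = triple
    return (1 - z) * (-1) ** s * b ** x

t = to_lns(3.25)
print(t)            # (0, 0, 1.7004...)
print(from_lns(t))  # recovers ~3.25
```

The round trip is exact up to floating-point rounding, which mirrors the round-off effects noted in the worked example below.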
The basic arithmetic operations and their LNS counterparts are
summarized in the table.
The zero flag z and the sign flag s are omitted for simplicity.
The table reveals that while the complexity of most operations is
reduced, the complexity of LNS addition and subtraction remains
significant.
In particular, LNS addition requires the computation of the
nonlinear function
log_b(1 + b^v) (3)
which substantially limits the data word lengths for which LNS
can offer efficient VLSI implementations.
LNS arithmetic example: Let X = 3.25, Y = 6.72, and b = 2. Perform
the operations X·Y, X+Y, sqrt(X), and X^2 using the LNS. Initially
the data are transferred to the logarithmic domain, as implied by (1):
(z_x, s_x, x) = (0, 0, log_2 3.25) = (0, 0, 1.70044) (4)
(z_y, s_y, y) = (0, 0, log_2 6.72) = (0, 0, 2.74846) (5)
Using the LNS images (4) and (5), the required arithmetic
operations are performed as follows. The logarithmic image z
of the product Z = X·Y is given by
z = x + y = 1.70044 + 2.74846 = 4.44890 (6)
As both operands are of the same sign, i.e., s_x = 0 and s_y = 0,
the sign of the product is s_z = 0. Also, since neither operand is
zero, i.e., z_x ≠ 1 and z_y ≠ 1, it follows that z_z = 0.
To retrieve the actual result Z from (6), inverse conversion (2) is
used as follows:
Z = (1 - z_z) · (-1)^s_z · 2^4.44890 = 21.83998 (7)
By directly multiplying X and Y it is found that Z = 21.84. The
difference is due to round-off error during the conversion from
the linear to the LNS domain.
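The multiplication step can be replayed in a few lines of Python, which also shows the round-off behaviour; the variable names are mine.

```python
import math

b = 2.0
x = math.log(3.25, b)   # image of X, as in (4)
y = math.log(6.72, b)   # image of Y, as in (5)

z = x + y               # LNS multiplication is a single addition, eq. (6)
Z = b ** z              # inverse conversion back to the linear domain, eq. (7)
print(round(Z, 5))      # ~21.84, matching the direct product 3.25 * 6.72
```

Working at full double precision, the recovered product agrees with 21.84 to machine accuracy; the 21.83998 in (7) arises only because the images were truncated to five decimals.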
The calculation of the logarithmic image z of Z = sqrt(X) is
performed as follows:
z = x / 2 = 1.70044 / 2 = 0.85022 (8)
The actual result is retrieved as follows:
Z = 2^0.85022 = 1.80278 (9)
The calculation of the logarithmic image z of Z = X^2 can be
done as:
z = 2 · x = 2 · 1.70044 = 3.40088 (10)
Again, the actual result is obtained as
Z = 2^3.40088 = 10.56520 (11)
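Steps (8)-(11) illustrate the general rule that powers and roots become scalings of the logarithmic image; a short sketch (variable names assumed, not from the text):

```python
import math

b = 2.0
x = math.log(3.25, b)       # logarithmic image of X = 3.25

# In the LNS, roots and powers of X reduce to divisions and
# multiplications of its image x:
sqrt_img = x / 2            # image of sqrt(X), as in (8)
sq_img = 2 * x              # image of X^2, as in (10)

print(round(b ** sqrt_img, 5))  # ~1.80278, cf. (9)
print(round(b ** sq_img, 4))    # ~10.5625, cf. (11)
```

For base-2 hardware these scalings by powers of two are simple shifts of the fixed-point image, which is precisely the strength reduction the text refers to.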
The operation of logarithmic addition is rather awkward, and its
realization is usually based on a memory look-up table
operation. The logarithmic image z of the sum Z = X + Y is
z = max(x, y) + log_2(1 + 2^(min(x, y) - max(x, y))) (12)
  = 2.74846 + log_2(1 + 2^(-1.04802)) (13)
  = 3.31759 (14)
The actual value of the sum Z = X + Y is obtained as
Z = 2^3.31759 = 9.96999 (15)
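Formula (12) for same-sign operands can be sketched directly; the helper name `lns_add` is mine, and the zero and sign flags are ignored for simplicity, as in the text.

```python
import math

def lns_add(x, y, b=2.0):
    """LNS addition of two same-sign operands, eq. (12):
    z = max(x, y) + log_b(1 + b**(min(x, y) - max(x, y)))."""
    hi, lo = max(x, y), min(x, y)
    return hi + math.log(1 + b ** (lo - hi), b)

x = math.log(3.25, 2)   # 1.70044, image of X
y = math.log(6.72, 2)   # 2.74846, image of Y
z = lns_add(x, y)       # matches (14) up to round-off
print(round(2 ** z, 5)) # ~9.97, the direct sum 3.25 + 6.72
```

Note that the exponent argument lo - hi is always non-positive, so the nonlinear correction term lies in (0, 1]; this is the quantity a hardware realization reads from a look-up table.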
The organization of an LNS adder is shown in figure 2.
It is noted that in order to implement LNS subtraction, i.e., the
addition of two quantities of opposite sign, a different memory Look-
Up Table (LUT) is required.
The LNS subtraction LUT contains samples of the function
log_b(1 - b^v).
The main complexity of an LNS processor is the implementation of
the LUTs for storing the values of the functions log_b(1 + b^v)
and log_b(1 - b^v). A straightforward implementation is only
feasible for small word lengths.
A different technique can be used for larger word lengths,
based on the partitioning of a LUT into an assortment of smaller
LUTs.
The particular partitioning becomes possible due to the
nonlinear behavior of the functions log_b(1 + b^v) and
log_b(1 - b^v).
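A minimal sketch of one such partitioned table for the addition function log_2(1 + 2^v), v <= 0, is given below. The partition boundary, step sizes, and cut-off point are illustrative assumptions, not values from the text.

```python
import math

def build_lut(v_min, v_max, step):
    """Sample f(v) = log2(1 + 2**v) uniformly on [v_min, v_max]."""
    n = int(round((v_max - v_min) / step)) + 1
    return [math.log2(1 + 2 ** (v_min + i * step)) for i in range(n)]

# Exploit the nonlinearity: a fine table near v = 0, where f changes
# quickly, and a coarse table for large negative v, where f is nearly
# flat.  Boundaries and steps below are assumed for illustration.
FINE = build_lut(-4.0, 0.0, 0.01)
COARSE = build_lut(-16.0, -4.0, 0.25)

def f_lookup(v):
    """Approximate log2(1 + 2**v), v <= 0, by nearest-entry lookup."""
    if v <= -16.0:
        return 0.0                        # contribution below precision
    if v > -4.0:
        return FINE[round((v + 4.0) / 0.01)]
    return COARSE[round((v + 16.0) / 0.25)]

v = -1.04802                              # the argument from eq. (13)
print(abs(f_lookup(v) - math.log2(1 + 2 ** v)))  # small lookup error
```

The two tables together hold far fewer entries than a single table sampled everywhere at the fine step, which is the area saving the partitioning is meant to deliver.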
The functions