
Department of Computer Sc. & Engg.

Seminar Report

Topic : Reversible Logic Gates & Circuits


presented on : 26th February, 2007

presented by :

Indranil Nandy

MTech(CS),2007
Roll : 06CS6010
e-mail : hi_i_am_indranil@yahoo.com

under the guidance of :

Prof. Indranil Sen Gupta

Head, School of Information Technology


Indian Institute of Technology
Contents

1. Introductory Chapter
   Introduction
   How It All Came About

2. Reversible Gates and Circuits: Detailed Analysis
   Background
   Definitions
   How to represent a reversible circuit truth table?
   How to encode a reversible circuit?
   Temporary Storage
   Representing Circuits by Graphs
   Discussion
   Some special types of reversible gates

3. A Family of Logical Fault Models for Reversible Circuits
   Introduction
   Trapped-Ion Technology
   Fault Models
   Single Missing-Gate Fault Model
   Repeated-Gate Fault Model
   Multiple Missing-Gate Fault Model
   Partial Missing-Gate Fault Model

4. Testable Reversible Gates
   Introduction
   Gate R
   Gates R1 & R2
   Reversible Gates With a Built-in Testability
   Two-pair two-rail checker
   Synthesis of the reversible logic circuits
   CMOS realization of the proposed reversible logic gates
   Estimation of Power
   Conclusion

5. Reversible Memory Elements
   Introduction
   Addressing the problem of fan-out
   Constructing a new reversible RS-latch
   Reversible clocked flip-flops
   Master-Slave flip-flop
   D flip-flop
   JK flip-flop
   T flip-flop
   Discussion

6. Quantum Search Applications
   What is quantum computing?
   Quantum Computation & Reversible Computation
   Quantum Computing: Bits and Qubits
   Quantum Search

Appendix A
   Irreversibility and Heat Generation

References

Chapter 1
Introductory Chapter

 Introduction:
In most computing tasks, the number of output bits is relatively small compared to the
number of input bits. For example, in a decision problem, the output is only one bit (yes or
no) and the input can be as large as desired. However, computational tasks in digital signal
processing, communication, computer graphics, and cryptography require that all of the
information encoded in the input be preserved in the output. Some of those tasks are
important enough to justify adding new microprocessor instructions to the HP PA-RISC
(MAX and MAX-2), Sun SPARC (VIS), PowerPC (AltiVec), IA-32 and IA-64 (MMX)
instruction sets. In particular, new bit-permutation instructions were shown to vastly improve
performance of several standard algorithms, including matrix transposition and DES, as well
as two recent cryptographic algorithms Twofish and Serpent. Bit permutations are a special
case of reversible functions, that is, functions that permute the set of possible input values.
For example, the butterfly operation (x,y) → (x+y,x−y) is reversible but is not a bit
permutation. It is a key element of Fast Fourier Transform algorithms and has been used in
application-specific Xtensa processors from Tensilica. One might expect to get further speed-
ups by adding instructions to allow computation of an arbitrary reversible function. The
problem of chaining such instructions together provides one motivation for studying
reversible computation and reversible logic circuits, that is, logic circuits composed of gates
computing reversible functions.

Reversible circuits are also interesting because the loss of information associated with
irreversibility implies energy loss. Younis and Knight showed that some reversible circuits
can be made asymptotically energy-lossless as their delay is allowed to grow arbitrarily large.
[Excerpt from "Asymptotically Zero Energy Split-Level Charge Recovery Logic" :
Younis & Knight :
Power dissipation in conventional CMOS primarily occurs during device
switching. One component of this dissipation is due to charging and discharging the gate
capacitances through conducting, but slightly resistive, devices. We note here that it is
not the charging or the discharging of the gate that is necessarily dissipative, but rather
that a small time is allocated to perform these operations. In conventional CMOS, the
time constant associated with charging the gate through a similar transistor is RC,
where R is the ON resistance of the device and C its capacitance. However, the cycle
time can be, and usually is, much longer than RC. An obvious conclusion is that energy
consumption can be reduced by spreading the transitions over the whole cycle rather
than "squeezing" it all inside one RC.
To successfully spread the transition over periods longer than RC, we insist that
two conditions apply throughout the operation of our circuit. Firstly, we forbid any
device in our circuit from turning ON while a potential difference exists across it.
Secondly, once the device is switched ON, the energy transfer through the device occurs
in a controlled and gradual manner to prevent a potential from developing across it.
These conditions place some interesting restrictions on the way we usually perform
computations. To perform a non-dissipative transition of the output, we must know the
state of the output prior to and during this output transition. Stated more clearly, to
non-dissipatively reset the state of the output we must at all times have a copy of it. The
only way out of this circle is to use reversible logic. It is this observation that is the core
of our low energy charge recovery logic.]

Currently, energy losses due to irreversibility are dwarfed by the overall power
dissipation, but this may change if power dissipation improves. In particular, reversibility is
important for nanotechnologies where switching devices with gain are difficult to build.

Finally, reversible circuits can be viewed as a special case of quantum circuits
because quantum evolution must be reversible. Classical (non-quantum) reversible gates are
subject to the same “circuit rules,” whether they operate on classical bits or quantum states. In
fact, popular universal gate libraries for quantum computation often contain as subsets
universal gate libraries for classical reversible computation. While the speed-ups which make
quantum computing attractive are not available without purely quantum gates, logic synthesis
for classical reversible circuits is a first step toward synthesis of quantum circuits. Moreover,
algorithms for quantum communications and cryptography often do not have classical
counterparts because they act on quantum states, even if their action in a given computational
basis corresponds to classical reversible functions on bit-strings.

Quantum circuits require complete reversibility. Quantum circuits and
algorithms offer additional benefits in terms of asymptotic runtime. While
purely quantum gates are necessary to achieve quantum speed-up,
variants of conventional reversible gates are also commonly used in
quantum algorithms. For example, the textbook implementation of
Grover's quantum search algorithm uses many NCT (NOT, CNOT, and
TOFFOLI) gates. Hence, efficient synthesis with such gates is an important
step toward quantum computation. Toffoli showed that the NCT gate
library is universal for the synthesis of reversible boolean circuits. This has
been recently extended to show that all even permutations can be
synthesized with no temporary storage lines, and that odd permutations
require exactly one extra line. Optimal circuits for all three-bit reversible
functions can be found in several minutes by dynamic programming. This
algorithm also synthesizes optimal four-bit circuits reasonably quickly, but
does not scale much further. More scalable constructive synthesis
algorithms tend to produce suboptimal circuits even on three bits, which
suggests iterative optimization based on local search.

 How It All Came About?


Question : What difficulties will arise when we try to build
classical computers (Turing machines) on the atomic scale?
Answer : One of the toughest problems in scaling down computers is
removing the heat that is dissipated.

Physical limitations placed on computation by heat dissipation were
studied for many years [3]. The usual digital computer program frequently performs
operations that seem to throw away information about the computer's history, leaving the
machine in a state whose immediate predecessor is ambiguous. Such operations include
erasure or overwriting of data, and entry into a portion of the program addressed by several
different transfer instructions. In other words, the typical computer is logically irreversible -
its transition function (the partial function that maps each whole-machine state onto its
successor, if the state has a successor) lacks a single-valued inverse.
Landauer [ 3 ] has posed the question of whether logical irreversibility is an
unavoidable feature of useful computers, arguing that it is, and has demonstrated the physical
and philosophical importance of this question by showing that whenever a physical computer
throws away information about its previous state it must generate a corresponding amount of
entropy. Therefore, a computer must dissipate at least kT ln 2 of energy (about 3 × 10^-21 joule
at room temperature) for each bit of information it erases or otherwise throws away.
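As a quick check on that figure, using k ≈ 1.38 × 10^-23 J/K and T ≈ 300 K:

kT ln 2 ≈ (1.38 × 10^-23 J/K)(300 K)(0.693) ≈ 2.9 × 10^-21 J,

which matches the "about 3 × 10^-21 joule" quoted above.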

In his classic 1961 paper [3, Appendix A], Rolf Landauer attempted
to apply thermodynamic reasoning to digital computers. Paralleling the
fruitful distinction in statistical physics between macroscopic and
microscopic degrees of freedom, he noted that some of a computer’s
degrees of freedom are used to encode the logical state of the
computation, and these "information bearing" degrees of freedom (IBDF)
are by design sufficiently robust that, within limits, the computer’s logical
(i.e. digital) state evolves deterministically as a function of its initial value,
regardless of small fluctuations or variations in the environment or in the
computer’s other non-information-bearing degrees of freedom (NIBDF).
While a computer as a whole (including its power supply and other parts of
its environment), may be viewed as a closed system obeying reversible
laws of motion (Hamiltonian or, more properly for a quantum system,
unitary dynamics), Landauer noted that the logical state often evolves
irreversibly, with two or more distinct logical states having a single logical
successor.
Therefore, because Hamiltonian/unitary dynamics conserves (fine-
grained) entropy, the entropy decrease of the IBDF during a logically
irreversible operation must be compensated by an equal or greater
entropy increase in the NIBDF and environment.
This is Landauer’s principle. Typically the entropy increase takes the
form of energy imported into the computer, converted to heat, and
dissipated into the environment. So, what is the solution?
At this juncture Bennett[4] showed : An irreversible computer can always be made
reversible by having it save all the information it would otherwise throw away. For example,
the machine might be given an extra tape (initially blank) on which it could record each
operation as it was being performed, in sufficient detail that the preceding state would be
uniquely determined by the present state and the last record on the tape. However, as
Landauer pointed out, this would merely postpone the problem of throwing away unwanted
information, since the tape would have to be erased before it could be reused. It is therefore
reasonable to demand of a useful reversible computer that, if it halts, it should have erased all
its intermediate results, leaving behind only the desired output and the originally furnished
input. (The machine must be allowed to save its input; otherwise it could not be reversible and
still carry out computations in which the input was not uniquely determined by the output.)
General-purpose reversible computers (Turing machines) satisfying these requirements indeed
exist, and they need not be much more complicated than the irreversible computers on which
they are patterned. Computations on a reversible computer take about twice as many steps as
on an ordinary one and may require a large amount of temporary storage.
At this time Tommaso Toffoli (1980) showed that there exists a
reversible gate which can play the role of a universal gate for reversible
circuits. Together, these two results led to the exploration of the field
of Reversible Logic Gates and Circuits.
In recent years, reversible computing system design has been attracting a
lot of attention.
Reversible computing is based on two concepts: logic reversibility and
physical reversibility. A computational operation is said to be logically
reversible if the logical state of the computational device before the
operation of the device can be determined by its state after the operation
i.e., the input of the system can be retrieved from the output obtained
from it. Irreversible erasure of a bit in a system leads to generation of
energy in the form of heat. An operation is said to be physically reversible
if it converts no energy to heat and produces no entropy. Landauer has
shown that for every bit of information lost in logic computations that are
not reversible, kT ln 2 joules of heat energy is generated, where k is
Boltzmann's constant and T the absolute temperature at which
computation is performed. The amount of energy dissipation in a system
increases in direct proportion to the number of bits that are erased during
computation. Bennett showed that kT ln 2 energy dissipation would not
occur if a computation were carried out in a reversible way. Reversible
computation in a system can be performed if the system is composed of
reversible gates.
Two conditions must be satisfied for reversible computation

The first Condition :


for any deterministic device to be reversible its input and output must
be uniquely retrievable from each other.
- this is called logical reversibility.

The second Condition :


the device can actually run backwards, i.e., each operation converts no
energy to heat and produces no entropy.
- this is called physical reversibility.
- the second law of thermodynamics then guarantees that no heat is
dissipated.

Chapter 2
Reversible Gates and Circuits: Detailed
Analysis

 Background :
In conventional (irreversible) circuit synthesis, one typically starts with a universal
gate library and some specification of a Boolean function. The goal is to find a logic circuit
that implements the Boolean function and minimizes a given cost metric, e.g., the number of
gates or the circuit depth. At a high level, reversible circuit synthesis is just a special case in
which no fanout is allowed and all gates must be reversible.

 Definitions:
Definition 1: A gate is reversible if the (Boolean) function it computes is bijective.

If arbitrary signals are allowed on the inputs, a necessary condition for reversibility is
that the gate have the same number of input and output wires. If it has k input and output
wires, it is called a k×k gate, or a gate on k wires. We will think of the mth input wire and the
mth output wire as really being the same wire. Many gates satisfying these conditions have
been examined in the literature. We will consider a specific set defined by Toffoli.

Definition 2: A k-CNOT is a (k+1)×(k+1) gate. It leaves the first k inputs
unchanged, and inverts the last iff all others are 1. The unchanged lines are
referred to as control lines.

Clearly the k-CNOT gates are all reversible. The first three of these have special
names. The 0-CNOT is just an inverter or NOT gate, and is denoted by N. It performs the
operation (x)→(x XOR 1). The 1-CNOT, which performs the operation (y,x)→(y,x XOR y)
is referred to as a Controlled-NOT, or CNOT (C). The 2-CNOT is normally called a
TOFFOLI (T) gate, and performs the operation (z,y,x)→(z,y,x XOR yz). We will also be
using another reversible gate, called the SWAP (S) gate. It is a 2×2 gate which exchanges the
inputs; that is, (x,y)→(y,x). One reason for choosing these particular gates is that they appear
often in the quantum computing context, where no physical “wires” exist, and swapping two
values requires non-trivial effort. We will be working with circuits from a given, limited gate
library. Usually, this will be the CNTS gate library, consisting of the NOT, CNOT, TOFFOLI,
and SWAP gates.
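To make these definitions concrete, the following short Python sketch (an illustration added here, not part of the original report) models the CNTS gates as functions on bit-tuples and checks that each one is a bijection, i.e., reversible. The function names and bit conventions are my own.

```python
from itertools import product

def NOT(bits):            # 0-CNOT: (x) -> (x XOR 1)
    (x,) = bits
    return (x ^ 1,)

def CNOT(bits):           # 1-CNOT: (y, x) -> (y, x XOR y)
    y, x = bits
    return (y, x ^ y)

def TOFFOLI(bits):        # 2-CNOT: (z, y, x) -> (z, y, x XOR yz)
    z, y, x = bits
    return (z, y, x ^ (y & z))

def SWAP(bits):           # (x, y) -> (y, x)
    x, y = bits
    return (y, x)

def is_reversible(gate, width):
    """A gate is reversible iff it acts as a bijection on {0,1}^width."""
    outputs = {gate(bits) for bits in product((0, 1), repeat=width)}
    return len(outputs) == 2 ** width

for name, gate, width in [("N", NOT, 1), ("C", CNOT, 2),
                          ("T", TOFFOLI, 3), ("S", SWAP, 2)]:
    print(name, "reversible:", is_reversible(gate, width))
```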

Definition 3: A well-formed reversible logic circuit is an acyclic combinational logic
circuit in which all gates are reversible, and are interconnected without fanout.

As with reversible gates, a reversible circuit has the same number of input and output
wires; again we will call a reversible circuit with n inputs an n × n circuit or a circuit on n
wires. We draw reversible circuits as arrays of horizontal lines representing wires. Gates are
represented by vertically-oriented symbols. For example, in the following Figure, we see a
reversible circuit drawn in the notation introduced by Feynman. The ⊕ symbols represent
inverters and the • symbols represent controls. A vertical line connecting a control to an inverter
means that the inverter is only applied if the wire on which the control is set carries a 1 signal. Thus,
the gates used are, from left to right, TOFFOLI, NOT, TOFFOLI, and NOT.

Figure 1

How to represent a reversible circuit truth table?


Since we will be dealing only with bijective functions, i.e., permutations, we
represent them using the cycle notation where a permutation is represented by disjoint cycles
of variables. For example, the truth table in Figure 2 is represented by (2,3)(6,7) because the
corresponding function swaps 010 (2) and 011 (3), and 110 (6) and 111 (7). The set of all
permutations of n indices is denoted S_n, so the set of bijective functions on n binary inputs
is S_{2^n}. We will call (2,3)(6,7) CNT-constructible since it can be computed by a circuit with
gates from the CNT gate library.
Let us take another example. Here is the figure of a Toffoli's Gate
and its corresponding truth table.

Figure 2: Toffoli's Gate

a, b and c are the three inputs to the gate and the corresponding
output lines are X, Y and Z respectively. The corresponding functions
computed are as follows:
X=a
Y=b
Z = c XOR ab

a b c | x y z
0 0 0 | 0 0 0
0 0 1 | 0 0 1
0 1 0 | 0 1 0
0 1 1 | 0 1 1
1 0 0 | 1 0 0
1 0 1 | 1 0 1
1 1 0 | 1 1 1
1 1 1 | 1 1 0

Truth Table for Toffoli's Gate

From the truth table of Toffoli's gate it is clear that it is a reversible
gate: there exists a one-to-one mapping from the input vectors to the
output vectors. Moreover, the output is balanced. Here the truth table can
be represented by (6,7) alone. Another example follows.
Consider the following truth table of an arbitrary reversible circuit:

a b c | x y z
0 0 0 | 0 0 0
0 0 1 | 0 0 1
0 1 0 | 1 0 0
0 1 1 | 1 1 0
1 0 0 | 0 1 0
1 0 1 | 0 1 1
1 1 0 | 1 0 1
1 1 1 | 1 1 1

Truth Table of an arbitrary Reversible Circuit

From the previous discussion, we can represent this truth table as
the following disjoint cycles of variables: (2,4)(3,6,5).
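The cycle representation can be extracted mechanically from a truth table viewed as a permutation of row indices. Below is a small illustrative Python sketch (not from the report); applied to the Toffoli permutation it prints [(6, 7)], matching the representation given above.

```python
def disjoint_cycles(perm):
    """Return the non-trivial disjoint cycles of a permutation.

    perm[i] is the output row (as an integer) for input row i; for the
    Toffoli gate on 3 wires, rows 6 and 7 are swapped and all others fixed.
    """
    seen = set()
    cycles = []
    for start in range(len(perm)):
        if start in seen or perm[start] == start:
            continue
        cycle, i = [], start
        while i not in seen:
            seen.add(i)
            cycle.append(i)
            i = perm[i]
        cycles.append(tuple(cycle))
    return cycles

# Toffoli gate on (a, b, c): c flips iff a = b = 1, so rows 6 and 7 swap.
toffoli_perm = [0, 1, 2, 3, 4, 5, 7, 6]
print(disjoint_cycles(toffoli_perm))   # [(6, 7)]
```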

How to encode a reversible circuit?
In electronic design automation, it is common to represent logic
circuits as graphs or hypergraphs. However, the regularity and ordering
intrinsic to reversible circuits facilitate a more compact array-based
representation (encoding) that is also more convenient. In conventional
circuit representations, all connections between individual gates are
enumerated, and each gate stores indices of its incident connections.
However, in a reversible circuit, one can distinguish a small number of
wires going all the way through the circuit. In our encoding of a reversible
circuit, those indices are maintained at individual gates, and the gates are
stored in an array in an arbitrarily chosen topological order. Overlaid on
this array is a redundant adjacency data structure (a graph) that allows
one to look up the neighbors of a given gate. This representation is
faithful; it is also convenient because each range in the array represents a
valid sub-circuit. However, not every valid sub-circuit is represented by a
range. In particular, any set of gates in a circuit that form an anti-chain
(with respect to the partial ordering) will be ordered, obscuring the fact
that any subset is a valid subcircuit.

As an example, let us encode the reversible circuit in the figure below:

Figure 3

To encode the circuit in the above figure, we number the wires top-down from 0 to 3.
The gates can then be written as follows: T(2,3;1)T(0,1;3)C(3;2)C(1;2)C(3;0)T(0,2;1)N(2)
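As an illustration of this array encoding (a sketch of my own, with assumed conventions: wires are numbered top-down from 0, and each gate is stored as a pair of control indices and a target index), the circuit above can be held in a simple gate array and simulated directly:

```python
# A k-CNOT is encoded as (controls, target); NOT has an empty control list.
circuit = [
    ((2, 3), 1),   # T(2,3;1)
    ((0, 1), 3),   # T(0,1;3)
    ((3,),   2),   # C(3;2)
    ((1,),   2),   # C(1;2)
    ((3,),   0),   # C(3;0)
    ((0, 2), 1),   # T(0,2;1)
    ((),     2),   # N(2)
]

def simulate(circuit, bits):
    """Apply each k-CNOT in order: flip the target iff all controls are 1."""
    bits = list(bits)
    for controls, target in circuit:
        if all(bits[c] for c in controls):
            bits[target] ^= 1
    return tuple(bits)

print(simulate(circuit, (1, 0, 1, 0)))
```

Because every k-CNOT is its own inverse, running the same gate list in reverse order undoes the computation, which is the sense in which the circuit is reversible.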

Definition 4: Let L be a (reversible) gate library. An L-circuit is a circuit composed
only of gates from L. A permutation π ∈ S_{2^n} is L-constructible if it can be
computed by an n×n L-circuit.

Figure 4a below indicates that the circuit in Figure 1 is equivalent to one
consisting of a single C gate. Pairs of circuits computing the same function are very useful,
since we can substitute one for the other.

Figure 4

On the right, we see similarly that three C gates can be used to replace the S gate
appearing in the middle circuit of Figure 4b. If allowed by the physical implementation, the S
gate may itself be replaced with a wire swap. This, however, is not possible in some forms of
quantum computation. Figure 4 therefore shows us that the C and S gates in the CNTS gate
library can be removed without losing computational power. We will still use the CNTS gate
library in synthesis to reduce gate counts and potentially speed up synthesis. This is motivated
by Figure 4, which shows how to replace four gates with one C gate, and thus up to 12 gates
with one S gate.
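To see where the count of 12 comes from: the S gate can be replaced by three C gates (Figure 4b), and each C gate can in turn stand for the four-gate sequence of Figure 4a, giving 3 × 4 = 12 gates for one S gate.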

Temporary Storage:

Figure 5

Figure 5 illustrates the meaning of “temporary storage”. The top n−k lines transfer
n−k signals, collectively designated Y, to the corresponding wires on the other side of the
circuit. The signals Y are arbitrary, in the sense that the circuit K must assume nothing about
them to make its computation. Therefore, the output on the bottom k wires must be only a
function of their input values X and not of the “ancilla” bits Y, hence the bottom output is
denoted f (X). While the signals Y must leave the circuit holding the same values they entered
it with, their values may be changed during the computation as long as they are restored by
the end. These wires usually serve as an essential workspace for computing f (X). An example
of this can be found in Figure 4a: the C gate on the right needs two wires, but if we simulate it
with two N gates and two T gates, we need a third wire. The signal applied to the top wire
emerges unaltered.

Definition 5: Let L be a reversible gate library. Then L is universal if for all k and
all permutations π ∈ S_{2^k}, there exists some l such that some L-constructible
circuit computes π using l wires of temporary storage.

The concept of universality differs in the reversible and irreversible cases in two
important ways. First, we do not allow ourselves access to constant signals during the
computation, and second, we synthesize whole permutations rather than just functions with
one output bit.

Representing Circuits by Graphs: One can model gates in a reversible
circuit by vertices, and wires by directed edges (more than one edge may
connect two vertices). If the gates are reversible, each vertex must have
as many edges entering as leaving. Since no feedback is allowed, such
graphs must be acyclic. The graph of a reversible circuit can be viewed as
a partial ordering of gates: for gates G, H in C, we say G >_C H (or
equivalently H <_C G) if there exists a non-trivial path from H to G in the
graph representing C. When C is clear from the context, we write G > H.

 Discussion:

Some of the major problems with reversible logic synthesis are:
i) Fan-outs are not allowed
ii) Feedback from gate outputs to inputs is not permitted

A logic synthesis technique using reversible gates should have
the following features:
i) Use minimum number of garbage outputs
ii) Use minimum input constants
iii) Keep the length of cascading gates minimum
iv) Use minimum number of gates

Some special types of Reversible Gates:

K-CNOT Gate:
K=0:
The 0-CNOT is just an inverter or NOT gate, and is denoted by N.
It performs the operation (x) → (x XOR 1).

K=1:
The 1-CNOT performs the operation (y,x) → (y,x XOR y).
It is referred to as a controlled-NOT, CNOT, or C.

K=2:
The 2-CNOT is normally called a TOFFOLI (T) gate.
It performs the operation (z,y,x) → (z,y,x XOR yz).

SWAP Gate :
We will also be using another reversible gate, called the SWAP (S) gate.
It is a 2×2 gate which exchanges the inputs; that is, (x,y) → (y,x).

Toffoli's Gate:
In an n×n Toffoli gate, inputs 1 to (n-1) are passed through to the outputs unchanged.
The nth output is controlled by inputs 1 to (n-1). When all the inputs from
1 to (n-1) are 1s, the nth input is inverted and passed to the output; otherwise the
original signal is passed. A 3-input, 3-output Toffoli gate is shown in Fig 6.

Figure 6: 3x3 Toffoli's Gate

The inputs ‘a’ and ‘b’ are passed as first and second output respectively.
The third output is controlled by ‘a’ and ‘b’ to invert ‘c’. The truth table has
been shown before.

Fredkin's Gate:
The Fredkin gate is shown in Fig 7. Here the input 'a' is passed
through as the first output. Inputs b and c are swapped to produce the second and third
outputs, with the swap controlled by 'a'. Thus, in a Fredkin gate, two inputs can be
swapped under the control of a third input.

Figure 7: 3x3 Fredkin's Gate

The truth table is as follows :

a b c | x y z
0 0 0 | 0 0 0
0 0 1 | 0 0 1
0 1 0 | 0 1 0
0 1 1 | 0 1 1
1 0 0 | 1 0 0
1 0 1 | 1 1 0
1 1 0 | 1 0 1
1 1 1 | 1 1 1

Truth Table for a 3x3 Fredkin's Gate

The functions computed by the outputs of Fredkin's gate can be
interpreted as follows:
X = a
Y = if a then c else b
Z = if a then b else c
Every Boolean function can be implemented using the 3x3 Fredkin gate
(as shown in Figure 8).

Figure 8
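A small Python sketch (my own illustration, following the X/Y/Z functions given above) shows the controlled swap in action and how AND, NOT, and signal duplication fall out of it when some inputs are tied to constants, which is the sense in which the Fredkin gate is universal:

```python
def fredkin(a, b, c):
    """Controlled swap: pass (b, c) through when a = 0, swap them when a = 1."""
    if a:
        return (a, c, b)
    return (a, b, c)

def AND(a, b):        # third output of FREDKIN(a, b, 0) is a AND b
    return fredkin(a, b, 0)[2]

def NOT(a):           # third output of FREDKIN(a, 0, 1) is the complement of a
    return fredkin(a, 0, 1)[2]

def COPY(a):          # FREDKIN(a, 0, 1) yields (a, a, NOT a): fan-out via a gate
    x, y, _ = fredkin(a, 0, 1)
    return (x, y)

for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a & b)
    assert NOT(a) == 1 - a
    assert COPY(a) == (a, a)
print("AND, NOT and duplication all realized with Fredkin gates")
```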

Chapter 3
A Family of Logical Fault Models for
Reversible Circuits
 Introduction:
The reversibility of computation has long been studied as a means to reduce or even
eliminate the power consumed by computation. Of particular interest is a type of reversible
computing known as quantum computing, which fundamentally changes the nature of
computation by basing it on quantum mechanics rather than classical physics.

Many physical implementations of quantum circuits have been suggested, although a
practical quantum computer has yet to be built. Quantum state representations include photon
polarization and electron spin. Such states are fragile and error-prone due to their nanoscale
dimensions, extremely low energy levels, and tendency to interact with the environment
(decoherence). Hence, it is expected that efficient testing and fault-tolerant design methods
will be essential for the successful implementation of quantum circuits. Because of the
complexity of their normal and faulty behavior modes, the testing problems posed by general
quantum circuits are very challenging.

Since all its gates are reversible, each group of gates in a reversible circuit is also
reversible. Hence, any arbitrary state can be justified on each gate. For example, if the values
(1111) must be applied to the rightmost 3-CNOT gate of Figure 9 for test purposes, there is a
unique input vector that is easily obtained by backward simulation (1101 in this case).
Furthermore, propagation of a fault effect is trivial: if a logic 1 value is replaced by a logic 0
(or vice versa) due to a fault, this will result in a different value at the output of the circuit.

Figure 9
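The backward simulation mentioned above is easy to implement because every k-CNOT gate is its own inverse: applying the gates that precede the target gate in reverse order maps the required internal values back to a unique primary input vector. The sketch below is my own illustration on a made-up 4-wire circuit prefix, not the actual circuit of Figure 9.

```python
def apply_gate(bits, controls, target):
    """Flip the target bit iff all control bits are 1 (a k-CNOT)."""
    bits = list(bits)
    if all(bits[c] for c in controls):
        bits[target] ^= 1
    return tuple(bits)

def justify(prefix, required):
    """Find the primary input vector producing `required` just after `prefix`.

    Since every k-CNOT is self-inverse, running the prefix gates in reverse
    order on the required values yields the unique primary input vector.
    """
    bits = required
    for controls, target in reversed(prefix):
        bits = apply_gate(bits, controls, target)
    return bits

# Hypothetical 4-wire circuit prefix preceding the gate under test.
prefix = [((0, 1), 2), ((2,), 3), ((1, 3), 0)]
print(justify(prefix, (1, 1, 1, 1)))   # input vector that drives all wires to 1
```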

 Trapped-Ion Technology:
Nielsen and Chuang cite four abilities of a technology as necessary for quantum
computation. The technology must:
(1) robustly represent quantum information;
(2) perform a universal set of unitary transformations;
(3) prepare accurate initial states; and
(4) measure the output results.
In the following, we will briefly review how these four issues are addressed in the trapped-ion
technology.

Qubit representation: The internal state of an ion serves as the qubit representation; the
ground state (|g>) represents |0>, while the excited state (|e0>) represents |1>. In trapped-ion
technology, ions are confined in an ion trap, i.e. between electrodes, some of which are
grounded (have a static potential) while others are driven by a fast oscillating voltage. The
Los Alamos group used Ca+ ions with 4^2S_{1/2} as the ground state and 3^2D_{5/2} as the excited
state.

Unitary transformations: These operations rotate state vectors without changing their
length, which implies reversibility. In the trapped-ion technology, ions interact with laser
pulses of certain duration and frequency. Qubits interact via a shared phonon (quantum of
vibrational energy) state. CNOT functionality has been experimentally demonstrated for the
trapped-ion technology (as well as for NMR technology).

Initialization: Trapped ions are brought into their motional ground state |00 . . . 0> using
Doppler and sideband cooling.

Measurement: The state of a single ion is determined by exciting an ion via a laser pulse and
measuring the resulting fluorescence.

Figure 10 illustrates the gate implementation in trapped-ion technology required for the
circuit in Figure 9. The circuit has four wires a, b, c and d, so four ions (qubits) are used.
Since the input vector is (1010), the qubits a and c are set to the state |1_ and the qubits b and
d are set to |0> in the beginning.

The leftmost (2-CNOT) gate is implemented by a laser pulse (or a sequence thereof)
applied to qubits a, b and c. Their interaction results in the state of qubit b being changed
from |0> to |1>. This is shown in the upper right of Figure 10. Similarly, the second (1-
CNOT) gate corresponds to a pulse that changes the state of qubit c from |1> to |0>. The third
gate does not result in a state change on the target qubit d due to a logic-0 value at one of its
control inputs (c); consequently, the third pulse does not change the state of qubit d.

Figure 10

 Fault Models
Next we introduce several fault models that are mainly motivated by the ion-trap
quantum computing technologies discussed in the preceding section. The basic assumptions
are that qubits are represented by the ion state, gates correspond to external pulses which

control the interactions of the qubits, and the gate operations are error prone. The fault models
proposed are the single missing gate fault (SMGF), the repeated-gate fault (RGF), the
multiple missing gate fault (MMGF) and the partial missing-gate fault (PMGF) models.

Single Missing-Gate Fault Model


A single missing-gate fault (SMGF) is defined as the complete disappearance of one
CNOT gate from the circuit. The physical
justification for an SMGF is that the pulse(s) implementing the gate operation is (are) short,
missing, misaligned or mistuned. Figure 11 shows the circuit from Figure 9 with an SMGF:
the first (2-CNOT) gate is missing. The resulting changes in logical values are shown in the
format "fault-free value/faulty value". It can be seen that the fault effect is observable on wires
b and c. The right part of Figure 11 suggests how the pulse corresponding to the first gate is
too weak to change the value on qubit b from |0> to |1>. The detection condition for an SMGF
is that a logic 1 value be applied to all the control inputs of the gate in question; the values on
the target input as well as the values on the wires not connected to the gate are arbitrary. The
number of possible SMGFs is equal to the number of gates in the circuit. The following theorem
characterizes SMGFs.

Figure 11

Theorem 1 (Properties of SMGFs) Consider a reversible circuit consisting of N CNOT gates.


1. There is always a complete SMGF test set of ⌈N/2⌉ or fewer vectors.
2. There are circuits for which the minimal complete SMGF test set has exactly ⌈N/2⌉
vectors.
3. By adding one extra wire and several 1-CNOT gates, every circuit can be
transformed such that the resulting circuit retains its original functionality but has a complete
SMGF test set consisting of one test vector. The transformation can be done for any test
vector, but there is a unique test vector leading to minimal overhead (number of required
extra 1-CNOT gates). SMGFs corresponding to the added gates are also covered by that test
vector.
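The detection condition stated above lends itself to a simple fault-simulation check: a vector detects the SMGF of a gate exactly when removing that gate changes the circuit's output for that vector. The following Python sketch (my own illustration, on an arbitrarily chosen example circuit) enumerates which SMGFs each input vector detects.

```python
from itertools import product

def simulate(circuit, bits, skip=None):
    """Simulate a cascade of k-CNOT gates, optionally skipping one gate (an SMGF)."""
    bits = list(bits)
    for i, (controls, target) in enumerate(circuit):
        if i == skip:
            continue
        if all(bits[c] for c in controls):
            bits[target] ^= 1
    return tuple(bits)

def detected_smgfs(circuit, vector):
    """Indices of gates whose single missing-gate fault this vector detects."""
    good = simulate(circuit, vector)
    return [i for i in range(len(circuit))
            if simulate(circuit, vector, skip=i) != good]

# A small 4-wire example circuit (chosen for illustration only).
circuit = [((0, 1), 2), ((2,), 3), ((1, 3), 0)]
for vector in product((0, 1), repeat=4):
    print(vector, "detects SMGFs of gates", detected_smgfs(circuit, vector))
```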

Repeated-Gate Fault Model


A repeated-gate fault (RGF) is an unwanted replacement of a CNOT gate by several
instances of the same gate. The physical justification for an RGF is the occurrence of long
or duplicated pulses. Figure 12 (left) shows the circuit from Figure 9 with a duplicated first
gate. It can be seen that the fault effect is identical to that of the SMGF (Figure 11). Figure 12
(right) illustrates the double transition on qubit b, first from |0> to |1>, and then back to |0>
due to a long or duplicated pulse. As a generalization, the following theorem holds:

Figure 12

Theorem 2 (Properties of RGFs) Consider an RGF that replaces a gate by k instances of the
same gate.
1. If k is even, the effect of the RGF is identical to the effect of the SMGF with respect
to the same gate.
2. If k is odd, the fault is redundant, i.e., it does not change the function of the circuit.

Multiple Missing-Gate Fault Model


This model assumes that gate operations are disturbed for several consecutive cycles,
so that several consecutive gates are missing from a circuit. An example involving two
missing gates is shown in Figure 13 (left). Note that the MMGF definition does not match our
usual understanding of a multiple fault, which implies that several distinct single faults are
present at the same time. We also restrict multiple faults to one or more consecutive gates.
Hence for the circuit from Figure 13 (left), removing the middle and the rightmost gate yields
a valid MMGF, but removing the leftmost and the rightmost gates does not. This fault model
is justified by the assumption that the laser implementing gate operations is more likely to be
disturbed for a period of time exceeding one gate operation than to be disturbed for a short
time, then perform error-free, and then be disturbed again. Clearly, SMGFs are a subset of the
MMGFs. In an N-gate circuit, the number of possible MMGFs is N(N + 1)/2, a quadratic
function of N, whereas the corresponding number of multiple SMGFs is exponential in N.
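For example, a circuit with N = 10 gates admits 10 · 11 / 2 = 55 MMGFs (one for every contiguous run of consecutive gates), whereas the number of arbitrary multiple missing-gate combinations is 2^10 − 1 = 1023.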

Figure 13
It has been proven for stuck-at faults in reversible circuits that a complete single fault
test set covers all multiple faults. This is not true for SMGFs and MMGFs, however, despite
the restriction that the missing gates must be consecutive. This is demonstrated by the two-
gate circuit fragment shown in Figure 13 (right). The SMGF corresponding to the left (3-
CNOT) gate requires the test vector (111X) for detection, where X stands for “don’t care”.
The SMGF for the second (2-CNOT) gate requires (X11X), so the optimal SMGF test set
consists of one test vector, e.g., (1110). However, this vector does not detect the MMGF
defined by removal of both gates, although it is a complete SMGF test set. The MMGF is not
redundant, as vector (011X) detects it. Furthermore, as every SMGF is also an MMGF, the
vector (111X) also must be included in any complete MMGF test set. Hence, the optimal size
of a complete MMGF test set is two. We have seen above that the size of the optimal test set
for SMGFs is one. Hence, a complete SMGF test set does not cover all MMGFs.

Partial Missing-Gate Fault Model


A partial missing-gate fault is a result of partially misaligned or mistuned gate pulses.
It turns a k-CNOT gate into a k'-CNOT gate, with k' < k. We call k − k' the order of a PMGF.
Figure 14 shows a first-order PMGF affecting the third control input of the rightmost gate,
and the weak pulse that fails to make c interact with a, b and d. An SMGF can be seen as a
0-order PMGF.

Figure 14

Theorem 3 (Properties of PMGFs)


1. A k-CNOT gate requires k test vectors to detect all first-order PMGFs, and k + 1 vectors
if the SMGF must also be detected.
2. An m-order PMGF dominates m first-order PMGFs, i.e., it is detected by any test
vector that detects one of the first-order PMGFs.

Chapter 4
Testable Reversible Gates.
 Introduction:
The currently available reversible gates can be used to implement arbitrary logic
functions; however, the testing of such circuits has not been addressed in literature. The
testing of reversible logic gates can be a problem because the levels of logic can be
significantly higher than in standard logic circuits.
Here two reversible logic gates, R1 and R2 that can be used in pairs to design testable
reversible logic circuits, are introduced. The first gate R1 is used for implementing arbitrary
functions while the second gate R2 is employed to incorporate online testability features into
the circuit. Gates R1 and R2 are shown in the figures below, along with their corresponding
truth tables. A third gate, R, is also introduced; it is used to construct a two-pair two-rail
checker. In the next two sections we will discuss these three gates.

Gate R:
The reversible gate R is shown in Fig 15 along with its truth table.
Gate R differs from gates R1 and R2 in gate width. Gate width of R1 and
R2 is 4, while that of R is 3. In other words, R is 3 input-3output reversible
gate, while R1 and R2 are 4 input-4 output reversible gates. The testability
feature is not incorporated in gate R, as it will be used as the basic block
for implementing the two-pair two-rail checker.

Gate R

From the truth table, it can be verified that the input pattern corresponding to a
particular output pattern can be uniquely determined. The new gate can be used both to invert
and duplicate a signal.

Gate R is a universal gate and its universality is shown in Fig. 16. The signal
duplication function can be obtained by setting the input b to 0, as shown in Fig. 16(b). The
EXOR function is available at the output “l” of the new gate. The AND function is obtained
by connecting the input c to 0, the output is obtained at the terminal n, as shown in Fig. 16(c).
The implementation of a NAND gate is shown in Fig. 16(d). An OR gate is realized by
connecting two new reversible gates, as shown in the Fig. 16(e).

Figure 16. (a) New reversible-logic gate R. (b) Signal duplication. (c) AND gate. (d) NAND gate. (e) OR
gate.

Gates R1 and R2 (reversible gates with built-in testability):

Here, we are introducing R1 and R2 that can be used in pairs to design testable
reversible logic circuits. The first gate R1 is used for implementing arbitrary functions while
the second gate R2 is employed to incorporate online testability features into the circuit. Gates
R1 and R2 are shown in Fig. 17(a), and the corresponding truth tables of the gates are shown
in the following tables. From the truth tables, it can be verified that the input pattern
corresponding to a particular output pattern can be uniquely determined.

Truth Tables for Gates R1 and R2

Figure 17

Gate R1 can implement all Boolean functions and during a normal operation, the
input p is set to 0. The OR and the EXOR functions can be simultaneously implemented on
R1 [Fig. 17(b)]. The EXNOR function and the NAND function are obtained by setting input c
to 1 [Fig. 17(c)]. The NOR function can be obtained by cascading two R1 gates [Fig. 17(d)].
An AND gate also requires the cascading of two gates [Fig. 17(e)]. R1 can transfer a signal at
input a to output u by setting the input c to 0.

Gate R2 is used to transfer the input values at d, e, and f to outputs x, y, and z; it also
generates the parity of the input pattern at output s. The output s of the gate is the complement
of the input r if all other inputs of the gate remain unchanged. For example, if input defr =
1000 is changed to 1001, the output of the gate will change from 1000 to 1001. During a
normal operation, the input r is set to 1.

 Reversible Gates With a Built-in Testability:


A testable logic block can be formed by cascading R1 and R2, as shown in Fig. 18. In
this configuration, gate R2 is used to check online whether there is a fault in R1 or in itself. If
R1 is fault free, its parity output q and the parity output s of R2 should be complementary;
otherwise, the presence of a fault is assumed. Thus, during a normal operation, the presence
of a fault in the logic block can be detected.

Figure 18: Testable Block

 Two-pair two-rail checker:


A two-pair two-rail checker is constructed using gate R, as shown in Fig. 19. The
checker is composed of eight R gates. The error checking functions of the two-pair two-rail
checker are as follows:

e1 = x0·y1 + y0·x1
e2 = x0·x1 + y0·y1

The fault-free checker will produce complementary outputs at e1 and e2 if the
inputs are complementary; otherwise, the outputs will be identical. The block diagram of the
testable block along with the two-pair two-rail checker is shown in Fig. 20. The outputs q and s
of one testable block form the inputs x0 and y0 of the two-pair two-rail checker, and the outputs
of another testable block form the inputs x1 and y1. Thus, the testable blocks constructed using
gates R1 and R2 are tested using the two-pair two-rail checker.
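The checking functions can be exercised exhaustively in a few lines. The Python sketch below (my own illustration) verifies the property just stated: e1 and e2 are complementary exactly when both input pairs (x0, y0) and (x1, y1) are complementary.

```python
from itertools import product

def two_rail_checker(x0, y0, x1, y1):
    """Error-checking functions of the two-pair two-rail checker."""
    e1 = (x0 & y1) | (y0 & x1)
    e2 = (x0 & x1) | (y0 & y1)
    return e1, e2

for x0, y0, x1, y1 in product((0, 1), repeat=4):
    e1, e2 = two_rail_checker(x0, y0, x1, y1)
    inputs_complementary = (x0 != y0) and (x1 != y1)
    assert (e1 != e2) == inputs_complementary
print("e1, e2 are complementary iff both input pairs are complementary")
```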

Figure 19: Two-pair two rail checker

The next figure [Fig. 20] shows the block diagram of the testable block along with the
two-pair rail checker shown in Fig. 19.

Figure 20: Testable block embedded with the two-pair two-rail checker

 Synthesis of the reversible logic circuits:

A sum of products (SOP) expression can be synthesized using reversible logic by
converting the SOP expression into a NAND–NAND form. Each testable NAND block is
implemented by cascading gates R1 and R2. Fig. 21 shows the implementation of (ab)'. If a
variable appears more than once in an expression, then a signal duplication gate will be
required. Note that fanouts are not allowed in the reversible-logic design.

Figure 21 : NAND Gate using R1 and R2

The NAND block based implementation of function F = ab+cd is given below:

1) ab = ((ab)')' = (1 XOR ab)' and cd = ((cd)')' = (1 XOR cd)'
2) ab + cd = ((ab)'.(cd)')' = ((1 XOR ab).(1 XOR cd))'
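The decomposition can be sanity-checked by exhaustive evaluation. The Python sketch below (my own illustration) confirms that the NAND–NAND form agrees with F = ab + cd on all sixteen input combinations; it mirrors the structure built from cascaded R1/R2 NAND blocks in Fig 22.

```python
from itertools import product

def nand(x, y):
    """NAND, the function realized by each testable R1/R2 block."""
    return 1 - (x & y)

def F_sop(a, b, c, d):          # original sum-of-products form
    return (a & b) | (c & d)

def F_nand_nand(a, b, c, d):    # NAND-NAND form: ((ab)'.(cd)')'
    return nand(nand(a, b), nand(c, d))

assert all(F_sop(*v) == F_nand_nand(*v) for v in product((0, 1), repeat=4))
print("NAND-NAND realization matches F = ab + cd on all 16 inputs")
```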

The implementation of the function is shown in Fig 22. The number of signal
duplication blocks used instead of fan-outs depends on the number of times a variable
appears in the function. For example, the number of blocks required to implement the fan-out
of a variable that appears 6 times as "a" and 3 times as its complement "a'" is shown in Fig 23.
Several MCNC Benchmark functions were implemented using the above approach. The
testable gate count, garbage outputs and number of checkers are shown in the following
Table.

Figure 22: Reversible NAND block implementation for the function ab+cd

Figure 23: Signal Duplication

 CMOS realization of the proposed reversible logic gates:
The transistor-level designs of the three reversible logic gates are realized in CMOS.
Fig. 24 shows the transistor level design of the gate R. Since an EXOR functionality is needed
for implementing the output functions of the reversible gates, an efficient four-transistor
EXOR function design has been chosen to implement the transistor level design.

Figure 24: CMOS implementation of gate R

Gate R was implemented using 12 transistors. The implementation of gate R1 is
shown in Fig. 25. It took 26 transistors to implement the design. Fig. 26 shows the four inputs
and one output of the reversible gate R2. As the first three outputs of the gate are just direct
wire connections from the inputs, the inputs d, e, and f of the gate can be used as the outputs.

Figure 25: CMOS implementation of gate R1

Figure 26: CMOS implementation of gate R2

All the gates (R, R1, and R2) can be combined to form a reversible cell, which reduces the
number of transistors by a count of four, because the function a XOR c needed by gate R and
gate R1 is shared. Thus, the cell can be implemented with a total of 46 transistors (Fig. 27).

Figure 27: Reversible Cell

 Estimation of Power:

The implementation of the full adder using the reversible gate R has been compared
with that implemented using the Fredkin Gate. The design implemented using the proposed
gate is found to be more efficient; it requires fewer gates, fewer garbage outputs, and
consumes less power. The power analysis has been made using Xilinx ISE version 6.1. The
full adder with propagate was implemented at the behavioral level [VHSIC (very high speed
integrated circuit) hardware description language (VHDL)] using the Fredkin gate, the
proposed gates, and the Toffoli gate. The following Table shows the comparison of the
designs using these gates.

The following Table shows the number of gates, garbage outputs,
and the power estimation of several benchmark circuits implemented in VHDL using
reversible gate R, testable gate R1/R2, and the Fredkin and Toffoli gates.

 Conclusion:
Three new reversible logic gates have been shown. Two of these gates can be paired
such that any function implemented using the gate pair will be online testable. A fault in any
of these gates will produce a single bit output error. In order to ensure that errors can be
detected online, two rail checkers are incorporated in the design. Such checkers are also
constructed from the proposed reversible logic gates.
A CMOS implementation of the reversible gates has also been shown. The most important
requirement of the reversible gate-based design is the reduction of garbage outputs rather than
the actual number of gates.

Chapter 5
Reversible Memory Elements
 Introduction:
Despite the great potential of reversible logic and
endorsements from leaders in the field, little to no work has been done
in the area of sequential reversible logic. Many researchers are
investigating combinational logic synthesis techniques but seem to have
overlooked the necessity of memory elements if one is planning to
implement many of our day-to-day circuits in reversible technologies.
Here, we review the small amount of existing work in the area and
a reversible implementation of the RS-latch. Then we will present flip-
flop implementations based on this latch.

The following Table lists the behavior of each of the most commonly
used reversible gates. The behavior describes how each input becomes
transformed to produce the outputs when the gate is applied. For instance,
if the input values for x, y and z are 110 then the output values will be x,
y, xy XOR z, or 111 after the Toffoli gate is applied.

The symbols used here for the gates are shown in the figure below:

Figure 28

A number of researchers, including Toffoli and Frank, discuss the
potential for sequential reversible logic, but do not present any structures
for its realization. Fredkin and Toffoli appear to be the first to suggest a
conservative logic sequential element in the form of a JK̄ flip-flop, and
Picton suggests a reversible RS-latch. He uses the basic Fredkin gate to
build this latch, as shown in Figure 29.

Figure 29

Here we will concentrate on the use of a basic memory element
such as the RS-latch, as it is the traditional building block for the clocked
flip-flop structures that are our goal. The problem with Picton's model is
that the concept of reversible logic is predicated on the fact that not only
can one not allow the destruction of data (e.g. a signal value), but one can
not allow the arbitrary creation of data. This means that fan-out is not
permitted. As shown in Figure 29, Picton's latch requires two instances of
fan-out. We address this issue further on.

 Addressing the problem of fan-out:

As pointed out in the previous section, fan-out is required for
Picton's model of an RS-latch built using two Fredkin gates. An examination
of the functionality of this latch leads to a relatively simple, almost
cosmetic solution: instead of requiring fan-out from the Q and Q' signals to
both the outputs and back to the inputs, why not use the outputs of the
Fredkin gate for one of these uses? A solution demonstrating this is shown
in Figure 30.

Figure 30

State tables for the two reversible latches illustrate that the
behaviours are the same, as shown in the following table.

 Constructing a new reversible RS-latch:

One approach to determining reversible memory elements is to take
existing memory elements, built from traditional logic, and replace the
traditional components with reversible components. For instance, the NOR
gate may be used in the design of the RS-latch. The NOR gate is clearly
not reversible; one problem is that there is only one output and two inputs.
The following Table (A) shows the additions required to create a reversible
equivalent of the NOR gate’s behaviour.

Three outputs are required, and so an additional input must be
added. Note that we set this input to 0 and so this is technically only half
of the function’s truth table; however, we are only interested in the
behaviour defined for these inputs. If we rearrange the table as shown in
the above Table (B) we achieve a reversible function which happens to
match the behaviour of the Toffoli gate. We can thus implement the RS-
latch as shown in Figure 31.

Figure 31

Another approach to determining reversible memory elements is to
define the desired element's state table, manipulate it as needed in order to
get a reversible function, and then perform reversible logic synthesis
techniques. For instance, such a state table might begin as shown in the
following Table.

After creating the state table the next stage is to create a cascade of
reversible gates that effects the required transitions. However, in the case
above the cascade quickly becomes quite involved, and so the designs
from Figures 30 and 31 are a more efficient choice.

 Reversible clocked flip-flops:

According to Sasao there are four standard flip-flop designs. Although one can
construct more complex latches with characteristics similar to those of the flip-flops, we
argue that the usefulness of a non-clocked memory element is limited. Additionally, we
investigate only edge-triggered flip-flops, since clocked latches may be difficult to use due to
the restrictions inherent in the period and width of the clock pulse.

1) Master-Slave flip-flops: Figure 32 illustrates the standard construction, using
traditional logic, of a master-slave flip-flop. Since previous sections have determined
structures for reversible RS-latches, if we can also determine reversible structures
with behaviours equivalent to the other elements in the master-slave flip-flop then
these can be combined to create a reversible implementation. A Toffoli gate can be
used for the AND structures, leaving only the problem of fan-out. In Figure 32 (C) the
fan-out from the clock is generated through the use of a Fredkin gate, which gives two
identical outputs and one output that is the negation of the other two. This is useful since we
also need a negated clock output, which is then fanned-out to the two inputs to the second RS-
latch. Another issue lies in the question of how to generate the clock signal for a reversible
circuit.
2) D flip-flop: An extension of the master-slave flip-flop is to disallow S = R. One way
to do this is to use a single input D which is connected directly to the S input, while
its complement D' is connected to the R input. The behaviour is then as shown in Figure 33 (A).
Figure 33 also shows both the traditional logic implementation of the D flip-flop and
an equivalent reversible implementation.

Figure 32

Figure 33

3) JK flip-flop: In many cases it is desirable to be able to retain the
value of Q in the memory element, or even to negate it. The JK flip-
flop allows this by making use of additional feedback. The behaviour
is then as defined in Figure 34 (A). Figure 34 also shows a traditional
implementation of this flip-flop and an equivalent reversible
implementation.

Figure 34
4) T flip-flop: Again, a modification to the inputs of the JK flip-flop is
possible in order to build a slightly different flip-flop. If we let T = J =
K then the flip-flop behaviour becomes as given in Figure 35 (A).
Figure 35 also shows a traditional implementation of this flip-flop and
an equivalent reversible implementation.

Figure 35
 Discussion:
There is some argument as to whether a flip-flop such as the JK flip-
flop shown in Figure 34 (C) can be considered reversible. Certainly it is
constructed entirely of reversible elements; however, by introducing not
one but two stages of feedback is the final structure truly reversible?
Further examination is required here in order to satisfy the doubts raised
by some researchers.

Chapter 6
Quantum Search Applications
 What is Quantum Computing?
In quantum computers we exploit quantum effects to compute in
ways that are faster or more efficient than, or even impossible on,
conventional computers. Quantum computers use a specific physical
implementation to gain a computational advantage over conventional
computers. Properties called superposition and entanglement may, in
some cases, allow an exponential amount of parallelism. Also, special
purpose machines like quantum cryptographic devices use entanglement
and other peculiarities like quantum uncertainty.
Quantum computing combines quantum mechanics, information
theory, and aspects of computer science [Nielsen, M. A. & Chuang, I. L.
2000]. The field is a relatively new one that promises secure data transfer,
dramatic computing speed increases, and may take component
miniaturization to its fundamental limit.

 Quantum Computation & Reversible Computation:

⇒ Quantum Computing is a coming revolution – after recent demonstrations of
quantum computers, there is no doubt about this fact. They are reversible.
Top world universities, companies and government institutions are in a race.

⇒ Reversible computing is the step-by-step way of scaling current computer
technologies and is the path to future computing technologies, which all happen
to use reversible logic.
– DNA
– biomolecular
– quantum dot
– NMR
– nano-switches

⇒ In addition, Reversible Computing will become mandatory in any technology,
because of the necessity to decrease power consumption.

 Quantum Computing : Bits and Qubits

Quantum computers perform operations on qubits which are analogous to
conventional bits (see below) but they have an additional property in that they can be in a
superposition. A quantum register with 3 qubits can store 8 numbers in superposition
simultaneously [Barenco, A. Ekert, A. Sanpera, A. & Machiavello, C. 1996] and a 250 qubit
register holds more numbers (superposed) than there are atoms in the universe! [Deutsch, D.
& Ekert, A. 1998].

The amount of information stored during the "computational phase" is essentially
infinite - it's just that we can't get at it. The inaccessibility of the information is related to
quantum measurement: when we attempt to read out a superposition state holding many
values the state collapses and we get only one value (the rest get lost). This is tantalizing but,
in some cases, can be made to work to our computational advantage.

Single Qubits:
Classical computers use two discrete states (e.g. states of charge of a capacitor) to
represent a unit of information; this state is called a binary digit (or bit for short). A bit has the
following two values: 0 and 1.
There is no intermediate state between them, i.e. the value of the bit cannot be in a
superposition.

Quantum bits, or qubits, can on the other hand be in a state "between" 0 and 1, but
only during the computational phase of a quantum operation. When measured, a qubit can
become either: |0> or |1> i.e. we readout 0 or 1. This is the same as saying a spin particle can
be in a superposition state but, when measured, it shows only one value.

The |> symbolic notation is part of the Dirac notation. In terms of the above it
essentially means the same thing as 0 and 1 (this is explained a little further on), just like a
classical bit. Generally, a qubit’s state during the computational phase is represented by a
linear combination of states otherwise called a superposition state.
α|0> + β|1>
Here α and β are the probability amplitudes. They can be used to calculate the
probabilities of the system collapsing into |0> or |1> following a measurement or readout
operation. There may be, say, a 25% chance that a 0 is measured and a 75% chance that a 1 is
measured. The percentages must add to 100%. In terms of their representation, the amplitudes
must satisfy:
|α|² + |β|² = 1
This is the same thing as saying the probabilities add to 100%. Once the qubit is
measured, it will remain in that state if the same measurement is repeated, provided the system
remains closed between measurements.
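
To make the role of the amplitudes concrete, the following is a minimal sketch (in Python with
NumPy; it is illustrative only and not part of any circuit in this report) that stores a qubit as the
amplitude pair (α, β), checks the normalization condition, and samples a measurement outcome
according to the probabilities |α|² and |β|²:

    import numpy as np

    # A single qubit as a 2-component complex vector (alpha, beta),
    # with |alpha|^2 + |beta|^2 = 1.
    qubit = np.array([0.5, np.sqrt(0.75)], dtype=complex)  # 25% for |0>, 75% for |1>

    # Probabilities of reading out 0 or 1 (Born rule).
    probs = np.abs(qubit) ** 2
    assert np.isclose(probs.sum(), 1.0), "amplitudes must be normalized"

    # Simulate a single measurement: the state collapses to |0> or |1>.
    outcome = np.random.choice([0, 1], p=probs)
    print("P(0) =", probs[0], " P(1) =", probs[1], " measured:", outcome)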

 Quantum Search:
Quantum computation is necessarily reversible, and quantum circuits generalize their
reversible counterparts in the classical domain. Instead of wires, information is stored on
qubits, whose states we write as |0> and |1> instead of 0 and 1. There is an added
complexity—a qubit can be in a superposition state that combines |0> and |1>. Specifically,
|0> and |1> are thought of as vectors of the computational basis, and the value of a qubit can
be any unit vector in the space they span. The scenario is similar when considering many
qubits at once: the possible configurations of the corresponding classical system (bit-strings)
are now the computational basis, and any unit vector in the linear space they span is a valid
configuration of the quantum system. Just as the classical configurations of the circuit persist
as basis vectors of the space of quantum configurations, so too classical reversible gates
persist in the quantum context. Non-classical gates are also allowed; in fact, any (invertible)
norm-preserving linear operator is allowed as a quantum gate. However, quantum gate libraries
often have very few non-classical gates. An important example of a non-classical gate (and the
only one used here) is the Hadamard gate H. It operates on one qubit and is defined as follows:
H|0> = (1/√2)(|0>+|1>), and H|1> = (1/√2)(|0>−|1>). Note that
because H is linear, giving the images of the computational basis elements defines it
completely.
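
As a quick illustration, the Hadamard gate can also be written as the 2×2 unitary matrix
(1/√2)[[1,1],[1,−1]]. The following minimal sketch (Python/NumPy, assuming the column-vector
encodings |0> = (1,0) and |1> = (0,1)) reproduces the two superpositions defined above and the
50/50 measurement statistics discussed below:

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # Hadamard gate as a unitary matrix.
    H = np.array([[1,  1],
                  [1, -1]], dtype=complex) / np.sqrt(2)

    print("H|0> =", H @ ket0)   # (1/sqrt(2)) (|0> + |1>)
    print("H|1> =", H @ ket1)   # (1/sqrt(2)) (|0> - |1>)

    # Measuring H|0> in the computational basis gives 0 or 1 with probability 1/2 each.
    print("probabilities:", np.abs(H @ ket0) ** 2)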

During the course of a computation, the quantum state can be any unit vector in the linear
space spanned by the computational basis. However, a serious limitation is imposed by
quantum measurement, performed after a quantum circuit is executed. A measurement non-
deterministically collapses the state onto some vector in a basis corresponding to the
measurement being performed. The probabilities of outcomes depend on the measured state
— basis vectors [nearly] orthogonal to the measured state are least likely to appear as
outcomes of measurement. If H|0> were measured in the computational basis, it would be
seen as |0> half the time, and |1> the other half.
Despite this limitation, quantum circuits have significantly more computational
power than classical circuits. In this work, we consider Grover’s search algorithm, which is
faster than any known non-quantum algorithm for the same problem. Figure 28 outlines a
possible implementation of Grover’s algorithm. It begins by creating a balanced superposition
of 2^n n-qubit states which correspond to the indexes of the items being searched. These index
states are then repeatedly transformed using a Grover operator circuit, which incorporates the
search criteria in the form of a search-specific predicate f (x). This circuit systematically
amplifies the search indexes that satisfy f (x) = 1 until a final measurement identifies them
with high probability.

Figure 28
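
The amplitude amplification performed by the Grover operator can be simulated directly on
state vectors. The following sketch (illustrative Python/NumPy code; the choice of n = 2 index
qubits with the single marked index 3 is a hypothetical example, not the circuit of Figure 29)
applies the phase oracle and the "inversion about the mean" diffusion step; with one marked
item out of four, a single iteration already concentrates essentially all of the probability on the
marked index:

    import numpy as np

    n = 2                      # number of index qubits (hypothetical example)
    N = 2 ** n
    marked = {3}               # indices x with f(x) = 1 (hypothetical search criterion)

    # Balanced superposition over all 2^n basis states.
    state = np.full(N, 1 / np.sqrt(N))

    def oracle(v):
        # Phase oracle: |x> -> (-1)^f(x) |x>.
        out = v.copy()
        for x in marked:
            out[x] = -out[x]
        return out

    def diffusion(v):
        # Grover diffusion operator: inversion about the mean amplitude.
        return 2 * v.mean() - v

    iterations = int(np.floor(np.pi / 4 * np.sqrt(N / len(marked))))
    for _ in range(max(iterations, 1)):
        state = diffusion(oracle(state))

    # With N = 4 and one marked item, one iteration yields probability ~1 for index 3.
    print("measurement probabilities:", np.round(np.abs(state) ** 2, 3))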

A key component of the Grover operator is a so-called "oracle" circuit that
implements a search-specific predicate f(x). This circuit transforms an arbitrary basis state |x>
to the state (−1)^f(x) |x>. The oracle is followed by (i) several Hadamard gates, (ii) a
subcircuit which flips the sign of all computational basis states other than |0>, and (iii) more
Hadamard gates. A sample Grover-operator circuit for a search on 2 qubits is shown in Figure
Hadamard gates. A sample Grover-operator circuit for a search on 2 qubits is shown in Figure
29 and uses one qubit of temporary storage. The search space here is {0,1,2,3}, and the
desired indices are 0 and 3. The oracle circuit is highlighted by a dashed line. While the
portion following the oracle is fixed, the oracle may vary depending on the search criterion.
Unfortunately, most works on Grover’s algorithm do not address the synthesis of oracle
circuits and their complexity. According to Bettelli et al., this is a major obstacle for
automatic compilation of high-level quantum programs, and little help is available.

Figure 29

Lemma: With one temporary storage qubit, the problem of synthesizing a quantum circuit
that transforms computational basis states |x> to (−1)^f(x) |x> can be reduced to a problem
in the synthesis of classical reversible circuits.

Proof: Define the permutation πf by πf(x,y) = (x, y XOR f(x)), and define a unitary operator
Uf by letting it permute the states of the computational basis according to πf. The additional
qubit is initialized to |−> = H|1>, so that Uf |x,−> = (−1)^f(x) |x,−>. If we now ignore the
value of the last qubit, the system is in the state (−1)^f(x) |x>, which is exactly the state needed
for Grover's algorithm. Since a quantum operator is completely determined by its behavior on a
given computational basis, any circuit implementing πf implements Uf. As reversible gates
may be implemented with quantum technology, we can synthesize Uf as a classical reversible
logic circuit.
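
Assuming Uf permutes the computational basis exactly as πf prescribes, the phase-kickback
argument in the proof can be checked numerically. The following sketch (illustrative
Python/NumPy code; the 2-input predicate with f(0) = f(3) = 1 is just the example of Figure 29)
builds Uf as a permutation matrix, applies it to |x>|−>, and verifies that the result is
(−1)^f(x) |x>|−>:

    import numpy as np

    n = 2                                     # number of index qubits

    def f(x):
        # Hypothetical predicate: desired indices 0 and 3, as in Figure 29.
        return 1 if x in (0, 3) else 0

    dim = 2 ** (n + 1)                        # n index qubits + 1 storage qubit
    Uf = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in (0, 1):
            # pi_f maps the basis state (x, y) to (x, y XOR f(x)).
            Uf[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1

    minus = np.array([1, -1]) / np.sqrt(2)    # |-> = H|1>

    for x in range(2 ** n):
        ket_x = np.zeros(2 ** n)
        ket_x[x] = 1
        before = np.kron(ket_x, minus)        # |x>|->
        after = Uf @ before
        # Phase kickback: expect (-1)^f(x) |x>|->.
        assert np.allclose(after, (-1) ** f(x) * before)
        print("x =", x, " f(x) =", f(x), " sign =", (-1) ** f(x))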

Quantum computers implemented so far are severely limited by the number of
simultaneously available qubits. While n qubits are necessary for Grover's algorithm, one
should try to minimize the number of additional temporary storage qubits. One such qubit is
required by the above Lemma to allow classical reversible circuits to alter the phase of
quantum states.

Corollary: For permutations πf(x,y) = (x, y XOR f(x)) such that {x : f(x) = 1} has
even cardinality, no more temporary storage is necessary. For the remaining πf, we need an
additional qubit of temporary storage.
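
The parity of πf explains the even-cardinality condition: πf swaps (x,0) with (x,1) exactly for
those x with f(x) = 1, so it is an even permutation precisely when {x : f(x) = 1} has even
cardinality, in line with the fact that on four or more wires every circuit over the C, N and T
gates realizes an even permutation. The following sketch (an illustrative Python check, with
hypothetical example predicates) computes this parity from the cycle structure of πf:

    def parity_of_pi_f(f, n):
        # Parity (0 = even, 1 = odd) of pi_f(x, y) = (x, y XOR f(x)) on n + 1 bits,
        # computed from its cycle decomposition.
        size = 2 ** (n + 1)
        perm = [(x << 1) | (y ^ f(x)) for x in range(2 ** n) for y in (0, 1)]
        seen = [False] * size
        transpositions = 0
        for start in range(size):
            if seen[start]:
                continue
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            transpositions += length - 1   # a cycle of length L is L - 1 transpositions
        return transpositions % 2

    # Hypothetical example predicates:
    f_even = lambda x: 1 if x in (0, 3) else 0   # |{x : f(x) = 1}| = 2 -> even permutation
    f_odd = lambda x: 1 if x == 5 else 0         # |{x : f(x) = 1}| = 1 -> odd permutation
    print(parity_of_pi_f(f_even, 2))             # prints 0
    print(parity_of_pi_f(f_odd, 3))              # prints 1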

The following Table gives the optimal circuit sizes of functions πf corresponding to
3-input 1-output functions f (“3+1 oracles”) which can be synthesized on four wires. These
circuits are significantly smaller than many optimal circuits on four wires. This is not
surprising, as they perform less computation.

In Grover oracle circuits, the main input lines preserve their input values and only the
temporary storage lines can change their values. Therefore, Travaglione et al. studied circuits
where some lines cannot be changed even at intermediate stages of computation. In their
terminology, a circuit with k lines that we are allowed to modify and an arbitrary number of
read-only lines is called a k-bit ROM-based circuit. They show how to compute the permutation
πf arising from a Boolean function f using a 1-bit quantum ROM-based circuit, and prove that if
only classical gates are allowed, two writable bits are necessary. Two bits are sufficient if the
CNT gate library is used. The synthesis algorithms of Travaglione et al. rely on XOR sum-of-
products decompositions of f .
We outline their method in a proof of the following result.

Lemma: There exists a reversible 2-bit ROM-based CNT-circuit computing (x,a,b) → (x,a,b
XOR f(x)), where x is a k-bit input. If a function's XOR decomposition consists of only one
term, let m be the number of literals appearing (without complementation). If m > 0, then
3·2^(m−1) − 2 gates are required.

Proof: Assume we are given an XOR sum-of-products decomposition of f. Then it suffices to
know how to transform (x,a,b) → (x,a,b XOR p) for an arbitrary product p of uncomplemented
literals, because we can then add the terms of the XOR decomposition term by term. So,
without loss of generality, let p = x1 . . . xm. Denote by T(a,b;c) a T gate with controls on
a,b and inverter on c. Similarly, denote by C(a;b) a C gate with control on a and inverter on b.
Number the ROM wires 1 . . . k, and the non-ROM wires k+1 and k+2. Let us first suppose that
there is at least one uncomplemented literal, and put a C(1;k+2) on the circuit; note that
C(1;k+2) applied to the input (x,a,b) gives (x,a,b XOR x1). We will write this as
C(1;k+2) : (x,a,b) → (x,a,b XOR x1), and denote this operation by W1. Then, we define the
circuit W'2 as the sequence of gates T(2,k+2;k+1) W1 T(2,k+2;k+1) W1, and one can check
that W'2 : (x,a,b) → (x, a XOR x1x2, b). We define W2 by exchanging the wires k+1 and k+2;
clearly W2 : (x,a,b) → (x,a,b XOR x1x2). In general, given a circuit
Wl : (x,a,b) → (x,a,b XOR x1 . . . xl), we define W'l+1 = T(l+1,k+2;k+1) Wl T(l+1,k+2;k+1) Wl;
one can check that W'l+1 : (x,a,b) → (x, a XOR x1 . . . xl+1, b). Define Wl+1 by exchanging the
wires k+1 and k+2; then clearly Wl+1 : (x,a,b) → (x,a,b XOR x1 . . . xl+1). By induction, we
can handle a product with as many uncomplemented literals as we like.
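
The recursive construction above is easy to check on classical bit vectors. The following sketch
(an illustrative Python rendering of the construction; the wire numbering and data structures are
only bookkeeping choices, not from the original) builds Wm for the product p = x1 . . . xm as a
list of C and T gates, verifies the mapping (x,a,b) → (x,a,b XOR x1 . . . xm) on all inputs, and
confirms the gate count 3·2^(m−1) − 2:

    from itertools import product

    def build_W(m, k):
        # Gate list for Wm : (x, a, b) -> (x, a, b XOR x1...xm).
        # Wires 0 .. k-1 are the read-only ROM wires x1 .. xk (0-indexed here),
        # wire A = k holds a and wire B = k + 1 holds b.  A gate is (controls, target).
        A, B = k, k + 1

        def swap_AB(gates):
            # Relabel wires A and B in a gate list (the "exchange wires" step).
            relabel = {A: B, B: A}
            return [(tuple(relabel.get(c, c) for c in ctrls), relabel.get(t, t))
                    for ctrls, t in gates]

        W = [((0,), B)]                    # W1 = C(1; k+2): b ^= x1
        for l in range(1, m):
            toffoli = ((l, B), A)          # T(l+1, k+2; k+1)
            W_prime = [toffoli] + W + [toffoli] + W
            W = swap_AB(W_prime)           # exchange wires k+1 and k+2
        return W

    def run(gates, bits):
        # Apply the gate list to a tuple of classical wire values.
        wires = list(bits)
        for ctrls, target in gates:
            if all(wires[c] for c in ctrls):
                wires[target] ^= 1
        return wires

    k, m = 4, 3
    gates = build_W(m, k)
    assert len(gates) == 3 * 2 ** (m - 1) - 2      # predicted gate count

    for bits in product((0, 1), repeat=k + 2):
        x, a, b = bits[:k], bits[k], bits[k + 1]
        p = all(x[:m])                             # p = x1 x2 ... xm
        assert run(gates, bits) == list(x) + [a, b ^ p]
    print("verified", len(gates), "gates for m =", m)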

(Appendix A)
Irreversibility and Heat Generation
The search for faster and more compact computing circuits leads directly to the
question: What are the ultimate physical limitations on the progress in this direction? In
practice the limitations are likely to be set by the need for access to each logical element. At
this time, however, it is still hard to understand what physical requirements this puts on the
degrees of freedom which bear information. The existence of a storage medium as compact as
the genetic one indicates that one can go very far in the direction of compactness, at least if
we are prepared to make sacrifices in the way of speed and random access.
Without considering the question of access, however, we can show, or at least very
strongly suggest, that information processing is inevitably accompanied by a certain
minimum amount of heat generation. In a general way this is not surprising. Computing, like
all processes proceeding at a finite rate, must involve some dissipation. Our arguments,
however, are more basic than this, and show that there is a minimum heat generation,
independent of the rate of the process. Naturally the amount of heat generation involved is
many orders of magnitude smaller than the heat dissipation in any practically conceivable
device. The relevant point, however, is that the dissipation has a real function and is not just
an unnecessary nuisance. The much larger amounts of dissipation in practical devices may be
serving the same function.
The conclusion about dissipation can be anticipated in several ways, and our major
contribution will be a tightening of the concepts involved, in a fashion which will give some
insight into the physical requirements for logical devices. The simplest way of anticipating
our conclusion is to note that a binary device must have at least one degree of freedom
associated with the information. Classically a degree of freedom is associated with kT of
thermal energy. Any switching signals passing between devices must therefore have this much
energy to override the noise. This argument does not make it clear that the signal energy must
actually be dissipated.
An alternative way of anticipating the conclusions is to refer to the arguments by
Brillouin and earlier authors, as summarized by Brillouin in his book, Science and
Information Theory, to the effect that the measurement process requires a dissipation of the
order of kT. The computing process, where the setting of various elements depends upon the
setting of other elements at previous times, is closely akin to a measurement. It is difficult,
however, to argue out this connection in a more exact fashion. Furthermore, the arguments
concerning the measurement process are based on the analysis of specific models, and the
specific models involved in the measurement analysis are rather far from the kind of mechanisms
involved in data processing.
In short, it can be said: the information-bearing degrees of freedom of a computer
interact with the thermal reservoir represented by the remaining degrees of freedom. This
interaction plays two roles. First of all, it acts as a sink for the energy dissipation involved in
the computation. This energy dissipation has an unavoidable minimum arising from the fact
that the computer performs irreversible operations. Secondly, the interaction acts as a source
of noise causing errors. In particular, thermal fluctuations give a supposedly switched element
a small probability of remaining in its initial state, even after the switching force has been
applied for a long time. It is shown, in terms of two simple models, that this source of error is
dominated by one of two other error sources:
1) Incomplete switching due to inadequate time allowed for switching.
2) Decay of stored information due to thermal fluctuations.

It is, of course, apparent that both the thermal noise and the
requirements for energy dissipation are on a scale which is entirely
negligible in present-day computer components. Actual devices which are
far from minimal in size and operate at high speeds will be likely to require
a much larger energy dissipation to serve the purpose of erasing the
unnecessary details of the computer's past history.

REFERENCES

1. Asymptotically Zero Energy Split-Level Charge Recovery Logic : Younis & Knight
2. On the Decrease of Entropy in a Thermodynamic System by the Intervention of
Intelligent Beings, Z. Phys. : L. Szilard (1929)
3. Irreversibility and Heat Generation in the Computing Process : R. Landauer (1961)
4. Logical Reversibility of Computation : C. H. Bennett (1973)
5. Notes on the History of Reversible Computation : C. H. Bennett
6. Synthesis of Reversible Logic Circuits : Shende, Prasad, Markov & Hayes
7. Reversible Computing : Alexis De Vos (1999)
8. Reversible & Endoreversible Computing : Alexis De Vos (1995)
9. Conservative Logic : E. Fredkin & T. Toffoli (1981)
10. Fault Testing of Reversible Circuits : Patel, Hayes & Markov
11. Analyzing Fault Models for Reversible Logic Circuits : J. Zhong & J. Muzio
12. The Physical Implementation of Quantum Computation : David P. DiVincenzo
