
We now take up the last major topic of this course. The subject is intractable problems. These problems are decidable, but colloquially they are said to be problems that take at least exponential time as a function of their input size. The reality is a bit different: there are problems for which there is an overwhelming amount of empirical evidence that they take exponential time, although no solid proof of that belief. If a problem does take time that is exponential in its input size, then that means it can in practice only be solved for small instances.
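As a quick back-of-the-envelope sketch (assuming the running time is exactly two to the N steps), you can compute how little extra problem size any speedup buys:

```python
import math

# If solving size N takes 2**N steps, a machine that is F times faster
# solves, in the same wall-clock time, the size N' with 2**N' = F * 2**N,
# that is, N' = N + log2(F).
def extra_size(speedup_factor):
    return math.log2(speedup_factor)

print(extra_size(2))        # doubling speed adds 1 to N
print(extra_size(1000))     # a thousand machines: about +10
print(extra_size(10**9))    # a million machines, each 1000x faster: about +30
```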
Suppose, to be concrete, that the time it takes to solve an instance of size N is two to the N. Then doubling the speed of machines makes essentially no difference in how large an instance you can solve: it adds one to the size you can solve in a fixed amount of time. Using a thousand machines instead of one has the effect of adding ten to the size N. And using a million machines, each one thousand times faster than today's machines, adds thirty to N. You never get to really big sizes of problem instances that you can solve. As a result, it is generally accepted that in order for a solution to a problem to be considered usable in practice, it has to run in less than exponential time, and in particular in polynomial time for some polynomial. Now, an algorithm that runs in some large polynomial time, like N to the thousandth power, is no more practical than one that runs in time two to the N. But you find in practice that if a problem has a polynomial algorithm at all, then it has an algorithm that runs in some low-degree polynomial, like N squared or N cubed at the most. In this lecture we introduce several important preliminary concepts. We introduce the idea of a Turing machine that is time-bounded: it can only run for time that is a known function of its input size before it has to halt and tell us whether it accepts or rejects. We introduce the class P of problems, or languages (it's the same thing, of course), that can be solved by a Turing machine that runs in polynomial time as a function of its input size. We also meet the class NP, which is the problems that can be solved by a Turing machine that is nondeterministic but has a polynomial time bound along each branch. Finally, we'll learn about polynomial-time reductions, which are reductions where the transducer runs in time that is polynomial in its input size. These are used to show one problem intractable by reducing a known intractable problem to it. We say a Turing machine is T-of-N time-bounded, where T of N is some function of N like N squared or two to the N, if given an input of length N the machine always halts within at most T of N steps.
Okay? We allow the Turing machine to have several tapes. In some circumstances we allow the Turing machine to be nondeterministic, although in that case we will specify that it is a T-of-N time-bounded nondeterministic Turing machine. In that case we mean that any sequence of moves of the nondeterministic machine is no longer than T of N. In practice, a deterministic multitape Turing machine is close to the idea of an algorithm that runs in time proportional to T of N, or big-O of T of N.
There are some algorithms that take longer on a Turing machine, even a multitape one, than on a real computer, but these are rare. Moreover, when there is a difference, the difference tends to be small. A Turing machine is said to be polynomial time-bounded if it is time-bounded by some polynomial. It could be linear, quadratic, cubic, or N to the thousandth power, as long as it is some polynomial. The languages that are accepted by polynomial time-bounded Turing machines form the class P. Now, P is defined formally in terms of Turing machines, but it could just as well have been stated as polynomial time on a real computer. The reason, which we address on the next slide, is that if an algorithm runs in some polynomial time on a computer, then it will run in polynomial time on a multitape Turing machine, or even a one-tape Turing machine, although the degree of the polynomial may be higher in some cases. Recall that we saw a way to simulate a computer's name-value store on a Turing machine. That is the part of a real computer that takes the most time when simulated by a Turing machine. But if a computer runs for on the order of T of N steps, then it cannot store or retrieve more than T of N items in its memory. A Turing machine can simulate one lookup or insert into a name-value store in a number of steps that is proportional to the length of the tape that holds the store. But that length is at most proportional to the number of steps the computer has taken, which is T of N. And thus the Turing machine takes at most T-squared of N of its own steps. If T of N is a polynomial, then so is T-squared of N. The exponent grows, of course: a cubic algorithm on a computer might take time proportional to N to the sixth on a Turing machine, but it's no worse than that. And since we're trying to divide the world of problems into those that have polynomial algorithms and those that don't, we can think Turing machine or computer, whichever is more convenient. As you might expect, when simulating a program it's best to think about a Turing machine, but when devising an algorithm it's best to think about a computer program. Here are two examples of problems, or languages (which is the same thing), in the class P. For each context-free grammar G there is an algorithm, the CYK algorithm, that takes an input string W and tells whether W is in the language.
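As a sketch of how such a test works, here is a minimal CYK implementation; the grammar used below, for the language of strings a-to-the-n b-to-the-n in Chomsky normal form, is just an illustrative choice, not one from the lecture:

```python
def cyk(unit_rules, binary_rules, start, w):
    # T[i][l] = set of nonterminals that derive the substring w[i:i+l]
    n = len(w)
    T = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(w):                   # substrings of length 1
        for head, terminal in unit_rules:
            if terminal == ch:
                T[i][1].add(head)
    for length in range(2, n + 1):               # longer substrings
        for i in range(n - length + 1):
            for k in range(1, length):           # every split point
                for head, (b, c) in binary_rules:
                    if b in T[i][k] and c in T[i + k][length - k]:
                        T[i][length].add(head)
    return n > 0 and start in T[0][n]

# Grammar for { a^n b^n : n >= 1 }: S -> AB | AC, C -> SB, A -> a, B -> b
units = [('A', 'a'), ('B', 'b')]
binaries = [('S', ('A', 'B')), ('S', ('A', 'C')), ('C', ('S', 'B'))]
print(cyk(units, binaries, 'S', 'aabb'))   # True
print(cyk(units, binaries, 'S', 'abab'))   # False
```

The three nested loops over length, start position, and split point are where the cubic running time comes from.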
The running time of this algorithm is O of N cubed. The second problem I want to talk about is finding a path in a graph. Here we're given a directed graph, that is, a list of its nodes and arcs. We are also given one node that is the source node X and another that is the sink node Y. The answer is yes if there's a path in the graph from the source to the sink. Graphs must be coded in a finite alphabet, which should not be hard to see: represent the i-th node by a symbol N followed by i in binary, and represent an arc by a pair of nodes, the tail and the head of the arc. Use two special symbols to indicate the source and sink nodes. Note that if there are M nodes, it takes order log M space to represent one node, so N, the input length, is actually somewhat greater than the number of nodes and arcs, but the difference is unimportant, since we are only worrying about polynomial versus not polynomial. Depth-first search answers this question in time that is linear in the number of nodes and arcs. On a Turing machine you might need order N squared steps, since for one step of the depth-first search you have to locate on the input the arcs with a given node as the tail. That could require that you run all along the tape just to simulate one computer step. But N squared is still a polynomial, so as far as membership in P is concerned, N squared is just fine. And just to make sure: when we talk about polynomial time, we include every running time that is less than some polynomial. That is, the definition of P only requires that the language be accepted by some Turing machine whose running time is bounded above by some polynomial. For example, there are many algorithms that run in time like order N log N; that is less than N squared, so the problems solved by algorithms like this are surely in P.
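On the "real computer" model, the path problem just described can be sketched in a few lines of depth-first search over adjacency lists (a sketch; the particular graph and node names below are made up for illustration):

```python
def reachable(arcs, source, sink):
    # arcs: list of (tail, head) pairs; DFS runs in time linear
    # in the number of nodes plus arcs
    adjacency = {}
    for tail, head in arcs:
        adjacency.setdefault(tail, []).append(head)
    visited, stack = set(), [source]
    while stack:
        node = stack.pop()
        if node == sink:
            return True
        if node not in visited:
            visited.add(node)
            stack.extend(adjacency.get(node, []))
    return False

print(reachable([(1, 2), (2, 3), (4, 1)], 1, 3))   # True
print(reachable([(1, 2), (2, 3), (4, 1)], 3, 4))   # False
```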
Before proceeding, I want to examine in detail a problem that seems to be in P but really isn't, and I want you to understand why; this is really important in understanding what the class P really means. The problem, called knapsack, is this: we're given a list of N positive integers. The answer to this instance of knapsack is yes if and only if we can partition the integers into two groups whose sums are equal. For example, if the integers are one, two, three, and four, then I can partition them into one and four in one group, and two and three in the other group, and the sums in each group will be equal. Incidentally, the problem is called knapsack because of the view that the integers are weights of items, and two hikers want to divide the items between their two knapsacks so each carries equal weight. At first glance, we can solve the knapsack problem by a polynomial-time dynamic-programming algorithm. That is, we maintain a table of all the differences between the sums we can achieve by partitioning the first J minus one integers. When we incorporate the J-th integer, we take each possible difference and both add and subtract the J-th integer, thus getting two new possible differences. After looking at all the integers, we see if zero is a possible difference. To be more precise, for the basis we consider none of the integers. Then the table has true for zero difference and false for all the other differences. For the induction, suppose we have a table for the first J minus one integers. We build a new table to reflect the partitions of the first J integers; initially each entry in the new table is false. Suppose the J-th integer is i-sub-J. For each difference M that was true in the old table, set the entries for M plus i-sub-J and also M minus i-sub-J to be true in the new table.
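On the "real computer" model, the whole table-building idea just described fits in a few lines of Python, with each table represented as a set of achievable differences (a sketch of the algorithm, not the exact table layout from the lecture):

```python
def knapsack(integers):
    # diffs holds every achievable value of
    # (sum of group one) - (sum of group two) for the integers seen so far
    diffs = {0}                          # basis: no integers, difference zero
    for x in integers:                   # induction on the J-th integer
        diffs = {d + x for d in diffs} | {d - x for d in diffs}
    return 0 in diffs

print(knapsack([1, 2, 3, 4]))   # True: {1, 4} versus {2, 3}
print(knapsack([1, 2, 4]))      # False
```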
Let's compute the running time of this algorithm as a function of the sum of the integers. Say that sum is S. We need order S space to construct the table for one value of J, since the differences must be in the range minus S to plus S. And it only takes order S time to construct each table from the previous one using a real computer; maybe it's order S squared on a Turing machine, because you have to move the head a long distance to write each entry. But again, when designing algorithms and worrying only about whether something is polynomial time or not, the real computer is the right model to think about, because the programming details are generally easier. Okay, note that N is equal to or less than S, since the integers are each positive. That is, the sum of the integers is at least equal to the number of integers on the list. Thus we can build the table that corresponds to the set of all the integers in order S squared time: order S for each of N different tables, and N is at most S. We then look at this final table and see if zero is true. If so, the answer to the knapsack instance is yes. And otherwise it is no.
However, that conclusion is actually deceptive. Although it is true that we just described an algorithm that runs in time no more than the square of the sum of the integers, and that algorithm really does solve the knapsack problem, it doesn't tell us that the knapsack problem is in P, and in fact it is very likely not in P, as we shall see later. The reason this algorithm doesn't show knapsack to be in P is that membership in P requires that the algorithm run in time polynomial in the input size. But we can't just define input size to be the sum of the integers in the input. The input size is always the number of cells it takes to write the input on a Turing machine tape. For the knapsack problem, this input length is not necessarily polynomial in the sum of the integers, as we shall see on the next slide. The problematic case occurs if we have N integers, each of whose values is about two to the N. If we write the integers in binary, the input to the Turing machine is order N squared in length. But a table then requires about two to the N entries, and at least that much space to write down. That is, the sum of all N integers can be around N times two to the N. We can construct one table in time proportional to its length, so the total time of the algorithm is on the order of N squared times two to the N. But the input size is N squared, and N squared times two to the N is not a polynomial function of the input length. By the way, we usually like to have N be the actual input size for the Turing machine. So if we substitute N for N squared, we can say that an input of size N leads to an algorithm that takes time proportional to N times two to the square root of N. That is still not a polynomial. Thus the dynamic-programming algorithm we described, while it really is a good algorithm when the integers are of limited size, does not show the knapsack problem to be in the class P.
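You can see the mismatch between the sum S and the binary input length directly (a sketch; the choice N = 20 is arbitrary):

```python
n = 20
integers = [2**n + i for i in range(n)]    # n integers, each about 2**n

input_length = sum(x.bit_length() for x in integers)   # about n squared bits
table_size = sum(integers)                             # about n * 2**n entries

print(input_length)   # 420: roughly n**2
print(table_size)     # 20971710: roughly n * 2**n, exponentially larger
```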
There is another problem, which we can call pseudo-knapsack. The question is the same, but the integers are represented in unary, not binary. That is, integer i is represented by i ones, followed by some marker symbol to separate the integers. This problem is in P, and the dynamic-programming algorithm proves that. But it is not the classical knapsack problem, where integers are represented in binary, the sort of rational way to represent large integers. The second important class of languages for our story is NP, the nondeterministic polynomial class. NP is defined in terms of nondeterministic Turing machines. The running time of a nondeterministic Turing machine is the maximum number of moves it takes along any branch, that is, making any sequence of choices. If there's a polynomial bound on that time, then the nondeterministic machine is said to be polynomial time-bounded. And the language or problem it accepts is said to be in the class NP.
For example, the standard version of knapsack, where integers are represented in binary, is in NP. The nondeterministic polynomial-time algorithm that solves it is fairly simple. First, it uses its nondeterminism to guess a partition of the integers into two subsets. This can be done in time that is linear in the input length, using two extra tapes for the two subsets. Then sum the subsets and compare.
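The deterministic "check" half of this guess-and-check algorithm is easy to write. Here is a sketch, where the guessed partition is given as a 0-or-1 choice per integer; the nondeterministic machine guesses these bits, while simulating it deterministically means trying all two-to-the-N of them:

```python
from itertools import product

def check_partition(integers, choices):
    # choices[j] says which subset the j-th integer goes into;
    # this check runs in polynomial (in fact linear) time
    first = sum(x for x, c in zip(integers, choices) if c == 0)
    second = sum(x for x, c in zip(integers, choices) if c == 1)
    return first == second

# Deterministic simulation: try every possible sequence of guesses,
# which takes exponential time in the number of integers.
def knapsack_by_simulation(integers):
    return any(check_partition(integers, choices)
               for choices in product([0, 1], repeat=len(integers)))

print(knapsack_by_simulation([1, 2, 3, 4]))   # True
print(knapsack_by_simulation([1, 2, 4]))      # False
```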
Say yes if this partition yields two equal sums. This part can certainly be done in time that is quadratic in the input size, and can be done in linear time if you're clever and use a few extra tapes. Thus standard knapsack is in NP. Note that this fact doesn't suggest a deterministic polynomial-time algorithm, since it may take exponential time to simulate the nondeterminism of the Turing machine. Are P and NP really the same class of languages? That is, can any problem that is solved by a nondeterministic Turing machine in polynomial time also be solved by some deterministic Turing machine in polynomial time, even if the degree of the polynomial is higher? This question was posed by Steve Cook in 1970. At first it didn't seem all that hard or unlikely.
After all, nondeterministic finite automata can be simulated by deterministic finite automata, even though the number of states might grow, and deterministic Turing machines can simulate nondeterministic ones. But the problem has proved to be very, very difficult, and mathematicians who once sneered at the question, and assumed it was easy because computer scientists had thought it up, now recognize it as one of the most important mathematical questions, perhaps the most important unsolved question. There are thousands of problems that are in NP but for which no polynomial-time algorithm has been found. And unfortunately, neither is there a proof that these problems are not in P. What we do have is a linkage among a large class of problems called the NP-complete problems, which we discuss on the next slide. What we do know is that either all these problems are in P or none of them are. So they mutually reinforce the notion that none of them are, since many have been worked on for decades and no polynomial-time algorithm for any of them has been found.
So we're going to address the question of whether P equals NP by identifying complete problems for NP. We say a problem is NP-complete if the following is true about the problem: if the problem is in P, then P equals NP, that is, every problem in NP is also in P. It turns out that almost every problem that is known to be in NP, but is not known to be in P, is NP-complete. There is only one well-known exception: graph isomorphism. Given two graphs, is there a one-to-one matching of nodes between the two graphs that makes the graphs identical? This problem is known to be in NP: just guess a matching of the nodes and check that the right edges exist. But there is no polynomial-time algorithm known, and neither is there a proof that graph isomorphism is NP-complete. So graph isomorphism is an exception to what appears to be an almost general rule: if a problem is in NP and it is not known to be in P, then it is NP-complete. While the definition of NP-complete merely states that there has to be some way of proving that P equals NP from the assumption that the problem is in P, there is a standard way of making such proofs, and it appears to be sufficient for all the NP-complete problems we know about. This method involves reductions of the type we talked about for Turing machines in general, but with the condition that the transducer runs in time that is polynomial in its input size. Intuitively, a complete problem for a
class embodies, in some sense, every problem in the class. For example, the Post correspondence problem embodies every Turing machine, even though it is hard to see PCP as computation; it only seems to be about concatenating strings in constrained ways. So it might surprise you to know that each NP-complete problem, such as knapsack, embodies all nondeterministic polynomial-time computation, even though the knapsack problem seems to be about anything but computation. So in order to show a problem L to be NP-complete, we have to show that every problem in NP is somehow embedded in L. We need a transformation from every problem in NP to L, and this transformation has to be sufficiently fast that if we had a deterministic polynomial-time algorithm for L, then we could use it to build a deterministic polynomial-time algorithm for each problem in NP. We are going to define a polynomial-time transducer. Notice that people frequently shorten polynomial time to poly-time, and we will start doing that too. So a poly-time transducer is a deterministic Turing machine that takes an input of length N, runs for some polynomial number of steps P of N, and produces an output on an output tape. It is important to observe that although we do not restrict the output length, since the Turing machine only runs P of N steps, it cannot write more than P of N symbols. Thus the length of the output of the poly-time transducer is always polynomial in the length of its input. Here is a picture of a poly-time transducer. It can have any fixed number of tapes; one is the input tape, and one is the output tape. We argued on the last slide that the output length is polynomial in the length of the input, but the real constraint on the poly-time transducer is on how long it runs. It is not acceptable to have it run for time that is exponential in its input length, even if the output is short.
Consider two languages, or problems, say L and M. We say L is poly-time reducible to M if there is a poly-time transducer T that takes an input W that is an instance of L and produces an output X that is an instance of M, such that the answer for L on W is the same as the answer for M on X. That is, W is in L if and only if X is in M. Here is a picture to help us remember what a poly-time reduction does. On the left is the set of strings over the alphabet of L, divided into those that are in L and those that are not in L. On the right is the set of strings over the alphabet of M, also divided between M and its complement. In the middle is the poly-time transducer T. Every string in L is transformed by T into a string that is in M. There can be strings in M that are not the target of any string in L. And every string not in L, but over the alphabet of L, is transformed by T into a string that is over the alphabet of M but is not in M. Again, there can be strings in the complement of M that are not the target of any string in the complement of L. Formally, we say a problem or language M is NP-complete if for every language L in NP there is a poly-time reduction from L to M. An important consequence of the fact that M is NP-complete is that if M has a poly-time algorithm, then so does every L in NP; that is, the classes of languages P and NP are the same, or P equals NP. Notice that earlier we suggested that the definition of NP-completeness was simply that the language had this property. Steve Cook's original definition of NP-complete was exactly that, and it is often referred to as Cook completeness. Cook concentrated on showing one particular problem complete: the question of whether an expression of propositional logic is satisfiable, that is, made true by some assignment of truth values to the propositional variables. But shortly after Cook wrote his original paper on NP-completeness, Dick Karp wrote another paper that showed that many of the classical problems that had been puzzling mathematicians for centuries were NP-complete. Karp used only poly-time reductions from the problems Cook had proved complete. Since then, the generally accepted, preferred definition of NP-complete is the one we gave here: the existence of poly-time reductions. To make the distinction, this notion of NP-completeness is often called Karp completeness. So here is the plan for
proving certain problems to be NP-complete. Here is all of NP; SAT, the satisfiability problem for propositional logic that we just discussed, is one problem in NP. Cook's theorem is that every problem in NP reduces in poly-time to the SAT problem. So SAT is the first known NP-complete problem. Cook also proved a restricted form of SAT, called 3-SAT, to be NP-complete by reducing SAT to 3-SAT. We'll learn about the 3-SAT restriction shortly, but in brief, it is SAT restricted to expressions that are the AND of clauses with three literals per clause. A literal is a variable or a negated variable, and a clause is the OR of literals. Then from 3-SAT we do poly-time reductions, either directly or indirectly. Each problem we can reach from SAT by a chain of poly-time reductions is thus proved NP-complete. But before we embark on this quest, let's make sure that polynomial-time reductions work, in the sense that they let us draw the desired conclusion about all of NP reducing to the target problem. So suppose M has a poly-time algorithm, say with running time Q of N for some polynomial Q. Let there be a poly-time transducer T from some problem L to M, and let the time taken by T be P of N for some polynomial P. Then the output of T, given an input of length N, is at most of length P of N, so when we run the algorithm for M on this input of length P of N, the algorithm takes time Q of P of N. Note that a polynomial of a polynomial is a polynomial; the degrees of the polynomials are multiplied, but it's still a polynomial. We claim there is a poly-time algorithm for L. Given an input W of length N for L, apply the transducer T to W. The result is an output X of length at most P of N, and more importantly, T takes time only P of N to produce this output. Then apply to X the algorithm that tells whether it is in M. As we observed on the previous slide, this part takes time Q of P of N. Then return as the answer for W whatever the M algorithm says about X. The total time of this algorithm is P of N plus Q of P of N, which is a polynomial, since P and Q are. It is the correct algorithm, because the fact that T is a poly-time transducer from L to M says that the answers to input W and output X are the same.
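The whole argument amounts to composing two function calls. Here is a sketch; the concrete languages below, L = binary strings with an even number of ones and M = the single string "0", are made-up toy examples just to make it runnable:

```python
def transduce(w):
    # toy poly-time transducer from L to M:
    # outputs "0" exactly when w has an even number of ones
    return str(w.count('1') % 2)

def decide_M(x):
    # toy poly-time algorithm for M = {"0"}, running time q(|x|)
    return x == '0'

def decide_L(w):
    # total time p(|w|) + q(p(|w|)): a polynomial of a polynomial
    x = transduce(w)      # at most p(|w|) steps, so |x| <= p(|w|)
    return decide_M(x)    # at most q(|x|) <= q(p(|w|)) steps

print(decide_L('1001'))   # True: two ones
print(decide_L('1011'))   # False: three ones
```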
