
Computing 25, 29-45 (1980)

© by Springer-Verlag 1980

Dynamic Programming Algorithms for the Zero-One Knapsack Problem*

P. Toth, Bologna

Received September 4, 1979

Abstract

Dynamic Programming Algorithms for the Zero-One Knapsack Problem. New dynamic programming
algorithms for the solution of the Zero-One Knapsack Problem are developed. Original recursive
procedures for the computation of the Knapsack Function are presented and the utilization of
bounds to eliminate states not leading to optimal solutions is analyzed. The proposed algorithms,
according to the nature of the problem to be solved, automatically determine the most suitable
procedure to be employed. Extensive computational results showing the efficiency of the new
and the most commonly utilized algorithms are given. The results indicate that, for difficult
problems, the algorithms proposed are superior to the best branch and bound and dynamic
programming methods.

New algorithms are developed which solve the 0-1 Knapsack Problem by dynamic programming. Recursive procedures for the computation of the Knapsack Function are presented, and the use of bounds for discarding states that cannot lead to optimal solutions is investigated. The proposed algorithms automatically determine the most suitable solution procedure. Detailed numerical results allow the new algorithms to be compared with the most commonly used known methods. The comparisons indicate that the proposed algorithms are superior to the best previously known branch and bound and dynamic programming approaches.

1. Introduction

The Unidimensional Zero-One Knapsack Problem is defined by:

maximize $P = \sum_{i=1}^{n} p_i x_i$   (1)

subject to:

$\sum_{i=1}^{n} w_i x_i \le W$   (2)

* An earlier version of this paper was presented at the TIMS/ORSA Joint National Meeting in
San Francisco, May 1977.


$x_i = 0, 1 \quad (i = 1, \ldots, n)$.   (3)

Without loss of generality we assume that W, all the profits p_i and all the weights w_i are positive integers. In addition, the following assumptions can be stated:

$\sum_{i=1}^{n} w_i > W$   (4)

$w_i \le W \quad (i = 1, \ldots, n)$   (5)

The Zero-One Knapsack Problem is a well known problem and several efficient algorithms have been proposed for its solution. These algorithms can be subdivided into two classes: dynamic programming procedures (Horowitz and Sahni [4], Ahrens and Finke [1]) and branch and bound methods (Kolesar [6], Greenberg and Hegerich [3], Horowitz and Sahni [4], Nauss [11], Barr and Ross [2], Zoltners [13], Martello and Toth [8]).
The computational performance of the branch and bound algorithms depends largely on the type of data sets considered. Martello and Toth have shown in [9] that the data sets where the values of p_i and w_i are independent are much easier than those where a strong correlation between p_i and w_i exists. The dynamic programming algorithms are less affected by the kind of data set and are generally more efficient than the enumeration methods for "hard" problems, that is, for problems having a strong correlation between w_i and p_i. Unfortunately, for this kind of problem, the storage requirements of dynamic programming procedures grow steeply with the size of W, so the only hard problems which can be solved in a reasonable amount of time are those having moderate values of W.
The number of variables of the problem defined by (1), (2) and (3) can be decreased by utilizing a reduction procedure developed by Ingargiola and Korsh [5] and improved by Toth [12]. In many cases, the dynamic programming algorithms benefit greatly from the application of such a reduction procedure, because not only n but also the value of W is decreased. However, this procedure gives only a small contribution to the solution of the hard problems because, as shown in [12], for such problems only a small reduction in the number of the variables and in the size of the knapsack can be obtained.
In this paper several dynamic programming procedures for the solution of the
Zero-One Knapsack Problem are presented. In addition, the utilization of
upper bounds, inserted in the procedures in order to eliminate the states not
leading to optimal solutions, is analyzed. The proposed algorithms, according to
the nature of the problem to be solved, automatically determine the most suitable
procedure to be employed.
An extensive computational analysis is performed in order to evaluate the
efficiency of the new algorithms and that of the most commonly utilized branch
and bound and dynamic programming techniques.

2. Dynamic Programming Procedures


The dynamic programming recursive equations for the Zero-One Knapsack
Problem can be obtained in the following way.
For each integer m (1 ≤ m ≤ n) and for each integer z (0 ≤ z ≤ W), we can define the Knapsack Function:

$f_m(z) = \max\left\{ \sum_{i=1}^{m} p_i x_i \;:\; \sum_{i=1}^{m} w_i x_i \le z,\; x_i = 0, 1 \ (i = 1, \ldots, m) \right\}$   (6)

From (6) we have:

$f_1(z) = 0$, for $0 \le z < w_1$;
$f_1(z) = p_1$, for $w_1 \le z \le W$.

The recursive equations for the m-th stage (m = 2, ..., n) are given by:

$f_m(z) = f_{m-1}(z)$, for $0 \le z < w_m$;   (7)
$f_m(z) = \max\{ f_{m-1}(z),\; f_{m-1}(z - w_m) + p_m \}$, for $w_m \le z \le W$.
Directly utilizing the recursive equations (7), it is possible to develop a procedure for the computation of the values f_m(z) at the m-th stage, with m ≥ 2, when the values f_{m-1}(z) at the (m-1)-th stage are available. The following variables are assumed to be known before execution of the procedure:

$v = \min\left\{ \sum_{i=1}^{m-1} w_i,\; W \right\}$;

$b = 2^{m-1}$;

$F_z = f_{m-1}(z)$, for $z = 0, 1, \ldots, v$;

$X_z = \{x_{m-1}, x_{m-2}, \ldots, x_1\}$, for $z = 0, 1, \ldots, v$,

where the values x_i define the partial optimal solution corresponding to f_{m-1}(z), i.e.

$\sum_{i=1}^{m-1} w_i x_i \le z$ and $f_{m-1}(z) = \sum_{i=1}^{m-1} p_i x_i$.

From a computational point of view, it is worthwhile to express each set X_z as a bit string, so this notation will be used in the following. After execution of the procedure, v and the vectors (F_z) and (X_z) are relative to the m-th stage.

PROCEDURE P1
1. If v = W, go to 4; otherwise, set u = v, v = min{v + w_m, W}, F_v = F_u, X_v = X_u.
2. Set z = v - 1.
3. If z ≤ u, go to 4; otherwise, set F_z = F_u, X_z = X_u, z = z - 1 and repeat 3.
4. Set y = v - w_m, f = F_y + p_m; if F_v < f, set F_v = f, X_v = X_y + b.

5. Set z = v - 1.
6. If z < w_m, return.
7. Set y = z - w_m, f = F_y + p_m. If F_z < f, set F_z = f, X_z = X_y + b; in any case, set z = z - 1 and go to 6.
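For concreteness, Procedure P1 can be rendered in executable form. The following Python sketch is ours (the names are hypothetical, not the paper's FORTRAN implementation); it stores the knapsack function in dense vectors F and X, with each partial solution X_z represented as an integer bit mask:

    def p1_stage(F, X, v, w_m, p_m, b, W):
        # One stage of Procedure P1: turn f_{m-1} into f_m in place.
        # F[z] = best profit with capacity z, X[z] = bit mask of chosen items.
        u = v
        v = min(v + w_m, W)
        for z in range(u + 1, v + 1):       # steps 1-3: f_{m-1}(z) = f_{m-1}(u) for z > u
            F[z], X[z] = F[u], X[u]
        for z in range(v, w_m - 1, -1):     # steps 4-7: scan downwards so that
            y = z - w_m                     # item m is used at most once
            if F[y] + p_m > F[z]:
                F[z] = F[y] + p_m
                X[z] = X[y] | b
        return v

    def knapsack_p1(p, w, W):
        F = [0] * (W + 1)
        X = [0] * (W + 1)
        for z in range(w[0], W + 1):        # f_1(z) = p_1 for w_1 <= z <= W
            F[z], X[z] = p[0], 1
        v = min(w[0], W)
        for m in range(2, len(p) + 1):
            v = p1_stage(F, X, v, w[m - 1], p[m - 1], 1 << (m - 1), W)
        return F[W], X[W]

On the example given below (p = (61, 45, 33, 61, 13), w = (80, 60, 44, 87, 21), W = 191), this sketch returns the maximum profit 139.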
In many cases it is possible to reduce the number of states considered at a given stage by eliminating all the states (F_z, X_z) for which there exists at least one state (F_y, X_y) having F_y ≥ F_z and y < z. This technique has been utilized by Horowitz and Sahni [4] and Ahrens and Finke [1]. For its application, it is necessary to develop a new procedure for the computation of all the undominated states at the m-th stage. The following variables are assumed to be known before execution of the procedure at the m-th stage:
s_{m-1} = number of states at stage (m - 1);
b = 2^{m-1};
L1_j = total weight of the j-th state, for j = 1, ..., s_{m-1};
F1_j = total profit of the j-th state, for j = 1, ..., s_{m-1};
X1_j = {x_{m-1}, x_{m-2}, ..., x_1}, for j = 1, ..., s_{m-1};

where the values x_i represent the partial solution of the j-th state, that is

$L1_j = \sum_{i=1}^{m-1} w_i x_i$ and $F1_j = \sum_{i=1}^{m-1} p_i x_i$.

After execution of the procedure, the number of states, the total weights, the total profits and the sets of partial solutions relative to the m-th stage are represented, respectively, by s_m, (L2_k), (F2_k) and (X2_k). The sets (X1_j) and (X2_k) are expressed as bit strings. The vectors (L1_j), (L2_k), (F1_j) and (F2_k) are ordered according to ascending values.

PROCEDURE P2
1. Set L1_0 = F1_0 = X1_0 = F2_0 = 0, h = 1, k = 0, j = 0, y = w_m.
2. Three possibilities exist:
   a. L1_h < y: if F1_h > F2_k, set k = k + 1, L2_k = L1_h, F2_k = F1_h, X2_k = X1_h;
      in any case, if h = s_{m-1}, go to 3; otherwise, set h = h + 1 and repeat 2.
   b. L1_h > y: set f = F1_j + p_m;
      if f > F2_k, set k = k + 1, L2_k = y, F2_k = f, X2_k = X1_j + b;
      in any case, set j = j + 1, y = L1_j + w_m and repeat 2.
   c. L1_h = y: set f = F1_j + p_m, x = X1_j + b, j = j + 1, y = L1_j + w_m;
      if F1_h > f, set f = F1_h, x = X1_h.
      If f > F2_k, set k = k + 1, L2_k = y, F2_k = f, X2_k = x;
      in any case, if h = s_{m-1}, go to 3; otherwise, set h = h + 1 and repeat 2.

3. If y > W, go to 4; otherwise, set f = F1_j + p_m;
   if f > F2_k, set k = k + 1, L2_k = y, F2_k = f, X2_k = X1_j + b;
   in any case, if j = s_{m-1}, go to 4; otherwise, set j = j + 1, y = L1_j + w_m and repeat 3.
4. Set s_m = k; return.
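The same stage can be written as an ordered merge on sparse state lists. The Python sketch below (hypothetical names, ours) is an equivalent of Procedure P2: the previous list is merged with its copy shifted by (w_m, p_m), and a candidate is kept only if its profit strictly exceeds that of every lighter kept state, which also removes equal-weight duplicates:

    def p2_stage(states, w_m, p_m, b, W):
        # states: list of (weight, profit, mask) sorted by increasing weight,
        # with strictly increasing profits; it includes the empty state (0, 0, 0).
        shifted = [(L + w_m, F + p_m, X | b) for (L, F, X) in states if L + w_m <= W]
        merged = []
        i = j = 0
        while i < len(states) or j < len(shifted):
            if j == len(shifted) or (i < len(states) and states[i][0] <= shifted[j][0]):
                cand = states[i]; i += 1
            else:
                cand = shifted[j]; j += 1
            if not merged or cand[1] > merged[-1][1]:   # dominance test
                if merged and cand[0] == merged[-1][0]:
                    merged[-1] = cand       # same weight, higher profit: replace
                else:
                    merged.append(cand)
        return merged

Keeping the empty state (0, 0, 0) in the list plays the role of L1_0 = F1_0 = X1_0 = 0 in step 1 of P2.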
It must be noted that the maximum value of s_m is given by min{2^m - 1, W}.
Procedure P2 requires no specific ordering of the variables x_1, x_2, ..., x_m. However, its efficiency greatly increases if the variables are ordered according to decreasing values of the ratios p_i/w_i, because in such a way the number of undominated states at each stage is reduced. It is worthwhile to note that if the previously mentioned reduction procedure is utilized before execution of the algorithm, the variables are already correctly ordered, because such an ordering is required by the reduction procedure; therefore, in the following, this ordering will be assumed.
In [1] Ahrens and Finke presented a similar procedure which, however, does not completely remove the dominated states: it is possible that two states, say j and k with j < k, have L2_j = L2_k and F2_j < F2_k, which implies that state j was not removed.

Example:
Let n = 5, W = 191,
(p_i) = (61, 45, 33, 61, 13)
(w_i) = (80, 60, 44, 87, 21)
Fig. 1 gives the total weights (L_j) and the total profits (F_j) of the undominated states at each stage m. The optimal solution of the problem is (x_i) = (1, 1, 1, 0, 0) with a maximum profit P = 139. The total number of undominated states is 32; applying procedure P1, the number of states would be 793.

m = 1        m = 2        m = 3        m = 4        m = 5
L_j   F_j    L_j   F_j    L_j   F_j    L_j   F_j    L_j   F_j
80    61     60    45     44    33     44    33     21    13
             80    61     60    45     60    45     44    33
             140   106    80    61     80    61     60    45
                          104   78     104   78     65    46
                          124   94     124   94     80    61
                          140   106    140   106    101   74
                          184   139    167   122    104   78
                                       184   139    124   94
                                                    140   106
                                                    145   107
                                                    161   119
                                                    167   122
                                                    184   139

Fig. 1
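Assuming the p2_stage sketch given above, the counts of Fig. 1 can be checked mechanically (the empty state is not counted):

    p, w, W = [61, 45, 33, 61, 13], [80, 60, 44, 87, 21], 191
    states, total = [(0, 0, 0)], 0
    for m in range(1, 6):
        states = p2_stage(states, w[m - 1], p[m - 1], 1 << (m - 1), W)
        total += len(states) - 1
    print(total, states[-1][:2])    # expected output: 32 (184, 139)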

3. Previous Dynamic Programming Algorithms


Horowitz and Sahni presented in [4] an algorithm based on the subdivision of the
original problem of n variables into two subproblems respectively of q = In/2]*
and r = n - q variables. For each subproblem a list is computed containing all the
undominated states relative to the last (respectively the q-th and the r-th)
stage; then the two lists are merged in order to find the optimal solution to the
original problem.
For the merging of the lists, the following procedure can be utilized. The two lists are defined, respectively, by the numbers of states s3 and s4, the total weights (L3_j) and (L4_k), the total profits (F3_j) and (F4_k), and the sets of partial solutions (X3_j) and (X4_k), with j = 1, ..., s3 and k = 1, ..., s4.
After execution of the procedure, the maximum profit and the optimal solution to the original problem are represented, respectively, by f and X.
PROCEDURE P3
1. Set L3_0 = F3_0 = L4_0 = F4_0 = 0, X3_0 = X4_0 = ∅, j = s3, k = 1, L4_{s4+1} = W + 1, f = 0.
2. If L3_j + L4_k > W, go to 4.
3. Set k = k + 1 and go to 2.
4. Set g = F3_j + F4_{k-1}; if g > f, set f = g, X = X3_j ∪ X4_{k-1}.
5. If j = 0, return.
   Otherwise, set j = j - 1; if L3_j + L4_k > W, repeat 5; otherwise, go to 3.
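In executable form, the merge can exploit the fact that within an undominated list profits grow with weights: as the states of the first list are scanned by increasing weight, the heaviest feasible partner in the second list moves monotonically backwards. A Python sketch of Procedure P3 (hypothetical names; both lists sorted by weight, both including the empty state, and both using global bit positions in their masks):

    def p3_merge(list1, list2, W):
        best_f, best_x = 0, 0
        k = len(list2) - 1
        for L3, F3, X3 in list1:          # weights of list 1 increase, ...
            while k >= 0 and L3 + list2[k][0] > W:
                k -= 1                    # ... so the feasible tail of list 2 shrinks
            if k < 0:
                break                     # no state of list 2 fits any more
            if F3 + list2[k][1] > best_f:
                best_f = F3 + list2[k][1]
                best_x = X3 | list2[k][2]
        return best_f, best_x

Since each pointer only advances, the merge costs O(s3 + s4) comparisons.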
The main feature of the Horowitz and Sahni algorithm is the property of having, in the worst case, two lists each of (2^q - 1) states, instead of one list of (2^n - 1) states as required by the original problem. So, for the worst cases, the algorithm reduces computing times and storage requirements by a square-root factor with respect to a direct application of procedure P2 to the original problem. However, in almost all problems, the number of undominated states is much less than the corresponding maximum number, both because many states are dominated and because generally the value of W is much less than such a maximum number (consider that for n = 60, average value of w_i = 1000 and W = 0.5 Σ_{i=1}^{n} w_i, we have 2^q - 1 = 2^30 - 1 while W ≈ 30000); so the improvement given by the Horowitz and Sahni algorithm is greatly impaired.
In [1], Ahrens and Finke proposed an algorithm where the technique utilized by Horowitz and Sahni is combined with a branch and bound procedure in order to reduce the storage requirements. This algorithm works well for hard problems having low values of n and very high values of w_i and W, but has

* ⌊a⌋ = largest integer ≤ a; ⌈a⌉ = smallest integer ≥ a.



the disadvantage that the branch and bound procedure is always executed, even
if the storage requirements are not excessive and therefore its execution could be
avoided.

Example:
Let us consider the previous example in order to illustrate the Horowitz and
Sahni algorithm. Fig. 2 gives the undominated states at each stage of the two
subproblems (q = 2). The total number of states is 15.

First subproblem (variables 1, 2):

m = 1        m = 2
L_j   F_j    L_j   F_j
80    61     60    45
             80    61
             140   106

Second subproblem (variables 3, 4, 5):

m = 3        m = 4        m = 5
L_j   F_j    L_j   F_j    L_j   F_j
44    33     44    33     21    13
             87    61     44    33
             131   94     65    46
                          87    61
                          108   74
                          131   94
                          152   107

Fig. 2

4. Elimination of the Unutilized States

Several states defined through the application, at a given stage, of procedures P1 and P2 are never utilized in the following stages; therefore it is worthwhile to eliminate such states. The following rules can be applied for the elimination of the unutilized states.
a) If a state, defined at the m-th stage, has a total weight L such that

$L < A_m = \max\left\{ W - \sum_{i=m+1}^{n} w_i,\; 1 \right\}$,

this state will never be utilized in the stages following the m-th one.

b) If a state, defined at the m-th stage, has a total weight L such that

$B_m = W - \min_{m < i \le n} \{w_i\} < L < W$,

the state will never be utilized in the stages following the m-th one.
The following changes can be made to Procedure P1:
Replace Steps 2 and 5 with:
   Set z = min{v - 1, B_m}.
Replace Step 6 with:
   If z < max{w_m, A_m}, return.

After execution of Procedure P2, a state j satisfying one of the conditions:

i) L2_j < A_m and L2_{j+1} ≤ A_m;
ii) B_m < L2_j < W;

can be eliminated. The set R_m of the remaining states at stage m is therefore:

$R_m = \{s_m\} \cup \{ j \mid r_m \le j \le q_m \}$

where

r_m is such that $L2_{r_m - 1} < A_m \le L2_{r_m}$ (with $L2_0 = 0$);
$q_m = \max\{ j \mid L2_j \le B_m,\; j \le s_m \}$.
In the following, the modified versions of procedures P1 and P2 will be called, respectively, Procedure P1.a and Procedure P2.a.
The above rules a) and b) obviously cannot be applied to the Horowitz and Sahni algorithm, since procedure P3 requires all the undominated states relative to the two subproblems.
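A direct Python rendering of rules a) and b) and of the resulting filter may look as follows (a sketch with hypothetical names; items are 0-indexed, so w[m:] covers items m+1, ..., n of the text):

    def state_window(w, W, m):
        # A_m and B_m of rules a) and b) for stage m.
        rest = w[m:]                          # weights of items m+1, ..., n
        A = max(W - sum(rest), 1)             # rule a)
        B = W - min(rest) if rest else W      # rule b); B_n = W by convention
        return A, B

    def p2a_filter(states, A, B, W):
        # Eliminate states by conditions i) and ii); the last state (the
        # current maximum profit) is always kept.
        kept = []
        for j, (L, F, X) in enumerate(states):
            if j == len(states) - 1:
                kept.append(states[j])
            elif L < A and states[j + 1][0] <= A:
                continue                      # condition i)
            elif B < L < W:
                continue                      # condition ii)
            else:
                kept.append(states[j])
        return kept

On the example, state_window reproduces the rows A_m and B_m shown in Fig. 3.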

Example:
For the example previously considered, Fig. 3 gives the results obtained by elimination of the unutilized states. The total number of states is 12.

m = 1        m = 2        m = 3        m = 4        m = 5
A_1 = 1      A_2 = 39     A_3 = 83     A_4 = 170    A_5 = 191
B_1 = 170    B_2 = 170    B_3 = 170    B_4 = 170    B_5 = 191
L_j   F_j    L_j   F_j    L_j   F_j    L_j   F_j    L_j   F_j
80    61     60    45     80    61     167   122    184   139
             80    61     104   78     184   139
             140   106    124   94
                          140   106
                          184   139

Fig. 3

5. A Fathoming Criterion

Recently Morin and Marsten [10] proposed the utilization of bounds inserted in the dynamic programming procedures in order to fathom the states not leading to optimal solutions. Such an approach is here applied to the Unidimensional Zero-One Knapsack Problem.
Let us consider, at the m-th stage (1 ≤ m ≤ n), a lower bound LB_m of the optimal solution to the original problem; this bound can be given by:

$LB_m = \max\{ F2_{s_m} + \beta,\; LB_{m-1} \}$

where β is a lower bound of the solution to the subproblem defined by:

maximize $\sum_{i=m+1}^{n} p_i x_i$ subject to $\sum_{i=m+1}^{n} w_i x_i \le W - L2_{s_m}$;

$x_i = 0, 1$, for $i = m+1, \ldots, n$.


The value β can be obtained by means of a heuristic procedure, for example the greedy solution investigated by Magazine, Nemhauser and Trotter in [7].
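A minimal sketch of such a greedy bound (Python, hypothetical names; the items are assumed already sorted by decreasing p_i/w_i, and the 0-based slices p[m:], w[m:] correspond to items m+1, ..., n):

    def greedy_beta(p, w, m, C):
        # Walk through the remaining items, taking each one that still fits
        # into the residual capacity C.
        beta = 0
        for pi, wi in zip(p[m:], w[m:]):
            if wi <= C:
                beta += pi
                C -= wi
        return beta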
Let us consider, for any state j defined at the m-th stage and having total weight L_j and total profit F_j, an upper bound UB_{m,j} of the solution to the subproblem defined by:

maximize $\sum_{i=m+1}^{n} p_i x_i$ subject to $\sum_{i=m+1}^{n} w_i x_i \le W - L_j$;

$x_i = 0, 1$, for $i = m+1, \ldots, n$.


FATHOMING CRITERION. If the condition

$F_j + UB_{m,j} \le LB_m$

holds, then state j can be fathomed.
The value UB_{m,j} can be obtained through application of the following theorem, proved in [8].

Theorem 1: Assume

$p_1/w_1 \ge p_2/w_2 \ge \ldots \ge p_n/w_n$

and let

$l$ = largest integer for which $\sum_{i=1}^{l} w_i \le W$;

$B_1 = \sum_{i=1}^{l} p_i + \left\lfloor \left( W - \sum_{i=1}^{l} w_i \right) p_{l+2}/w_{l+2} \right\rfloor$;

$B_2 = \sum_{i=1}^{l} p_i + \left\lfloor p_{l+1} - \left( w_{l+1} - \left( W - \sum_{i=1}^{l} w_i \right) \right) p_l/w_l \right\rfloor$.

Then

$UB = \max\{B_1, B_2\}$

is an upper bound of the solution to the problem given by (1), (2) and (3).
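In executable form, and written for a residual problem with capacity C so that it can serve for the bounds UB_{m,j} as well, the bound of Theorem 1 may be sketched as follows (Python, hypothetical names; the sentinel items mirror step 5 of Procedure P4 below, and the items are assumed sorted by decreasing p_i/w_i):

    import math

    def ub_theorem1(p, w, C):
        p = list(p) + [0, 0]                 # sentinel profits (cf. step 5 of P4)
        w = list(w) + [float("inf")] * 2     # sentinel weights
        Q = D = 0                            # weight and profit of the greedy prefix
        i = 0
        while C > Q + w[i]:                  # find the critical item i
            Q += w[i]; D += p[i]; i += 1
        if w[i] == float("inf"):
            return D                         # every item fits: the prefix is optimal
        b1 = D + math.floor((C - Q) * p[i + 1] / w[i + 1])
        if i == 0:
            return b1                        # border case: no preceding item
        b2 = D + math.floor(p[i] - (w[i] - (C - Q)) * p[i - 1] / w[i - 1])
        return max(b1, b2)

For instance, for the state (60, 45) of the running example at stage m = 2, ub_theorem1([33, 61, 13], [44, 87, 21], 131) yields max(86, 94) = 94, the value marked in Fig. 4.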
Obviously the efficiency of the fathoming criterion tends to increase when high values of m are considered since, as m grows, the lower bound LB_m gives a better approximation to the maximum profit while, at the same time, because of the ordering of the variables according to decreasing values of the ratios p_i/w_i, the upper bounds tend to decrease. Besides, at the same stage m, the states having high values of the total weight are fathomed more easily than those with low total weights, because the sum (total profit) + (upper bound) generally decreases as the total weight grows. In order to reduce the computing time required for the evaluation of the upper bounds corresponding to all the states defined at the m-th stage (1 ≤ m ≤ n), the following procedure can be utilized after execution of Procedure P2.a.

PROCEDURE P4

[Computation of the lower bound LB_m]
1. Set i = m + 1, LB = F2_{s_m}, C = W - L2_{s_m}. If C < W - B_m, go to 4.
2. If w_i > C, go to 3;
   otherwise, set LB = LB + p_i, C = C - w_i; if C < W - B_m, go to 4.
3. If i < n, set i = i + 1 and go to 2.
4. Set LB_m = max{LB, LB_{m-1}}.

[Computation of the upper bounds UB]
5. Set j = q_m, i = m + 1, Q = D = 0, w_{n+1} = w_{n+2} = ∞, p_{n+1} = p_{n+2} = 0.
6. Set C = W - L2_j.
7. If C ≤ Q + w_i, go to 8;
   otherwise, set Q = Q + w_i, D = D + p_i, i = i + 1 and repeat 7.
8. Set UB = D + max{⌊(C - Q) p_{i+1}/w_{i+1}⌋, ⌊p_i - (w_i - (C - Q)) p_{i-1}/w_{i-1}⌋}.

[Fathoming test]
9. If F2_j + UB ≤ LB_m, fathom state j.
   In any case, if j = r_m, return; otherwise, set j = j - 1 and go to 6.
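A compact sketch of the whole fathoming pass, assuming the greedy_beta and ub_theorem1 helpers given earlier (again Python with hypothetical names; states are (weight, profit, mask) triples sorted by weight, and p[m:], w[m:] are items m+1, ..., n):

    def fathom_states(states, p, w, W, m, LB_prev):
        # Steps 1-4: raise the running lower bound from the heaviest state.
        L_top, F_top = states[-1][0], states[-1][1]
        LB = max(F_top + greedy_beta(p, w, m, W - L_top), LB_prev)
        # Steps 5-9: keep a state only if it might still beat the lower bound.
        kept = [s for s in states[:-1]
                if s[1] + ub_theorem1(p[m:], w[m:], W - s[0]) > LB]
        kept.append(states[-1])
        return kept, LB

The heaviest state must be kept even when its own bound fails, since the later lower bounds LB_m are computed from it.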

In the following, the algorithm resulting from the application of procedures P2.a and P4 will be referred to as algorithm DP.

It must be noted that the fathoming criterion cannot be inserted in procedure P1.a, because this procedure requires, at each stage, all the states having total weights in a given interval. In addition, as far as the Horowitz and Sahni algorithm is concerned, it is worthwhile to point out that, when applied to most of the stages of the subproblem relative to the last (n - ⌊n/2⌋) variables, the fathoming criterion diminishes greatly in efficiency, both because the lower bounds LB_m are generally not increased with respect to the last stages of the first subproblem, and because only states with low values of the total weight are considered.

Example:
The application of the fathoming criterion to the example previously considered
gives the results shown in Fig. 4. An asterisk indicates the fathomed states.
The total number of states is 8.

m = 1        m = 2               m = 3               m = 4               m = 5
A_1 = 1      A_2 = 39            A_3 = 83            A_4 = 170           A_5 = 191
B_1 = 170    B_2 = 170           B_3 = 170           B_4 = 170           B_5 = 191
             LB_2 = 139          LB_3 = 139          LB_4 = 139
L_j   F_j    L_j   F_j   UB_j    L_j   F_j   UB_j    L_j   F_j   UB_j    L_j   F_j
80    61     60    45    94*     80    61    74*     140   106   13*     184   139
             80    61    79      124   94    46      184   139   --
             140   106   --      140   106   34
                                 184   139   --

Fig. 4

6. A New Dynamic Programming Algorithm

The techniques presented in Sections 4 and 5, when applied to the states resulting from the utilization of procedure P2 (algorithm DP), greatly reduce the number of such states and, consequently, the computing times and the storage requirements. It must be noted, however, that as far as the calculation of a single state is concerned, the computing times and storage requirements of algorithm DP are greater than those corresponding to the utilization of procedure P1.a. In fact, with regard to the storage requirements, algorithm DP needs 3 words (total weight, total profit and partial solution) for each state of the current stage and of the previous one; on the contrary, procedure P1.a needs only 2 words (total profit and partial solution) for each state of the current stage. Besides, with regard to the computing times, it clearly appears from the detailed steps of the procedures that the number of operations required for the calculation of a single state in algorithm DP is greater than the corresponding number in procedure P1.a.
From these considerations it follows that the utilization of algorithm DP is worthwhile only in those cases where the number of states generated at each stage by this method is much less than the number of states generated by procedure P1.a. However, when "hard" problems are considered, the number of states generated by algorithm DP tends to be almost equal to the number of states generated by procedure P1.a; therefore, for such kinds of problem, the latter procedure is to be preferred with regard to both computing times and storage requirements.
A new dynamic programming algorithm which efficiently solves both easy and hard problems can be obtained by combining the best characteristics of the two approaches. This can be achieved by utilizing algorithm DP as long as the number of generated states is low, and then by utilizing procedure P1.a. The stage at which it is worthwhile to change procedures can be determined automatically, during execution of the algorithm, by means of the following heuristic rule.
For the current stage, say m (m < n), let us define:

NS1_m = min{Σ_{i=1}^{m} w_i, B_m + 1} - A_m + 1
      = number of states at stage m when procedure P1.a is utilized;

NS2_m = current value of s_m
      = number of states at stage m when algorithm DP is utilized;

R = (average computing time of one state utilizing algorithm DP) / (average computing time of one state utilizing procedure P1.a).

If the condition

NS2_m > NS1_m / R

holds, then at stage m it is worthwhile to pass from algorithm DP to procedure P1.a.
It must be noted that the computation of NS1_m requires no extra time, because it is independent of the states currently defined at stage m.
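As a sketch (hypothetical names), the test fits in a few lines; sum(w[:m]) is Σ_{i=1}^{m} w_i and len(states) is the current s_m:

    def should_switch(states, w, m, A, B, R):
        # Heuristic rule of Section 6: pass from algorithm DP to procedure
        # P1.a as soon as the sparse list stops being much smaller than the
        # dense table that P1.a would use.
        NS1 = min(sum(w[:m]), B + 1) - A + 1
        NS2 = len(states)
        return NS2 > NS1 / R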
The following procedure can be utilized for the change-over between the two methods. The meaning of the variables is the same as in the previous sections; the procedure starts after the execution of the m-th stage of algorithm DP.

PROCEDURE P5
1. Set v = L2_{s_m}, F_v = F2_{s_m}, X_v = X2_{s_m}, j = s_m - 1, z = min{v - 1, B_m}.
2. If j = 0, go to 4.
3. If z ≥ L2_j, set F_z = F2_j, X_z = X2_j, z = z - 1 and repeat 3;
   otherwise, set j = j - 1 and go to 2.
4. If z ≤ A_m - 1, return;
   otherwise, set F_z = X_z = 0, z = z - 1 and repeat 4.
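The change-over itself only has to expand the sparse state list into the dense vectors of P1.a over the window of weights [A_m, B_m]; a Python sketch (ours; dictionaries stand in for the core-resident vectors):

    def p5_change_over(states, A, B):
        # Expand the sparse list (sorted by weight) into dense vectors.
        v = states[-1][0]                     # heaviest state
        F = {v: states[-1][1]}
        X = {v: states[-1][2]}
        j = len(states) - 2
        for z in range(min(v - 1, B), A - 1, -1):
            while j >= 0 and states[j][0] > z:
                j -= 1                        # heaviest state of weight <= z
            if j >= 0:
                F[z], X[z] = states[j][1], states[j][2]
            else:
                F[z], X[z] = 0, 0             # step 4 of P5
        return F, X, v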
It is worthwhile to point out that, from a computational point of view, it is possible to store the vectors (F_z) and (X_z) in the same core locations as the vectors (F2_j) and (X2_j), because in procedure P5 the value of j is always less than or equal to z. In addition, it must be noted that the technique presented in this section obviously cannot be applied with good results to the Horowitz and Sahni algorithm.
A further improvement to the new algorithm can be obtained through application of Theorem 1. In fact, if at the m-th stage F2_{s_m} (or F_v) is equal to the upper bound of the solution to the original problem computed according to Theorem 1, the algorithm can be stopped.

7. Transformation of the Problem

The solution to the knapsack problem defined by (1), (2), (3), (4) and (5) can be given by:

$P = \sum_{i=1}^{n} p_i - \bar{P}$

where $\bar{P}$ is the solution to the knapsack problem:

minimize $\bar{P} = \sum_{i=1}^{n} p_i \bar{x}_i$   (8)

subject to

$\sum_{i=1}^{n} w_i \bar{x}_i \ge \bar{W} = \sum_{i=1}^{n} w_i - W$   (9)

$\bar{x}_i = 1 - x_i = 0, 1 \quad (i = 1, \ldots, n)$   (10)


Because of the assumptions made for the original knapsack problem (1), (2), (3), (4) and (5), the transformed problem (8), (9) and (10) is feasible and non-trivial.
It can easily be seen that the efficiency of the dynamic programming algorithms presented in the previous sections depends greatly on the value of W, so the following transformation rule can be employed in order to decrease the computing times and the storage requirements of such algorithms:
If $W > \bar{W}$, solve the transformed problem (8), (9) and (10) instead of the original problem (1), (2) and (3).
It is worthwhile to apply the previous rule to the problem obtained after execution of the reduction procedure mentioned in Section 1, because the values of W and $\bar{W}$ for the reduced problem are generally different from the original ones.
Obviously all the procedures previously presented can easily be modified for the solution of the transformed problem; in what follows, the modified procedures will be called the same as the original ones.
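As an illustration (a sketch under our conventions, not the paper's implementation), the transformed problem can be solved by a dynamic program over the smaller capacity $\bar{W}$, with the covered weight capped at $\bar{W}$ so that the state space never exceeds $\bar{W}$ + 1 entries:

    def solve_by_transformation(p, w, W):
        # Solve (8)-(10): cheapest subset whose weight reaches at least W_bar,
        # then map back through P = sum(p) - P_bar.
        W_bar = sum(w) - W
        INF = float("inf")
        g = [0] + [INF] * W_bar              # g[z] = min profit covering weight z
        for pi, wi in zip(p, w):
            for z in range(W_bar, 0, -1):    # downward scan: each item used once
                cand = g[max(z - wi, 0)] + pi
                if cand < g[z]:
                    g[z] = cand
        return sum(p) - g[W_bar]

On the example below, $\bar{W}$ = 101 and the function returns 213 - 74 = 139, in agreement with Fig. 5.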

Example:
For the example previously considered, Fig. 5 gives the results corresponding to the application of procedure P2 to the transformed problem ($\bar{W}$ = 292 - 191 = 101 < W = 191). The total number of states is 23.

m = 1        m = 2        m = 3        m = 4        m = 5
(i = 5)      (i = 4)      (i = 3)      (i = 2)      (i = 1)
L_j   F_j    L_j   F_j    L_j   F_j    L_j   F_j    L_j   F_j
21    13     21    13     21    13     21    13     21    13
             87    61     44    33     44    33     44    33
             108   74     65    46     60    45     60    45
                          87    61     65    46     65    46
                          108   74     81    58     81    58
                                       87    61     87    61
                                                    108   74

Fig. 5
[Table 1: definitions of the random data sets and average (maximum) solution times, in milliseconds, of the six algorithms; the table is not legible in this copy.]

8. Computational Results
The performance of the new algorithms proposed in the previous sections
has been compared with that of the most efficient branch and bound and
dynamic programming methods. The following algorithms have been considered:
BBHS: branch and bound algorithm of Horowitz and Sahni [4];
BBMT: branch and bound algorithm of Martello and Toth [8];
DPHS: dynamic programming algorithm of Horowitz and Sahni [4];
DPT1: algorithm of Section 6 not utilizing the fathoming criterion (with R = 4);
DPT2: algorithm of Section 6 utilizing the fathoming criterion (with R = 6);
DPT3: algorithm DPT2 applied to the problem obtained according to the transformation rule of Section 7.
All the algorithms have been coded in FORTRAN IV and run on a CDC 6600
computer after execution of the reduction procedure presented in [12].
Several uniformly random data sets have been considered to compare the
efficiency of the above-mentioned algorithms. The data sets are described in
Table 1; for each of them three values of n have been considered (n= 50, 100,
200). The columns of Table 1 give the average times and, in parentheses, the
maximum times relative to the six algorithms; all the times are expressed in
milliseconds and include the times required for the sorting of the variables and
executing the reduction procedure. Each value given in Table 1 has been
computed over 200 problems. Whenever the time-limit assigned to each data
set (260 seconds) was not sufficient to solve all the 600 problems, the average
and maximum times are given only if the number of solved problems is
significant; otherwise, only this number is given.
The results given in Table 1 show that for "not hard" problems (data sets A, B, C, I, J, K, L), the branch and bound algorithms, and mainly BBMT, are
more efficient than the dynamic programming methods. On the contrary, when
"hard" problems are considered (data sets D, E, F, G, H) the dynamic programming
procedures become much faster than the branch and bound algorithms.
Of the dynamic programming methods, the most efficient are clearly DPT1
and DPT2. The utilization of the fathoming criterion (algorithm DPT2) leads
to an improvement for data sets J and K and above all for data sets G and H;
on the contrary for data sets D and E algorithm DPT1 is better than DPT2.
The excellent performance of DPT2 in solving data sets G and H probably depends on the fact that the elements having the largest values of w_i are considered in the first stages, so good values of the lower bounds LB_m are obtained early; in addition the last stages, considering the elements having the smallest values of w_i, have high values of A_m, so only states with high total weights are present and, consequently, the corresponding upper bounds are generally low.
The bad performance of the dynamic programming methods for data sets J and K probably depends on the fact that the reduction procedure does not work well with such kinds of data sets, as has been shown in [12], and so problems with a high number of variables are to be solved. The branch and bound algorithms are not affected by this phenomenon because, as has been shown in [9], their performance depends much less on the number of variables remaining after execution of the reduction procedure than does the performance of the dynamic programming methods.
The application of the transformation rule of Section 7 (algorithm DPT3) gives an improvement for data sets A, B, E, H, J and K where, after execution of the reduction procedure, the value of W is either generally (data sets E and H) or occasionally (data sets A, B, J and K) greater than the value of $\bar{W}$.

References

[1] Ahrens, J. H., Finke, G.: Merging and sorting applied to the zero-one Knapsack problem. Operations Research 23 (1975).
[2] Barr, R. S., Ross, G. T.: A linked list data structure for a binary Knapsack algorithm. Research Report CCS 232, Center for Cybernetic Studies, University of Texas (1975).
[3] Greenberg, H., Hegerich, R. L.: A branch search algorithm for the Knapsack problem. Management Science 16 (1970).
[4] Horowitz, E., Sahni, S.: Computing partitions with applications to the Knapsack problem. J. ACM 21 (1974).
[5] Ingargiola, G. P., Korsh, J. F.: A reduction algorithm for zero-one single Knapsack problems. Management Science 20 (1973).
[6] Kolesar, P. J.: A branch and bound algorithm for the Knapsack problem. Management Science 13 (1967).
[7] Magazine, M., Nemhauser, G., Trotter, L.: When the greedy solution solves a class of Knapsack problems. Operations Research 23 (1975).
[8] Martello, S., Toth, P.: An upper bound for the zero-one Knapsack problem and a branch and bound algorithm. European Journal of Operational Research 1 (1977).
[9] Martello, S., Toth, P.: The 0-1 Knapsack problem, in: Combinatorial Optimization (Christofides, N., Mingozzi, A., Sandi, C., Toth, P., eds.). London: J. Wiley 1979.
[10] Morin, T. L., Marsten, R. E.: Branch and bound strategies for dynamic programming. Operations Research 24 (1976).
[11] Nauss, R. M.: An efficient algorithm for the 0-1 Knapsack problem. Management Science 23 (1976).
[12] Toth, P.: A new reduction algorithm for 0-1 Knapsack problems. Presented at the ORSA/TIMS Joint National Meeting, Miami (November 1976).
[13] Zoltners, A. A.: A direct descent binary Knapsack algorithm. J. ACM 25 (1978).

Prof. Dr. P. Toth
Istituto di Automatica
University of Bologna
Viale Risorgimento, 2
I-40136 Bologna
Italy
