
Annals of Operations Research 103, 135–147, 2001

© 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.


A Combined Algorithm for Fractional Programming

JIANMING SHI j.shi@ms.kuki.sut.ac.jp


School of Management, Science University of Tokyo, 500 Shimokyouku, Kuki, Saitama 346-8512, Japan
Abstract. In this paper, we present an outer approximation algorithm for solving the following problem: $\max_{x \in S}\{f(x)/g(x)\}$, where $f(x) \ge 0$ and $g(x) > 0$ are d.c. (difference of convex) functions over a convex compact subset $S$ of $\mathbb{R}^n$. Let $\varphi(\lambda) = \max_{x \in S}(f(x) - \lambda g(x))$; then the problem is equivalent to finding a root of the equation $\varphi(\lambda) = 0$. Though the monotonicity of $\varphi(\lambda)$ is well known, solving this equation is very time-consuming, because evaluating $\varphi(\lambda)$ requires maximizing the nonconcave function $f(x) - \lambda g(x)$, and even maximizing a convex function over a convex set is NP-hard. To avoid such tactics, we give a transformation under which both the objective and the feasible region become d.c. After discussing some properties, we propose a global optimization approach for finding an optimal solution of the transformed problem.
Keywords: fractional programming, cutting plane, global optimization
AMS subject classification: primary 90C32, 90C30; secondary 65K05
1. Introduction
Many optimization problems arising in engineering, economics, management science and other fields are defined as a ratio of functions, i.e., as a fractional programming problem. Due to its importance both in theory and in applications, the methods and theory of fractional programming have received a lot of attention over the past three decades. One can find the details in, for instance, [4,17,20] and the bibliographies therein. In this paper, we consider the following problem:

$$(\mathrm{P}) \quad \begin{array}{ll} \max & \dfrac{f(x)}{g(x)} \\[4pt] \text{s.t.} & x \in S, \end{array} \qquad (1.1)$$

where $f\colon \mathbb{R}^n \to \mathbb{R}$ with $f \ge 0$ and $g\colon \mathbb{R}^n \to \mathbb{R}$ with $g > 0$ are d.c. (difference of convex) functions, and $S$ is a convex, compact, nonempty subset of $\mathbb{R}^n$. It is well known that every function in $C^2$ is d.c. [10], so a quite wide class of fractional programming problems can be reduced to (P).
By the definition of a d.c. function, we suppose that $f := f_1 - f_2$ and $g := g_1 - g_2$ for some convex functions $f_i\colon \mathbb{R}^n \to \mathbb{R}$ and $g_i\colon \mathbb{R}^n \to \mathbb{R}$, $i = 1, 2$. If $f_2 \equiv 0$ and $g_2 \equiv 0$, then both $f$ and $g$ are convex. Moreover, the problem (P) with $f_1 \equiv 0$ and $g_2 \equiv 0$ is called a concave single-ratio fractional (CSF) programming problem. Though the literature on fractional programming is very rich, most of it concentrates on the CSF case, especially on the linear case. The problem (P) considered here falls into the nonconcave single-ratio category.

* This work was partially supported by Grant-in-Aid for Encouragement of Young Scientists (A) 09780415 of the Ministry of Education, Science, Culture and Sports of Japan.
A well-known strategy for solving a single-ratio problem is the so-called parametric method, which is based on finding a root of the function

$$\varphi(\lambda) := \max_{x \in S} \bigl\{ f(x) - \lambda g(x) \bigr\}.$$

The method rests on the fact that $\lambda^*$ is the optimal value of (P) if and only if $\varphi(\lambda^*) = 0$. It is easy to see that the function $\varphi(\lambda)$ has the following properties:

(1) $\varphi(\lambda)$ is continuous;
(2) $\varphi(\lambda)$ is d.c. and nonincreasing;
(3) there exist two positive points $\lambda^-$ and $\lambda^+$ such that $\varphi(\lambda^-) \ge 0$ and $\varphi(\lambda^+) \le 0$.

With the properties above, one can design various algorithms for solving the equation $\varphi(\lambda) = 0$, such as a bisection method or, under some conditions, a Newton-like method. For a given $\lambda$, the computability of $\varphi(\lambda)$ is a precondition for exploiting the method. In the case of a concave single-ratio problem, i.e., $f_1 \equiv 0$ and $g_2 \equiv 0$, the computability of $\varphi$ is clear, because $f(x) - \lambda g(x)$ remains concave for nonnegative $\lambda$; calculating $\varphi(\lambda)$ is then equivalent to maximizing a concave function over a convex set. Many researchers have obtained results by means of this method (see, e.g., [6,11,13,19]); detailed results on the quadratic case can be found in [12,18], for instance. In our case, the function $f(x) - \lambda g(x)$ may lose its concavity even for positive $\lambda$. Therefore, it is hard to calculate the value of $\varphi(\lambda)$ for a given $\lambda$, and in that sense a parametric method is a time-consuming strategy.
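To make the parametric strategy concrete in the tractable CSF case, the following Python fragment is a minimal sketch of bisection on $\varphi(\lambda) = 0$. The functions `f`, `g`, `h`, the starting point `x0` and the bracket `[lam_lo, lam_hi]` are illustrative assumptions of ours, and the inner maximization is delegated to a local solver, which suffices only when $f(x) - \lambda g(x)$ is concave.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

def phi(lam, f, g, h, x0):
    """Evaluate phi(lam) = max_{x in S} (f(x) - lam*g(x)) over S = {x | h(x) <= 0}.
    A local solver suffices only in the concave (CSF) case; in the general d.c.
    case this subproblem is itself a global optimization problem, which is
    precisely the difficulty noted above."""
    cons = NonlinearConstraint(h, -np.inf, 0.0)
    res = minimize(lambda x: -(f(x) - lam * g(x)), x0, constraints=[cons])
    return -res.fun, res.x

def parametric_bisection(f, g, h, x0, lam_lo=0.0, lam_hi=1e3, tol=1e-8):
    """Bisect on the root of phi(lam) = 0, using that phi is nonincreasing."""
    lam, x = lam_hi, np.asarray(x0, dtype=float)
    while lam_hi - lam_lo > tol:
        lam = 0.5 * (lam_lo + lam_hi)
        val, x = phi(lam, f, g, h, x0)
        if val > 0.0:
            lam_lo = lam    # phi(lam) > 0: the optimal ratio exceeds lam
        else:
            lam_hi = lam
    return lam, x           # lam approximates the optimal value of (P)
```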
In their paper [2], Charnes and Cooper introduced an epi-multiple transformation for solving linear fractional programming problems, and Schaible developed it to solve the concave single-ratio case [15,16]. Under the transformation, the objective function of a concave single-ratio fractional program becomes concave while the feasible region keeps its convexity, so a concave fractional program is transformed into a concave program that is solvable by concave programming methods. The algorithm proposed in this paper is based on the epi-multiple transformation as well. After converting the problem (P) into a d.c. maximization, we investigate some properties of the converted d.c. problem. Employing these properties, the d.c. problem is represented as a maximization of a convex function over a convex set with a reverse convex constraint. We then propose a combined algorithm built on an outer approximation and a branch-and-bound method.
This paper is organized as follows. Section 2 presents a scheme for transforming the problem (P) into a special d.c. program. Section 3 describes an outer approximation algorithm and establishes its validity and convergence. Section 4 reports the results of a numerical experiment with the proposed algorithm. A brief conclusion, which also outlines some possible extensions, is provided in section 5.
2. Solution method
Throughout this paper, for the convenience of our discussion, we suppose that the feasible region $S$ of the problem (P) is defined by a convex function $h$, i.e., $S := \{x \in \mathbb{R}^n \mid h(x) \le 0\}$.
2.1. Equivalent transformation of the problem
To transform the problem (P) into a d.c. program, we consider the epi-multiple function $F(x, \lambda)$ defined by

$$F(x, \lambda) := \begin{cases} \lambda f(\lambda^{-1} x) & \text{if } \lambda > 0, \\ 0 & \text{if } \lambda = 0,\ x = 0, \\ +\infty & \text{otherwise.} \end{cases} \qquad (2.1)$$

For $g$ and $h$, we define $G$ and $H$ similarly. The function $F$ has many interesting properties which play an important role in optimization. One can find the details, e.g., in [14].
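As a concrete reading of (2.1), the following is a direct transcription into Python (a sketch of ours; $f$ is assumed to be a finite convex function supplied as a callable):

```python
import numpy as np

def epi_multiple(f):
    """Epi-multiple F of (2.1) for a finite convex f given as a callable."""
    def F(x, lam):
        x = np.asarray(x, dtype=float)
        if lam > 0.0:
            return lam * f(x / lam)
        if lam == 0.0 and np.all(x == 0.0):
            return 0.0
        return np.inf    # outside the effective domain
    return F

# Example: for f(x) = ||x||^2 one gets F(x, lam) = ||x||^2 / lam when lam > 0.
F = epi_multiple(lambda x: float(np.dot(x, x)))
```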
Although $f(x)/g(x)$ in (1.1) is d.c. if $f(x)/g(x) \in C^2$ (see, e.g., [8]), without an explicit representation the advantageous properties of a d.c. function are hard to exploit from a computational point of view. The intention of the transformation (2.1) is to cast the problem (P) into an equivalent problem with an explicit d.c. representation.

Lemma 2.1 [14]. If $f$ is a convex (concave) function, then so is the function $F$ for $\lambda > 0$.
Proof. Let $(x, \lambda) = t(x^1, \lambda_1) + (1-t)(x^2, \lambda_2)$ for $t \in (0, 1)$, $x^i \in \mathbb{R}^n$, $\lambda_i > 0$, $i = 1, 2$. Note that $f$ is convex. Then one has

$$\begin{aligned}
F(x, \lambda) &= F\bigl(t(x^1, \lambda_1) + (1-t)(x^2, \lambda_2)\bigr) \\
&= \bigl(t\lambda_1 + (1-t)\lambda_2\bigr)\, f\!\left(\frac{t x^1 + (1-t) x^2}{t\lambda_1 + (1-t)\lambda_2}\right) \\
&= \bigl(t\lambda_1 + (1-t)\lambda_2\bigr)\, f\!\left(\frac{t\lambda_1}{t\lambda_1 + (1-t)\lambda_2}\,\frac{x^1}{\lambda_1} + \frac{(1-t)\lambda_2}{t\lambda_1 + (1-t)\lambda_2}\,\frac{x^2}{\lambda_2}\right) \\
&\le t\lambda_1 f\!\left(\frac{x^1}{\lambda_1}\right) + (1-t)\lambda_2 f\!\left(\frac{x^2}{\lambda_2}\right) \\
&= t F(x^1, \lambda_1) + (1-t) F(x^2, \lambda_2).
\end{aligned}$$

If $f$ is concave, then similarly one has

$$F\bigl(t(x^1, \lambda_1) + (1-t)(x^2, \lambda_2)\bigr) \ge t F(x^1, \lambda_1) + (1-t) F(x^2, \lambda_2).$$

This confirms the assertion. $\Box$
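As a quick numerical illustration of lemma 2.1 (our own check, not part of the paper), one can sample random convex combinations and verify the convexity inequality for the concrete choice $f(x) = \|x\|^2$:

```python
import numpy as np

# Sample random convex combinations and verify
# F(t*z1 + (1-t)*z2) <= t*F(z1) + (1-t)*F(z2) for f(x) = ||x||^2, lam > 0.
rng = np.random.default_rng(0)
F = lambda x, lam: lam * float(np.dot(x / lam, x / lam))

for _ in range(1000):
    x1, x2 = rng.normal(size=3), rng.normal(size=3)
    l1, l2 = rng.uniform(0.1, 5.0, size=2)
    t = rng.uniform()
    lhs = F(t * x1 + (1 - t) * x2, t * l1 + (1 - t) * l2)
    rhs = t * F(x1, l1) + (1 - t) * F(x2, l2)
    assert lhs <= rhs + 1e-9
```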
Now we focus on reducing (P) to a d.c. maximization. Let

$$\theta(x) := \frac{1}{g(x)} \quad \text{and} \quad y(x) := x\,\theta(x). \qquad (2.2)$$

Note that $g(x) > 0$ for all $x \in S$, so $\theta(x) > 0$ for all $x \in S$. Given a point $x \in S$, by (2.2) we obtain a point $(y, \theta) \in \mathbb{R}^{n+1}$, where $y = y(x)$ and $\theta = \theta(x)$. Denote

$$S_H := \Bigl\{ (y, \theta) \in \mathbb{R}^{n+1} \Bigm| \exists x \in S \text{ such that } \theta = \frac{1}{g(x)},\ y = x\theta \Bigr\}.$$

For each $(y, \theta) \in S_H$, we see that

$$F(y, \theta) = \theta f\!\left(\frac{y}{\theta}\right) = \theta f(x) = \frac{f(x)}{g(x)}, \qquad G(y, \theta) = \theta g\!\left(\frac{y}{\theta}\right) = \theta g(x) = 1. \qquad (2.3)$$

Denote $\underline{\theta} := 1/\max\{g(x) \mid x \in S\}$ and $\overline{\theta} := 1/\min\{g(x) \mid x \in S\}$. Then we see that

$$S_H \subseteq \bigl\{ (y, \theta) \in \mathbb{R}^{n+1} \bigm| H(y, \theta) \le 0,\ \underline{\theta} \le \theta \le \overline{\theta} \bigr\} \cap \bigl\{ (y, \theta) \bigm| G(y, \theta) - 1 \le 0 \bigr\}.$$

Now we consider the following d.c. programming problem:

$$(\mathrm{P_{d.c.}}) \quad \begin{array}{ll} \max & F(y, \theta) \\ \text{s.t.} & G(y, \theta) - 1 \le 0, \\ & H(y, \theta) \le 0, \\ & \underline{\theta} \le \theta \le \overline{\theta}. \end{array} \qquad (2.4)$$

From (2.3) and a comparison of the feasible regions of (P) and ($\mathrm{P_{d.c.}}$), it is easy to see that $\max(\mathrm{P}) \le \max(\mathrm{P_{d.c.}})$. Moreover, the following theorem shows that (P) is equivalent to ($\mathrm{P_{d.c.}}$).
Theorem 2.2. If $(y^*, \theta^*)$ is an optimal solution of ($\mathrm{P_{d.c.}}$), then $y^*/\theta^*$ is an optimal solution of (P). If $x^*$ is an optimal solution of (P), then $(x^*/g(x^*), 1/g(x^*))$ is an optimal solution of ($\mathrm{P_{d.c.}}$).
Proof. Suppose that $(y^*, \theta^*)$ is an optimal solution of the problem ($\mathrm{P_{d.c.}}$). Then $(y^*, \theta^*)$ is a feasible solution of ($\mathrm{P_{d.c.}}$). This means that $\theta^* > 0$ and $\theta^* h(y^*/\theta^*) \le 0$, due to $H(y^*, \theta^*) \le 0$. Hence $y^*/\theta^*$ is a feasible solution of the problem (P).

The maximality of $y^*/\theta^*$ on $S$ follows from the assumption that $(y^*, \theta^*)$ is an optimal solution. In fact, from the assumption we see that

$$\theta f\!\left(\frac{y}{\theta}\right) = F(y, \theta) \le F(y^*, \theta^*) = \theta^* f\!\left(\frac{y^*}{\theta^*}\right) \qquad (2.5)$$

for all $(y, \theta)$ in the feasible region of ($\mathrm{P_{d.c.}}$). Suppose that $y^*/\theta^*$ is not an optimal solution of (P); then there exists $\bar{x} \in S$ such that $f(\bar{x})/g(\bar{x}) > f(y^*/\theta^*)/g(y^*/\theta^*)$. Let $\bar{\theta} = 1/g(\bar{x})$ and $\bar{y} = \bar{\theta}\bar{x}$. Then $(\bar{y}, \bar{\theta})$ is a feasible point of ($\mathrm{P_{d.c.}}$). On the other hand, we see that $0 < G(y^*, \theta^*) \le 1$ and that

$$\bar{\theta} f\!\left(\frac{\bar{y}}{\bar{\theta}}\right) = \frac{f(\bar{x})}{g(\bar{x})} > \frac{f(y^*/\theta^*)}{g(y^*/\theta^*)} = \frac{\theta^* f(y^*/\theta^*)}{\theta^* g(y^*/\theta^*)} = \frac{\theta^* f(y^*/\theta^*)}{G(y^*, \theta^*)} \ge \theta^* f\!\left(\frac{y^*}{\theta^*}\right).$$

This contradicts (2.5).

Now we turn to the second assertion. Suppose that $x^*$ is an optimal solution of (P). Then, obviously, $f(x^*)/g(x^*) \ge f(x)/g(x)$ for all $x \in S$, and $(x^*/g(x^*), 1/g(x^*))$ is a feasible point of ($\mathrm{P_{d.c.}}$). If $(x^*/g(x^*), 1/g(x^*))$ is not an optimal solution of ($\mathrm{P_{d.c.}}$), then there exists a feasible point $(\bar{y}, \bar{\theta})$ of ($\mathrm{P_{d.c.}}$) such that $\bar{\theta} f(\bar{y}/\bar{\theta}) > f(x^*)/g(x^*)$. From the feasibility of $(\bar{y}, \bar{\theta})$, we see that $H(\bar{y}, \bar{\theta}) \le 0$ and $G(\bar{y}, \bar{\theta}) \le 1$. It follows that $\bar{y}/\bar{\theta} \in S$ from $h(\bar{y}/\bar{\theta}) \le 0$, and that $f(\bar{y}/\bar{\theta})/g(\bar{y}/\bar{\theta}) \ge \bar{\theta} f(\bar{y}/\bar{\theta})$ from $1/g(\bar{y}/\bar{\theta}) \ge \bar{\theta}$. Therefore $f(\bar{y}/\bar{\theta})/g(\bar{y}/\bar{\theta}) \ge \bar{\theta} f(\bar{y}/\bar{\theta}) > f(x^*)/g(x^*)$. This contradicts that $x^*$ is an optimal solution of (P). $\Box$

Remark. From the above proof, we see that the constraint $\underline{\theta} \le \theta \le \overline{\theta}$ of ($\mathrm{P_{d.c.}}$) can simply be replaced by $\theta > 0$.
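The correspondence of theorem 2.2 is easy to exercise numerically. The sketch below (with placeholder $f$ and $g$ of our choosing, not from the paper) lifts a point $x$ to $(y, \theta)$ and checks (2.3):

```python
import numpy as np

# Lift x to (y, theta) = (x/g(x), 1/g(x)) and check (2.3): the epi-multiples
# satisfy F(y, theta) = f(x)/g(x) and G(y, theta) = 1.
f = lambda x: float(np.dot(x, x)) + 3.0       # some f >= 0
g = lambda x: 1.0 + float(np.dot(x, x)) ** 2  # some g > 0

x = np.array([0.7, -1.2])
theta = 1.0 / g(x)
y = x * theta

F = theta * f(y / theta)
G = theta * g(y / theta)
assert np.isclose(F, f(x) / g(x)) and np.isclose(G, 1.0)
```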
From theorem 2.2, we see that ($\mathrm{P_{d.c.}}$) is equivalent to (P). To solve ($\mathrm{P_{d.c.}}$) efficiently, we investigate some of its properties.

A function $\rho\colon \mathbb{R}^n \to \mathbb{R}$ is positively homogeneous if $0 \in \operatorname{dom}(\rho)$ and $\rho(\alpha y) = \alpha \rho(y)$ for all $y$ and all $\alpha > 0$.
Lemma 2.3. $F$ is positively homogeneous.

Proof. The equation $F(0, 0) = 0$ implies that $(0, 0) \in \operatorname{dom}(F)$. For any $(y, \theta) \in \mathbb{R}^{n+1}$ and any $\alpha > 0$, we see that

$$F\bigl(\alpha(y, \theta)\bigr) = \alpha\theta f\!\left(\frac{\alpha y}{\alpha\theta}\right) = \alpha\theta f\!\left(\frac{y}{\theta}\right) = \alpha F(y, \theta). \qquad \Box$$

From the above lemma, we see that if $\alpha_2 > \alpha_1 > 0$, then $F(\alpha_2(y, \theta)) > F(\alpha_1(y, \theta))$ whenever $F(y, \theta) \ne 0$. Denote by $F_S$ the feasible region of ($\mathrm{P_{d.c.}}$) and by $\operatorname{bd}(F_S)$ the boundary of $F_S$.
Theorem 2.4. If $(y^*, \theta^*)$ is an optimal solution of ($\mathrm{P_{d.c.}}$), then $(y^*, \theta^*) \in \operatorname{bd}(F_S)$.

Proof. Suppose that $(y^*, \theta^*)$ is an optimal solution and that $(y^*, \theta^*) \notin \operatorname{bd}(F_S)$. Then there exists a neighborhood $N_\varepsilon(y^*, \theta^*)$ with radius $\varepsilon > 0$ such that $N_\varepsilon(y^*, \theta^*) \subseteq F_S$. Therefore, $(1 + \alpha_0)(y^*, \theta^*) \in N_\varepsilon(y^*, \theta^*) \subseteq F_S$, where $\alpha_0 = \varepsilon/(2\|(y^*, \theta^*)\|)$. From lemma 2.3, we see that $F((1 + \alpha_0)(y^*, \theta^*)) > F(y^*, \theta^*)$. This contradicts the maximality of $(y^*, \theta^*)$. $\Box$

Similarly, we have the following corollary.
Corollary 2.5. If $(y^*, \theta^*)$ is an optimal solution of ($\mathrm{P_{d.c.}}$), then $G(y^*, \theta^*) = 1$.

Proof. Suppose that $(y^*, \theta^*)$ is an optimal solution and that $G(y^*, \theta^*) < 1$. It follows from the definition of $\overline{\theta}$ that $\theta^* < \overline{\theta}$. Note that $G$ and $H$ are also positively homogeneous. We see that there exists a point $\alpha(y^*, \theta^*)$ with $\alpha > 1$ such that $G(\alpha(y^*, \theta^*)) \le 1$, $H(\alpha(y^*, \theta^*)) \le 0$ and $\alpha\theta^* \le \overline{\theta}$. This contradicts the maximality of $(y^*, \theta^*)$, since $F(\alpha(y^*, \theta^*)) > F(y^*, \theta^*)$. $\Box$
From lemma 2.1 we see that the function $H$ in (2.4) is convex and that both $F$ and $G$ are d.c. Indeed, denote $F_1(y, \theta) := \theta f_1(y/\theta)$ and $F_2(y, \theta) := \theta f_2(y/\theta)$; then both $F_1$ and $F_2$ are convex, and $F(y, \theta) = F_1(y, \theta) - F_2(y, \theta)$. Similarly, denote $G_1(y, \theta) := \theta g_1(y/\theta)$ and $G_2(y, \theta) := \theta g_2(y/\theta)$; then $G(y, \theta) = G_1(y, \theta) - G_2(y, \theta)$. Hence ($\mathrm{P_{d.c.}}$) can be rewritten as

$$(\mathrm{P_{d.c.}}) \quad \begin{array}{ll} \max & F_1(y, \theta) - F_2(y, \theta) \\ \text{s.t.} & G_1(y, \theta) - G_2(y, \theta) - 1 \le 0, \\ & H(y, \theta) \le 0, \\ & \underline{\theta} \le \theta \le \overline{\theta}. \end{array} \qquad (2.6)$$
By introducing two additional variables $\mu$ and $\nu$ in (2.6), one can rewrite ($\mathrm{P_{d.c.}}$) as the following problem:

$$(\mathrm{P_{main}}) \quad \begin{array}{ll} \max & F_1(y, \theta) - \mu \\ \text{s.t.} & F_2(y, \theta) - \mu \le 0, \\ & G_1(y, \theta) - 1 - \nu \le 0, \\ & G_2(y, \theta) - \nu \ge 0, \\ & H(y, \theta) \le 0, \\ & \underline{\theta} \le \theta \le \overline{\theta}. \end{array} \qquad (2.7)$$

Moreover, we denote

$$\begin{aligned}
F_S^2 &:= \bigl\{ (y, \theta, \mu, \nu) \in \mathbb{R}^{n+3} \bigm| F_2(y, \theta) - \mu \le 0 \bigr\}, \\
G_S^1 &:= \bigl\{ (y, \theta, \mu, \nu) \in \mathbb{R}^{n+3} \bigm| G_1(y, \theta) - 1 - \nu \le 0 \bigr\}, \\
G_S^2 &:= \bigl\{ (y, \theta, \mu, \nu) \in \mathbb{R}^{n+3} \bigm| G_2(y, \theta) - \nu < 0 \bigr\}, \\
H_S &:= \bigl\{ (y, \theta, \mu, \nu) \in \mathbb{R}^{n+3} \bigm| H(y, \theta) \le 0,\ \underline{\theta} \le \theta \le \overline{\theta} \bigr\}.
\end{aligned} \qquad (2.8)$$

Then the feasible region of the problem ($\mathrm{P_{main}}$) is the set $(F_S^2 \cap G_S^1 \cap H_S) \setminus G_S^2$. The constraint $(y, \theta, \mu, \nu) \notin G_S^2$ is usually called a reverse convex constraint.
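Constructing the lifted convex pieces of (2.7) from a given d.c. decomposition is mechanical; a hedged Python sketch (the callables `f1`, `f2`, `g1`, `g2`, `h` are assumed supplied):

```python
import numpy as np

# Build the lifted convex pieces of (2.7) from a d.c. decomposition
# f = f1 - f2, g = g1 - g2 and the constraint function h.  Each lifted
# function acts on z = (y_1, ..., y_n, theta, mu, nu).
def lift_all(f1, f2, g1, g2, h, n):
    split = lambda z: (np.asarray(z[:n], dtype=float), z[n], z[n + 1], z[n + 2])
    def F1(z):
        y, th, _, _ = split(z); return th * f1(y / th)
    def F2_minus_mu(z):
        y, th, mu, _ = split(z); return th * f2(y / th) - mu
    def G1_constr(z):
        y, th, _, nu = split(z); return th * g1(y / th) - 1.0 - nu
    def G2_minus_nu(z):                      # the reverse convex part
        y, th, _, nu = split(z); return th * g2(y / th) - nu
    def H(z):
        y, th, _, _ = split(z); return th * h(y / th)
    return F1, F2_minus_mu, G1_constr, G2_minus_nu, H
```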
To establish an outer approximation for ($\mathrm{P_{main}}$), we need the following assumptions:

(i) $\operatorname{int}(\Omega_S) \ne \emptyset$, where $\Omega_S := (F_S^2 \cap G_S^1 \cap H_S) \setminus G_S^2$ is the feasible region of ($\mathrm{P_{main}}$);

(ii) a point $(y^0, \theta^0, \mu^0, \nu^0) \in \operatorname{int}(G_S^2 \cap F_S^2 \cap G_S^1 \cap H_S)$ is available.

Assumption (i) is a natural one for a global optimization algorithm. Assumption (ii) is needed for an outer approximation algorithm dealing with a reverse convex constraint.
Denote $\Omega_S^{G_2} := F_S^2 \cap G_S^1 \cap H_S$. From the convexity of the functions $F_2$, $G_1$ and $H$, the set $\Omega_S^{G_2}$ is convex. Suppose that $\widetilde{F}$ is a convex function such that $\{(y, \theta, \mu, \nu) \in \mathbb{R}^{n+3} \mid \widetilde{F}(y, \theta, \mu, \nu) \le 0\} = \Omega_S^{G_2}$.
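One concrete choice of $\widetilde{F}$, stated here as our own remark, is the pointwise maximum of the convex functions defining $\Omega_S^{G_2}$, which is convex and has exactly $\Omega_S^{G_2}$ as its zero-sublevel set:

```python
# A concrete candidate for the function F~ above: the pointwise max of the
# convex constraint functions defining Omega_S^{G2}.  theta_lo and theta_hi
# stand for the bounds on theta; F2_minus_mu, G1_constr and H are the
# callables built in the previous sketch.
def F_tilde(z, F2_minus_mu, G1_constr, H, theta_lo, theta_hi, n):
    theta = z[n]
    return max(F2_minus_mu(z), G1_constr(z), H(z),
               theta_lo - theta, theta - theta_hi)
```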
2.2. Upper bounds
In our algorithm, a sequence of conical partition sets $\{T_k\}_{k \in I}$ with $\Omega_S^{G_2} \subseteq T_k$ is generated. The sets have a nested property: $T_{i+1} \subseteq T_i$ for $i \in I$. Let an $(n+3)$-dimensional polytope $T_0$ be the initial set; we can take a simplex containing $\Omega_S^{G_2}$ as $T_0$. Supposing that $T_k$ is at hand at step $k$ of the algorithm to be proposed, we consider the following problem:

$$(\mathrm{P}_k) \quad \begin{array}{ll} \max & F(y, \theta) \\ \text{s.t.} & (y, \theta, \mu, \nu) \in T_k. \end{array} \qquad (2.9)$$

Obviously, the optimal value of the problem ($\mathrm{P}_k$) is an upper bound for ($\mathrm{P_{main}}$). Suppose that $T_k$ consists of several polyhedral convex cones $C_i^k$, $i = 1, \ldots, q_k$, having $n+3$ edges which emanate from the point $(y^0, \theta^0, \mu^0, \nu^0)$ of assumption (ii). For brevity, we denote $z^0 := (y^0, \theta^0, \mu^0, \nu^0)$ and omit the subscripts of $C$. Hence there exist $n+4$ affinely independent points $z^0, z^1, \ldots, z^{n+3}$ such that

$$C = \Bigl\{ z \in \mathbb{R}^{n+3} \Bigm| z = \sum_{i=1}^{n+3} \lambda_i \bigl(z^i - z^0\bigr) + z^0,\ \lambda_i \ge 0 \Bigr\}.$$
Without loss of generality, we assume that $\|z^i\| = 1$ for $i = 1, \ldots, n+3$. Let

$$\alpha_i := \sup\bigl\{ \alpha \in \mathbb{R} \bigm| z^0 + \alpha\bigl(z^i - z^0\bigr) \in G_S^2 \cap \Omega_S^{G_2} \bigr\} \quad \text{for } i = 1, \ldots, n+3 \qquad (2.10)$$

and $w^i := z^0 + \alpha_i (z^i - z^0)$ for $i = 1, \ldots, n+3$. Moreover, we denote the matrix $(w^1 - z^0, \ldots, w^{n+3} - z^0)$ by $U$ and

$$L_2 := \bigl\{ z \in \mathbb{R}^{n+3} \bigm| z = z^0 + U\lambda,\ e^{\top}\lambda \ge 1 \bigr\}, \qquad (2.11)$$

where $\lambda = (\lambda_1, \ldots, \lambda_{n+3})^{\top}$ and $e = (1, \ldots, 1)^{\top}$. Note that $z^0, z^1, \ldots, z^{n+3}$ are affinely independent; therefore $U$ is a nonsingular matrix, and $L_2$ can be written as

$$L_2 = \bigl\{ z \in \mathbb{R}^{n+3} \bigm| e^{\top} U^{-1} \bigl(z - z^0\bigr) \ge 1 \bigr\}. \qquad (2.12)$$
Lemma 2.6. $\Omega_S \cap C \subseteq L_2 \cap C$.

Proof. This follows from the convexity of the set $G_S^2$ and the definition of $L_2$. $\Box$
Let

$$\alpha_0 := \sup\Bigl\{ \alpha \in \mathbb{R} \Bigm| z^0 + \alpha \sum_{i=1}^{n+3} \bigl(z^i - z^0\bigr) \in \Omega_S^{G_2} \Bigr\}, \qquad \bar{z} := z^0 + \alpha_0 \sum_{i=1}^{n+3} \bigl(z^i - z^0\bigr)$$

and

$$\ell(z) := (z - \bar{z})^{\top} \partial\widetilde{F}(\bar{z}) + \widetilde{F}(\bar{z}), \qquad (2.13)$$

where $\partial\widetilde{F}(\bar{z})$ is a subgradient of the function $\widetilde{F}$ at $\bar{z}$. From the choice of $\alpha_0$, we see that the inequality $\ell(z) \le 0$ cuts off no point of the feasible region $\Omega_S$. Therefore:
Lemma 2.7. $\Omega_S \cap C \subseteq \bigl(L_2 \cap \{z \mid \ell(z) \le 0\}\bigr) \cap C$.

Proof. By the definition of $\widetilde{F}$, we see that $\Omega_S \subseteq \Omega_S^{G_2}$ and that $\Omega_S^{G_2}$ is a convex set. Therefore $\Omega_S \cap C \subseteq \Omega_S^{G_2} \cap C \subseteq \{z \mid \ell(z) \le 0\} \cap C$. Together with lemma 2.6, we obtain the assertion. $\Box$
From lemma 2.7, we can establish an upper bound for the problem ($\mathrm{P_{main}}$). Let

$$u := \max\bigl\{ F(y, \theta) \bigm| (y, \theta, \mu, \nu) \in \bigl(L_2 \cap \{z \mid \ell(z) \le 0\}\bigr) \cap C \bigr\}. \qquad (2.14)$$

Note that $(L_2 \cap \{z \mid \ell(z) \le 0\}) \cap C$ is a polytope. Therefore the value $u$ is attained at one of the vertices of $(L_2 \cap \{z \mid \ell(z) \le 0\}) \cap C$ if it is nonempty. If it is empty, let $u := -\infty$. The vertices can be calculated by Chen, Hansen and Jaumard's algorithm [3].
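For tiny instances, the vertices of a polytope $\{z \mid Az \le b\}$ can also be enumerated naively by solving all $d \times d$ subsystems of active constraints, as in the following stand-in of ours for the far more efficient adjacency-list method of [3]:

```python
import numpy as np
from itertools import combinations

def vertices(A, b, tol=1e-9):
    """Naive vertex enumeration for {z | A z <= b} in R^d (tiny instances only)."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    m, d = A.shape
    verts = []
    for rows in combinations(range(m), d):      # choose d candidate active rows
        Asub, bsub = A[list(rows)], b[list(rows)]
        if abs(np.linalg.det(Asub)) < tol:
            continue                            # not a basis
        z = np.linalg.solve(Asub, bsub)
        if np.all(A @ z <= b + tol):            # keep only feasible intersections
            verts.append(z)
    return np.array(verts)

def upper_bound(A, b, objective):
    """Upper bound (2.14): the objective maximized over the polytope's vertices."""
    V = vertices(A, b)
    return -np.inf if V.size == 0 else max(objective(v) for v in V)
```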
A lower bound can be obtained by a line search as follows. Let

$$\beta_i := \sup\bigl\{ \beta \bigm| z^0 + \beta\bigl(z^i - z^0\bigr) \in \Omega_S \bigr\} \quad \text{for } i = 1, \ldots, n+3. \qquad (2.15)$$

If $\beta_i \ge \alpha_i$, then both $z^0 + \alpha_i(z^i - z^0)$ and $z^0 + \beta_i(z^i - z^0)$ are feasible points of the problem ($\mathrm{P_{main}}$). Then

$$l := \max\bigl\{ F(y, \theta) \bigm| (y, \theta, \mu, \nu) = z^0 + \beta\bigl(z^i - z^0\bigr),\ \beta \in [\alpha_i, \beta_i],\ i = 1, \ldots, n+3 \bigr\} \qquad (2.16)$$

serves as a lower bound on $\Omega_S \cap C$. Suppose that $l$ is attained at $(y_l, \theta_l, \mu_l, \nu_l)$; then by lemma 2.3 we see that $F(\gamma y_l, \gamma\theta_l) > F(y_l, \theta_l)$ for all $\gamma > 1$. Therefore

$$l' := \max\bigl\{ F(\gamma y_l, \gamma\theta_l) \bigm| \gamma(y_l, \theta_l, \mu_l, \nu_l) \in \Omega_S,\ \gamma \ge 1 \bigr\} \qquad (2.17)$$

is greater than or equal to $l$.
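The suprema in (2.15) and (2.17) are one-dimensional feasibility problems; a hedged bisection sketch (assuming a membership oracle `feasible` for $\Omega_S$ and a known feasible step `beta_lo`, e.g. $\alpha_i$):

```python
import numpy as np

def sup_feasible(z0, zi, feasible, beta_lo, beta_max=1e6, tol=1e-10):
    """Largest beta with z0 + beta*(zi - z0) in Omega_S, found by bisection.
    `feasible` is a membership oracle for Omega_S; `beta_lo` is a step that is
    already known to be feasible (e.g. alpha_i when beta_i >= alpha_i)."""
    z0, zi = np.asarray(z0, dtype=float), np.asarray(zi, dtype=float)
    lo, hi = beta_lo, beta_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(z0 + mid * (zi - z0)):
            lo = mid
        else:
            hi = mid
    return lo

# The improvement step (2.17) is the same search applied to the ray
# gamma -> gamma * (y_l, theta_l, mu_l, nu_l), starting from gamma = 1.
```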
3. Algorithm and convergence
Based on the previous discussion, we design an algorithm for solving the problem ($\mathrm{P_{main}}$) as follows. As in many existing algorithms for d.c. optimization, we use a cutting plane method to approximate the feasible region $\Omega_S$, and we use a conical partition to fulfill an exhaustive refinement process.
A general branch-and-bound algorithm (see, e.g., [9,10]) can solve the problem ($\mathrm{P_{main}}$), but it does not exploit the structure of the problem. The main difference between the existing algorithms and ours is the process of outer approximation. To approximate the feasible region $\Omega_S$, the existing algorithms create a polytope $P_S \supseteq \Omega_S$ and calculate the vertex set $V_{P_S}$ of $P_S$. The algorithms then find a point $V_{\max} \in \arg\max\{F(y, \theta) \mid (y, \theta, \mu, \nu) \in V_{P_S}\}$. If $V_{\max}$ is feasible, then it is an optimal solution; otherwise, a cutting plane which cuts off $V_{\max}$ but no feasible point is constructed. In our algorithm, a process of outer approximation for $\Omega_S$ is generated simultaneously with a sequence of upper bounds $u_k$, $k = 1, 2, \ldots$, by (2.14). Therefore it is not necessary to calculate $V_{\max}$.
Suppose that $T_k := \{C_1^k, \ldots, C_q^k\}$ is a conical partition of $\Omega_S$ at step $k$. For each $C_i^k \in T_k$, we obtain an upper bound $u_i^k$ by (2.14) and a polytope $L_2^{k,i} \cap \{z \mid \ell_i^k(z) \le 0\} \cap C_i^k$ which approximates $\Omega_S \cap C_i^k$ by lemma 2.7. A lower bound $l_i^k$ can be found by (2.16) and (2.17), the latter arising from the epi-multiple transformation; only a line search is used in both (2.16) and (2.17). After updating $u_k = \min\{u_i^k\}$ and $l_k = \max\{l_i^k\}$, the algorithm chooses one cone $C_{i_0}^k \in T_k$ as a candidate to be divided into smaller cones, and repeats the process.
Algorithm BB.

Step 0: Set a conical partition $T_0$ emanating from $z^0$ such that $\Omega_S \subseteq T_0$; $l_0 := -1$; $u_0 := +\infty$; $k := 0$.

Step 1: For each $C_i^k \in T_k$, calculate $l_i^k$ and $u_i^k$ by (2.15)–(2.17) and (2.13)–(2.14), and obtain the points $(y_i^k, \theta_i^k, \mu_i^k, \nu_i^k)$ and $(\bar{y}_i^k, \bar{\theta}_i^k, \bar{\mu}_i^k, \bar{\nu}_i^k)$ at which they are attained, respectively; set $C_i^k := L_2^{k,i} \cap \{z \mid \ell_i^k(z) \le 0\} \cap C_i^k$; if $l_i^k > l_k$ for some $i$, then $l_k := l_i^k$; if $u_i^k < u_k$ for some $i$, then $u_k := u_i^k$.

Step 2: If $u_k - l_k = 0$, then stop; otherwise set $M := \{C_i^k \mid u_i^k \ge l_k\}$, choose $C \in \{C_i^k \mid u_i^k = u_k\}$ and create a conical partition $\Delta C$ of $C$.

Step 3: $T_{k+1} := (M \setminus \{C\}) \cup \Delta C$; $l_{k+1} := l_k$; $u_{k+1} := u_k$; $k := k + 1$; go to step 1.
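The bookkeeping of algorithm BB can be summarized in the following Python skeleton (a sketch of ours: `cone_bounds` stands for the bounding procedures of section 2.2, `subdivide` for an exhaustive subdivision rule, and the cone attaining the largest upper bound is selected for subdivision, the usual best-bound rule):

```python
def algorithm_bb(T0, cone_bounds, subdivide, tol=1e-6, max_iter=10_000):
    """Branch-and-bound skeleton; cone_bounds(C) is assumed to return
    (l_i, u_i, pt_l, pt_u): the bounds of section 2.2 with attaining points."""
    best_l, best_pt = -1.0, None              # l_0 := -1 as in step 0
    cones = list(T0)
    for _ in range(max_iter):
        scored = []
        for C in cones:
            l_i, u_i, pt_l, _ = cone_bounds(C)
            if l_i > best_l:
                best_l, best_pt = l_i, pt_l   # incumbent update (step 1)
            scored.append((u_i, C))
        scored = [(u, C) for (u, C) in scored if u >= best_l]   # pruning: M
        if not scored:
            break
        u_sel, C_sel = max(scored, key=lambda t: t[0])  # best-bound selection
        if u_sel - best_l <= tol:
            break                             # upper and lower bounds have met
        cones = [C for (u, C) in scored if C is not C_sel]
        cones += list(subdivide(C_sel))       # Delta C (step 3)
    return best_l, best_pt
```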
The convergence of the above algorithm derives mainly from an exhaustive refinement process. An infinite subdivision process is called exhaustive if, for every strictly nested sequence $\{C_i^k\}_{k=1,2,\ldots}$ satisfying

– $C_i^k \in T_k$ and $C_i^{k+1} \subseteq C_i^k$ for every $k, i$;
– $C_i^k$ has $n+3$ edges with unit direction vectors $z_j^{k,i}$ ($j = 1, \ldots, n+3$) emanating from $z^0$;

there exists a vector $\bar{z}$ with $\|\bar{z}\| = 1$ such that $\lim_{k \to \infty} z_j^{k,i} = \bar{z}$ for all $j = 1, \ldots, n+3$; that is, the nested cones shrink to a single ray.

Some methods for constructing an exhaustive process can be found, e.g., in chapter VII of [10].
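A classical exhaustive rule is bisection: split a cone along the normalized midpoint of its two farthest-apart edge directions. A minimal sketch of ours, with a cone represented by its unit edge directions:

```python
import numpy as np

def bisect_cone(Z):
    """Split a cone, given by unit edge directions Z = [z1, ..., z_{n+3}],
    along the normalized midpoint of its two farthest-apart directions."""
    Z = [np.asarray(z, dtype=float) for z in Z]
    pairs = [(i, j) for i in range(len(Z)) for j in range(i + 1, len(Z))]
    p, q = max(pairs, key=lambda ij: np.linalg.norm(Z[ij[0]] - Z[ij[1]]))
    mid = Z[p] + Z[q]
    mid /= np.linalg.norm(mid)        # keep edge directions unit-norm
    C1 = list(Z); C1[p] = mid         # subcone replacing z^p
    C2 = list(Z); C2[q] = mid         # subcone replacing z^q
    return C1, C2
```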
Theorem 3.1. Suppose that the conical partition generated by algorithm BB is exhaustive. If algorithm BB does not terminate after a finite number of iterations, then every accumulation point of the sequence $\{(\bar{y}_k, \bar{\theta}_k, \bar{\mu}_k, \bar{\nu}_k)\}$ generated by the algorithm is an optimal solution of the problem ($\mathrm{P_{main}}$).
Proof. Let $(y^*, \theta^*, \mu^*, \nu^*)$ be a cluster point of $\{(\bar{y}_k, \bar{\theta}_k, \bar{\mu}_k, \bar{\nu}_k)\}_{k=1,2,\ldots}$. Assume that $\{(\bar{y}_{k_q}, \bar{\theta}_{k_q}, \bar{\mu}_{k_q}, \bar{\nu}_{k_q}) \in C_{k_q}\}_{k_q=1,2,\ldots}$ is a subsequence of $\{(\bar{y}_k, \bar{\theta}_k, \bar{\mu}_k, \bar{\nu}_k)\}_{k=1,2,\ldots}$ and that $(\bar{y}_{k_q}, \bar{\theta}_{k_q}, \bar{\mu}_{k_q}, \bar{\nu}_{k_q}) \to (y^*, \theta^*, \mu^*, \nu^*)$ as $k_q \to \infty$. From the assumption that $\{C_i^k\}$ is exhaustive, we see that $(y^*, \theta^*, \mu^*, \nu^*) \in \Omega_S^{G_2}$ and $(y^*, \theta^*, \mu^*, \nu^*) \notin G_S^2$. This implies that $(y^*, \theta^*, \mu^*, \nu^*) \in \Omega_S$ (cf. [5]). On the other hand, it follows from (2.14) that the upper bounds $u_{k_q}$ are attained at $(\bar{y}_{k_q}, \bar{\theta}_{k_q}, \bar{\mu}_{k_q}, \bar{\nu}_{k_q})$. It follows that $(y^*, \theta^*, \mu^*, \nu^*)$ is an optimal solution of ($\mathrm{P_{main}}$). $\Box$
4. Illustrative example and computational results
In this section we report some numerical results for algorithm BB. We first give an illustrative small example with $n = 1$. The problem is the following:

$$\begin{array}{ll} \max & \dfrac{100(x^2 - x + 3)}{x^4 - 3x^2 + 5} \\[4pt] \text{s.t.} & x \in \{x \mid x^2 - 16 \le 0\}. \end{array} \qquad (4.1)$$

As shown in figure 1, the problem has two locally optimal solutions. Comparing with the form of (1.1), one can read the above problem with $f(x) = x^2 - x + 3$, or

$$f_1(x) = x^2, \qquad f_2(x) = x - 3; \qquad g(x) = \frac{1}{100}\bigl(x^4 - 3x^2 + 5\bigr)$$

or

$$g_1(x) = \frac{1}{100}\bigl(x^4 + 5\bigr), \qquad g_2(x) = \frac{3}{100}x^2; \qquad S = \bigl\{x \bigm| x^2 - 16 \le 0\bigr\}.$$
Figure 1. The problem (4.1) has two locally optimal solutions.
The functions corresponding to the problem ($\mathrm{P_{main}}$) are the following:

$$F_1(y, \theta) = \frac{y^2}{\theta}, \qquad F_2(y, \theta) = y - 3\theta,$$

$$G_1(y, \theta) - 1 = \frac{1}{100}\left(\frac{y^4}{\theta^3} + 5\theta\right) - 1, \qquad G_2(y, \theta) = \frac{3}{100}\,\frac{y^2}{\theta},$$

$$H(y, \theta) = \frac{y^2}{\theta} - 16\theta,$$

and $\underline{\theta} = 0.469484$, $\overline{\theta} = 36.3636$. It is easy to calculate the values of the above functions at the point $(1, 5, 10, -0.5)$: $F_2(1, 5) - 10 = -24$, $G_1(1, 5) - 1 - (-0.5) = -0.24992$, $G_2(1, 5) - (-0.5) = 0.506$ and $H(1, 5) = -79.8$, respectively. Therefore this point $(1, 5, 10, -0.5)$ can serve as $z^0$ in algorithm BB. Algorithm BB starts with the simplex

$$T_0 = \operatorname{conv}\bigl\{(300, 300, 300, 300),\ (200, 300, 300, 300),\ (300, 200, 300, 300),\ (300, 300, 200, 300),\ (300, 300, 300, 200)\bigr\},$$

because the set $\Omega_S$ of assumption (i) is included in $T_0$. The algorithm then finds an optimal solution $(y^*, \theta^*, \mu^*, \nu^*) = (-47.2896, 35.0794, -152.5277, 1.9125)$ of the problem ($\mathrm{P_{main}}$). Therefore, from theorem 2.2, we obtain the optimal solution of problem (4.1): $x^* = y^*/\theta^* = -47.2896/35.0794 = -1.34807$, and the optimal value is $216.2773$.
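The reported figures are easy to reproduce by a dense grid search on (4.1) (a verification sketch of ours, not part of the paper's experiments):

```python
import numpy as np

# Numerical check of example (4.1) and of the reported optimum; the values
# below reproduce the paper's figures to the printed precision.
f = lambda x: x**2 - x + 3.0
g = lambda x: (x**4 - 3.0 * x**2 + 5.0) / 100.0
ratio = lambda x: f(x) / g(x)

xs = np.linspace(-4.0, 4.0, 2_000_001)
i = np.argmax(ratio(xs))
print(xs[i], ratio(xs[i]))         # ~ -1.34807 and ~ 216.2773

x_star = -1.34807
theta_star = 1.0 / g(x_star)       # ~ 35.079, the reported theta*
y_star = x_star * theta_star       # ~ -47.29, the reported y*
```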
We now report some preliminary numerical results. We coded a program in C and ran it on a Sun SPARCstation LX at the School of Management, Science University of Tokyo.
To study the behavior of algorithm BB on the problem ($\mathrm{P_{main}}$), we used the following data in our experiments: all the functions in ($\mathrm{P_{main}}$) were polynomials of degree at most 4, and the dimension $n$ was varied over $n = 2, 3, 4, 5, 6, 7$.
Figure 2. Running time vs. dimension of the problem ($\mathrm{P_{main}}$).
Figure 2 plots the running time of algorithm BB, averaged over five instances per dimension. The curve depicts an approximate function of the running time with respect to the dimension $n$ of the test problems.
5. Concluding remarks
In this paper, we have discussed a wide class of single-ratio fractional programming problems in which both the numerator and the denominator of the objective function are d.c. functions. We have proposed an algorithm for solving the problem; it works by means of a combination of a conical partition and a branch-and-bound method. The techniques proposed for finding the upper and lower bounds should be adjusted when one solves a practical problem. With some techniques for dealing with multiple reverse convex constraints, the epi-multiple transformation remains valid even if the set $S$ in (1.1) is d.c. To investigate the efficiency of the proposed algorithm, a preliminary numerical experiment was carried out on small problems. The results indicate that such problems can be solved in reasonable time if they are fairly small.
A more challenging problem is to optimize a sum of ratios:

$$\max_{x \in S} \sum_{i=1}^{m} \frac{f_i(x)}{g_i(x)},$$

where $f_i\colon \mathbb{R}^n \to \mathbb{R}$ with $f_i \ge 0$ and $g_i\colon \mathbb{R}^n \to \mathbb{R}$ with $g_i > 0$ are d.c. functions, and the subset $S$ of $\mathbb{R}^n$ is nonempty, compact and convex. When $m$ is greater than 30, no existing algorithm solves the problem within reasonable time. Some existing algorithms treat only the linear case (see, e.g., [1]); others are heuristic, e.g., [7]. The method proposed in this paper may be extended to this problem directly: every term can be reduced to a d.c. function by an epi-multiple transformation. A significant shortcoming of this approach is that for large $m$ the transformed problem is still of high dimension, growing with $m$. To overcome this, some potential subroutines based on the epi-multiple function should be studied further.
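For concreteness (our own sketch of the suggested extension, not developed in the paper), introducing $\theta_i := 1/g_i(x)$ and $y^i := \theta_i x$ for each ratio would give

$$\max \sum_{i=1}^{m} F_i(y^i, \theta_i) \quad \text{s.t.} \quad G_i(y^i, \theta_i) = 1, \quad \theta_i > 0, \quad \frac{y^1}{\theta_1} = \cdots = \frac{y^m}{\theta_m} \in S,$$

where $F_i$ and $G_i$ are the epi-multiples of $f_i$ and $g_i$; the linking constraints $y^i/\theta_i = y^j/\theta_j$ are exactly what drives the dimension of the transformed problem up with $m$.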
Acknowledgments
The author wishes to thank Professor S. Schaible for his valuable discussions. The author
is also grateful to the referees for their detailed comments and valuable suggestions.
References
[1] A. Cambini, L. Martein and S. Schaible, On maximizing a sum of ratios, Journal of Information and Optimization Sciences 10 (1989) 65–79.
[2] A. Charnes and W.W. Cooper, Programming with linear fractional functionals, Naval Research Logistics Quarterly 9 (1962) 181–186.
[3] P.C. Chen, P. Hansen and B. Jaumard, On-line and off-line vertex enumeration by adjacency lists, Operations Research Letters 10 (1991) 403–409.
[4] B.D. Craven, Fractional Programming, Sigma Series in Applied Mathematics 4 (Heldermann, 1988).
[5] Y. Dai, J. Shi and Y. Yamamoto, Global optimization problem with several reverse convex constraints and its application to out-of-roundness problem, Journal of the Operations Research Society of Japan 39 (1996) 356–371.
[6] W. Dinkelbach, On nonlinear fractional programming, Management Science 13 (1967) 492–498.
[7] J.E. Falk and S.W. Palocsay, Optimizing the sum of linear fractional functions, in: Recent Advances in Global Optimization (Kluwer Academic, 1992) pp. 221–258.
[8] P. Hartman, On functions representable as a difference of convex functions, Pacific Journal of Mathematics 9 (1959) 707–713.
[9] R. Horst and P.M. Pardalos, Handbook of Global Optimization (Kluwer Academic, 1995).
[10] R. Horst and H. Tuy, Global Optimization: Deterministic Approaches (Springer, 1993).
[11] T. Ibaraki, Parametric approaches to fractional programs, Mathematical Programming 26 (1983) 345–362.
[12] T. Ibaraki, H. Ishii, J. Iwase, T. Hasegawa and H. Mine, Algorithm for quadratic fractional programming problems, Journal of the Operations Research Society of Japan 19 (1976) 174–191.
[13] N. Megiddo, Combinatorial optimization with rational objective functions, Mathematics of Operations Research 4 (1979) 414–424.
[14] R.T. Rockafellar and R.J.-B. Wets, Variational Analysis (Springer, 1998).
[15] S. Schaible, Fractional programming: transformations, duality and algorithmic aspects, Technical Report 73-9, Department of Operations Research, Stanford University.
[16] S. Schaible, Fractional programming I: duality, Management Science 22 (1976) 858–867.
[17] S. Schaible, Fractional programming, in: Handbook of Global Optimization (Kluwer Academic, 1995) pp. 495–608.
[18] K. Sekitani, J. Shi and Y. Yamamoto, General fractional programming: min-max convex-convex quadratic case, in: APORS-Development in Diversity and Harmony (World Scientific, 1995) pp. 505–514.
[19] M. Sniedovich, A new look at fractional programming, Journal of Optimization Theory and Applications 54 (1987) 113–120.
[20] I.M. Stancu-Minasian, Fractional Programming (Kluwer Academic, 1997).
