Recent Advances in Global Optimization

Hardcover

$336.00 

Overview

This book presents the papers delivered at the first U.S. conference devoted exclusively to global optimization and thus provides valuable insights into the significant research on the topic that has emerged in recent years. Held at Princeton University in May 1991, the conference brought together an interdisciplinary group of the most active developers of algorithms for global optimization in order to focus the attention of the mathematical programming community on the unsolved problems and diverse applications of this field. The main subjects addressed at the conference were advances in deterministic and stochastic methods for global optimization, parallel algorithms for global optimization problems, and applications of global optimization. Although global optimization is primarily a mathematical problem, it is relevant to several other disciplines, including computer science, applied mathematics, physical chemistry, molecular biology, statistics, physics, engineering, operations research, communication theory, and economics. Global optimization problems originate from a wide variety of mathematical models of real-world systems. Among its applications are allocation and location problems and VLSI and database design problems.

Product Details

ISBN-13: 9780691631875
Publisher: Princeton University Press
Publication date: 04/19/2016
Series: Princeton Series in Computer Science, #176
Pages: 644
Product dimensions: 6.30(w) x 9.30(h) x 1.90(d)

Read an Excerpt

Recent Advances in Global Optimization


By Christodoulos A. Floudas, Panos M. Pardalos

PRINCETON UNIVERSITY PRESS

Copyright © 1992 Princeton University Press
All rights reserved.
ISBN: 978-0-691-08740-5



CHAPTER 1

On approximation algorithms for concave quadratic programming

Stephen A. Vavasis
Department of Computer Science
Cornell University
Ithaca, NY 14853

June 3, 1991


Abstract

We consider ε-approximation schemes for concave quadratic programming. Because the existing definition of ε-approximation for combinatorial optimization problems is inappropriate for nonlinear optimization, we propose a new definition for ε-approximation. We argue that such an approximation can be found in polynomial time for fixed ε and k, where k denotes the number of negative eigenvalues. Our algorithm is polynomial in 1/ε for fixed k, and superexponential in k for fixed ε.


1 Concave quadratic programming

Quadratic programming is a nonlinear optimization problem of the following form:

minimize    (1/2) xᵀHx + hᵀx
subject to  Wx ≥ b                                    (1)

In this formulation, x is the n-vector of unknowns. The remaining variables stand for data in the problem instance: H is an n × n symmetric matrix, h is an n-vector, W is an m × n matrix, and b is an m-vector. The relation '≥' in the constraint Wx ≥ b is the usual componentwise inequality.
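To make the notation concrete, here is a minimal numpy sketch of the data in (1). The instance itself (a concave quadratic over a box) is made up purely for illustration.

    import numpy as np

    # A small instance of (1): minimize (1/2) x'Hx + h'x subject to Wx >= b.
    # The data are illustrative only; H is chosen negative definite.
    H = np.array([[-2.0, 0.0],
                  [0.0, -1.0]])              # n x n symmetric
    h = np.array([1.0, 0.0])                 # n-vector
    W = np.vstack([np.eye(2), -np.eye(2)])   # encodes 0 <= x <= 1
    b = np.array([0.0, 0.0, -1.0, -1.0])     # m-vector in Wx >= b

    def f(x):
        """Objective function of (1)."""
        return 0.5 * x @ H @ x + h @ x

    def is_feasible(x, tol=1e-9):
        """Componentwise check of the constraint Wx >= b."""
        return bool(np.all(W @ x >= b - tol))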

Quadratic programming, a generalization of linear programming, has applications in economics, planning, and many kinds of engineering design. In addition, more complicated kinds of nonlinear programming problems are often simplified into quadratic programming problems.

No efficient algorithm is known to solve the general case of (1). The lack of an efficient algorithm is not surprising, since quadratic programming is known to be NP-hard, a result due to Sahni [1974]. An NP-hardness proof appears in this paper (it follows from the discussion in Section 3). More recently, Vavasis [1990] showed that the decision version of the problem lies in NP, and hence is NP-complete.

Many avenues for addressing (1) have been pursued in the literature. For example, efficient algorithms are known for the special case in which H is positive semidefinite, known as the convex case. See Kozlov, Tarasov and Hacijan [1979] for the first polynomial-time algorithm for the convex case. See Kapoor and Vaidya [1986] or Ye and Tse [1989] for efficient interior point algorithms for this problem. Active set methods (see Gill, Murray and Wright [1981]), a commonly used class of methods for (1), are a combination of local search and heuristics.

A very successful way to address NP-hard combinatorial optimization problems has been approximation algorithms. In light of the importance of quadratic programming, it is surprising that ε-approximation algorithms have so far not been pursued for it. In this report, we investigate ε-approximation algorithms for the concave case, i.e., the case that H is negative semidefinite. Our techniques at present are not able to address the more general indefinite case (in which H has a mixture of positive and negative eigenvalues); see Section 4 for further discussion of the indefinite case. The concave case, like the general case, is NP-hard.

First it is necessary to give a definition of ε-approximation. To our knowledge, this concept has not been previously defined for nonlinear optimization in the literature (but see below), so it is necessary to come up with our own definition. We have the following proposal:


Definition 1. Consider an instance of quadratic programming written in the form (1). Let f(x) denote the objective function (1/2)xᵀHx + hᵀx. Let x* be an optimum point of the problem. We say that x° is an ε-approximate solution if there exists another feasible point x# such that

f(x°) - f(x*) ≤ ε · (f(x#) - f(x*)).

Notice that we may as well take x# in Definition 1 to be the point where the objective function is maximized. Thus, another way to interpret this definition is as follows. Let Π denote the feasible region, and let the interval [a, b] be f(Π). Then f(x°) should lie in the interval [a, a + ε(b - a)].
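Assuming oracle access to f, to a minimizer x*, and to a feasible comparison point x# (hypothetical inputs, for illustration only), Definition 1 transcribes directly into code:

    def is_eps_approximate(f, x0, x_star, x_sharp, eps):
        """Check Definition 1: f(x0) - f(x*) <= eps * (f(x#) - f(x*)).

        Taking x_sharp to be the maximizer gives the interval reading:
        f(x0) must lie in [a, a + eps*(b - a)], a = f(x*), b = f(x#).
        """
        a, b = f(x_star), f(x_sharp)
        return f(x0) - a <= eps * (b - a)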

In the case that the objective function has no upper bound on the feasible region, the maximizer is of course undefined. Our definition loses its value in this situation because any feasible point is an ε-approximation for any ε > 0.

The definition fails in the case that the objective function has no lower bound or in the case that no feasible points exist. In these cases, (1) is said to be unbounded or infeasible respectively. We should expect our approximation algorithm to return an indicator that the problem is unbounded or infeasible.

Finally, observe that any feasible point is a 1-approximation by this definition, and only the optimum is a 0-approximation. Thus, the definition makes sense only for ε in the interval (0,1).

This definition has some useful properties. First, it is insensitive to translations or dilations of the objective function. In other words, if the objective function f(x) is replaced by a new function g(x) = af(x) + b where a > 0, a vector x° that was previously an ε-approximation will continue to have that property.

A second useful property is that ε-approximation is preserved under transformations of the feasible region. The most general kind of transformation that preserves the format of (1) is an affine linear transformation. Let θ : Rⁿ → Rⁿ be an affine linear transformation, i.e., a function of the form θ(x) = V(x - x0) where V is an invertible n × n matrix. Then the problem of minimizing f(θ(y)) subject to Wθ(y) ≥ b is still a quadratic program, and, moreover, it is not hard to see that if x° is an ε-approximate solution of the original problem, then θ⁻¹(x°) is an ε-approximate solution to the transformed problem.
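Both invariance properties are easy to spot-check numerically. The following sketch, on made-up data, checks the first: replacing f by g = af + b with a > 0 scales both sides of the inequality in Definition 1 by a, leaving the ε-approximation property unchanged.

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: -float(x @ x)        # a toy concave objective
    a_scale, b_shift = 3.0, -7.0       # arbitrary a > 0 and b
    g = lambda x: a_scale * f(x) + b_shift

    x0, x_star, x_sharp = rng.normal(size=(3, 4))
    eps = 0.25
    holds_f = f(x0) - f(x_star) <= eps * (f(x_sharp) - f(x_star))
    holds_g = g(x0) - g(x_star) <= eps * (g(x_sharp) - g(x_star))
    assert holds_f == holds_g          # property preserved under g = af + b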

We now state the main theorem of this paper.


Theorem 2. Consider the concave case of (1), i.e., the case that H is negative semidefinite. Let k denote the rank of H. There is an algorithm to find an ε-approximate solution to (1) in time

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

steps, where l denotes the time to solve a linear programming problem of the same size as (1).


We remark that l grows polynomially with the size of the input. This fact was first proved by Hacijan [1979]. The best known asymptotic bound for l is due to Vaidya [1989].

The algorithm we propose is similar to algorithms that have appeared in the literature. In particular, it is very reminiscent of algorithms described in Pardalos and Rosen [1987]. Our contribution is to define a formal meaning for approximation algorithm, and then show that the algorithm achieves this bound.

The remainder of this paper is organized as follows. In Section 2 we provide the algorithm and prove the main theorem. In Section 3 we indicate why polynomial dependence on 1/ε and exponential dependence on k is expected. In Section 4 we discuss open questions raised by this work.

The definition of approximation proposed for combinatorial optimization differs from our definition and is usually stated as follows. A feasible point x° is an ε-approximation if

|f(x°) - f(x*)| ≤ ε · f(x*).

See, for example, Papadimitriou and Steiglitz [1982] for an extensive discussion of approximation for combinatorial optimization. This definition does not work for nonlinear optimization because it is not preserved when a constant is added to the objective function. In particular, the definition becomes useless in the case that f(x*) ≤ 0; for instance, if the objective is shifted so that f(x*) = 0, only an exact optimum qualifies as an ε-approximation, no matter how large ε is.

Work by Katoh and Ibaraki [1987] addresses the problem of approximation algorithms for certain kinds of quasiconcave optimization problems. Their work generalizes the kinds of allowable objective functions (i.e., they do not restrict attention to quadratic programming), but they make a number of restrictions concerning the feasible region and the range of the objective function. Their results do not seem to be directly comparable to ours.


2 Proof of the main theorem

Our starting point is a problem of the form (1) in which H is negative semidefinite with rank k. We can perform a change of basis to diagonalize H, either via an eigenvalue computation or an LDLᵀ factorization of H (see Golub and Van Loan [1989]). In either case we end up with a problem of the form

minimize    (1/2) yᵀDy + cᵀy + fᵀz
subject to  Ay + Bz ≥ b                               (2)

where, in this new formulation, D is a k × k negative definite diagonal matrix, y is a k-vector of unknowns, and z is an (n - k)-vector of unknowns; the vectors c and f and the matrices A and B are the images of h and W under the same change of basis, partitioned conformally with (y, z). Thus, the negative-definite part of the problem is confined to k variables.
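As a sketch of this reduction (using numpy's symmetric eigendecomposition rather than an LDLᵀ factorization; the function and variable names are ours):

    import numpy as np

    def diagonalize_concave_qp(H, h, W, tol=1e-10):
        """Change of basis x = Qu carrying (1) into the form (2).

        numpy.linalg.eigh returns eigenvalues in ascending order, so for
        negative semidefinite H the k negative eigenvalues come first;
        the first k transformed variables are y and the rest are z.
        """
        lam, Q = np.linalg.eigh(H)
        k = int(np.sum(lam < -tol))     # rank of H
        D = np.diag(lam[:k])            # k x k negative definite diagonal
        c_and_f = Q.T @ h               # transformed linear term
        WQ = W @ Q                      # Wx >= b becomes (WQ)u >= b
        c, f = c_and_f[:k], c_and_f[k:]
        A, B = WQ[:, :k], WQ[:, k:]
        return D, c, f, A, B, Q, k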

Now we define the function

φ(y) = min { fᵀz : z ∈ Rⁿ⁻ᵏ, Ay + Bz ≥ b }.

For a fixed choice of y, if the system Ay + Bz ≥ b has no feasible choice for z, then we adopt the convention that φ(y) = ∞. Similarly, in the case that fᵀz has no lower bound, we say that φ(y) = -∞.
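Evaluating φ at a given y amounts to a single linear program. A sketch using scipy.optimize.linprog, with the ±∞ conventions above (the helper name is ours):

    import numpy as np
    from scipy.optimize import linprog

    def phi(y, A, B, b, f):
        """phi(y) = min { f'z : Ay + Bz >= b }, with the conventions above."""
        # Rewrite Bz >= b - Ay as (-B)z <= -(b - Ay) for linprog, and lift
        # linprog's default bound z >= 0.
        res = linprog(c=f, A_ub=-B, b_ub=-(b - A @ y),
                      bounds=[(None, None)] * B.shape[1])
        if res.status == 2:             # LP infeasible
            return np.inf
        if res.status == 3:             # LP unbounded below
            return -np.inf
        return res.fun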

Thus, the original problem can now be expressed simply as minimizing

q(y) + φ(y) (3)

where

q(y) = (1/2) yᵀDy + cᵀy. (4)

In this formulation there are no constraints on y; with the conventions above for φ, minimizing (3) is equivalent to (2). In particular, for any y1 feasible for (2), q(y1) + φ(y1) is the minimum objective value of (2) over all feasible vectors of the form (y1, z).
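Reusing the phi sketch above, the reduced problem (3)-(4) becomes the unconstrained minimization of a short composite function:

    def q(y, D, c):
        """q(y) = (1/2) y'Dy + c'y, as in (4)."""
        return 0.5 * y @ D @ y + c @ y

    def reduced_objective(y, D, c, A, B, b, f):
        """The objective (3): q(y) + phi(y), with phi as sketched above."""
        return q(y, D, c) + phi(y, A, B, b, f)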


Lemma 3. The set C = {y ∈ Rᵏ : φ(y) < ∞} is convex, and on this domain, φ is a convex function.


Proof. First, notice that the set C in the lemma can be written:

C = {y ∈ Rᵏ : there exists z ∈ Rⁿ⁻ᵏ such that Ay + Bz ≥ b}.

This domain is convex, as the following argument shows. If y', y'' ∈ C, then there are z', z'' such that Ay' + Bz' ≥ b and Ay'' + Bz'' ≥ b. This means that for all λ ∈ [0, 1],

A((1 - λ)y' + λy'') + B((1 - λ)z' + λz'') = (1 - λ)(Ay' + Bz') + λ(Ay'' + Bz'') ≥ (1 - λ)b + λb = b.

This shows that (1 - λ)y' + λy'' ∈ C.

To show that φ is convex, consider φ(y') and φ(y'') for y', y'' ∈ C. We know by assumption that φ(y'), φ(y'') < ∞. We also assume (see the lemma below) that φ(y'), φ(y'') > -∞. Then the problem of computing φ(y') is equivalent to solving a linear program, and we know from linear programming theory that the minimum is achieved, say at a vector z'. In other words, there exists a z' such that φ(y') = fᵀz' and Ay' + Bz' ≥ b. The same holds for y'', i.e., there is a z'' such that φ(y'') = fᵀz'' and Ay'' + Bz'' ≥ b. Then we conclude as above that

A((1 - λ)y' + λy'') + B((1 - λ)z' + λz'') ≥ b,

so that

φ((1 - λ)y' + λy'') ≤ fᵀ((1 - λ)z' + λz'')

because φ is defined to be the minimum of expressions such as the one on the right-hand side. This last inequality is the same as

φ((1 - λ)y' + λy'') ≤ (1 - λ)φ(y') + λφ(y'').

We now show how to determine whether the original problem is unbounded.


Lemma 4. Problem (2) is unbounded if and only if

1. φ(y) = -∞ for some y ∈ C, or

2. the region C is unbounded.


Proof. First, we prove that either of the two conditions implies that (2) is unbounded. The first condition trivially implies that (2) is unbounded: the existence of any y with φ(y) = -∞ means that (2) is unbounded, as we see from examining (3).

Suppose for the other case that C is unbounded. Then it must contain a ray (because it is convex). Let us write the ray in the form {y1 + ty2 : t ≥ 0} where y2 is a nonzero vector. For large enough t, the function φ(y1 + ty2) must be linear in t. This statement is proved by noting that the solution to the linear program

minimize fTz

subject to Bz ≥ b - Ay

implicit in the definition of φ may be taken to be a basic feasible solution, and for large enough t we can assume that the choice of basis columns is independent of t.

Examining (3), we see that φ behaves linearly along the ray for large enough t, but the term (1/2)(y1 + ty2)ᵀD(y1 + ty2) is quadratic in t with a negative leading coefficient. Thus, the quadratic term dominates for t large enough, and the objective function tends to -∞ along the ray.

Now we prove that if the original problem is unbounded, at least one of the two conditions holds. Let {(y1, z1) + t(y2, z2): t ≥ 0} be a ray along which the objective function of (2) tends to -∞. Either y2 or z2 must be nonzero. Suppose y2 is nonzero; this implies that C contains the ray y1 + ty2 and hence is unbounded.


(Continues...)

Excerpted from Recent Advances in Global Optimization by Christodoulos A. Floudas, Panos M. Pardalos. Copyright © 1992 Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

Preface

On Approximation Algorithms for Concave Quadratic Programming 3

A New Complexity Result on Minimization of a Quadratic Function with a Sphere Constraint 19

Hamiltonian Cycles, Quadratic Programming, and Ranking of Extreme Points 32

Performance of Local Search in Minimum Concave-Cost Network Flow Problems 50

Solution of the Concave Linear Complementarity Problem 76

Global Solvability of Generalized Linear Complementarity Problems and a Related Class of Polynomial Complementarity Problems 102

A Continuous Approach to Compute Upper Bounds in Quadratic Maximization Problems with Integer Constraints 125

A Class of Global Optimization Problems Solvable by Sequential Unconstrained Convex Minimization 141

A New Cutting Plane Algorithm for a Class of Reverse Convex 0-1 Integer Programs 152

Global Optimization of Problems with Polynomial Functions in One Variable 165

One Dimensional Global Optimization Using Linear Lower Bounds 200

Optimizing the Sum of Linear Fractional Functions 221

Minimizing and Maximizing the Product of Linear Fractional Functions 259

Numerical Methods for Global Optimization 274

Integral Global Optimization of Constrained Problems in Functional Spaces with Discontinuous Penalty Functions 298

Rigorous Methods for Global Optimization 321

Global Optimization of Composite Laminates Using Improving Hit and Run 343

Stochastic Minimization of Lipschitz Functions 369

Topographical Global Optimization 384

Lipschitzian Global Optimization: Some Prospective Applications 399

Packet Annealing: A Deterministic Method for Global Minimization, Application to Molecular Conformation 433

Mixed-Integer Linear Programming Reformulations for Some Nonlinear Discrete Design Optimization Problems 478

Mixed-Integer Nonlinear Programming on Generalized Networks 513

Global Minima in Root Finding 543

Homotopy-Continuation Algorithm for Global Optimization 561

Space-Covering Approach and Modified Frank-Wolfe Algorithm for Optimal Nuclear Reactor Reload Design 593

A Global Optimization Approach to Software Testing 616

