Proceedings of the Princeton Symposium on Mathematical Programming

by Harold W. Kuhn

Hardcover

$336.00 

Overview

This volume contains thirty-three selected general research papers devoted to the theory and application of the mathematics of constrained optimization, including linear programming and its extensions to convex programming, general nonlinear programming, integer programming, and programming under uncertainty.

Originally published in 1971.

The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.


Product Details

ISBN-13: 9780691647463
Publisher: Princeton University Press
Publication date: 04/19/2016
Series: Princeton Legacy Library, #1547
Pages: 628
Product dimensions: 7.10(w) x 10.10(h) x 1.50(d)

Read an Excerpt

Proceedings of the Princeton Symposium on Mathematical Programming


By Harold W. Kuhn

PRINCETON UNIVERSITY PRESS

Copyright © 1970 Princeton University Press
All rights reserved.
ISBN: 978-0-691-08088-8



CHAPTER 1

TWO METHODS OF DECOMPOSITION FOR LINEAR PROGRAMMING

J. Abadie and M. Sakarovitch

I. Introduction

Consider a linear program having the following structure:

(P):  minimize f = cx + dy + ... + ez

      subject to  Px = p
                  Qy = q
                    ...
                  Rz = r
                  Ax + By + ... + Cz = h
                  x ≥ 0, y ≥ 0, ..., z ≥ 0

where x is an n1-vector, y is an n2-vector, ..., z is an nt-vector;

p is an m1-vector, q is an m2-vector, ..., r is an mt-vector;

h is an m-vector;

P is an m1 × n1 matrix, Q is an m2 × n2 matrix, ..., R is an mt × nt matrix;

A is an m × n1 matrix, B is an m × n2 matrix, ..., C is an m × nt matrix;

c is an n1 row-vector, d is an n2 row-vector, ..., e is an nt row-vector.


Such linear programs are frequently encountered. They may represent, for instance, a decentralized economy where scarce resources have to be shared by otherwise independent units. The decomposition principle of Dantzig and Wolfe [1], [2] replaces the solution of such a structured linear program by the successive solution of several linear programs of limited size.

The principle of the methods to be presented here will be sketched, without loss of generality, for the case t = 2:

(P):  minimize f = cx + dy

      subject to  Px = p
                  Qy = q
                  Ax + By = h
                  x ≥ 0, y ≥ 0

The initial idea of these methods consists in partitioning the "scarce resource vector" h into a + b = h, and in solving the subproblems:

(P'):  minimize cx subject to Px = p, Ax = a, x ≥ 0

(P"):  minimize dy subject to Qy = q, By = b, y ≥ 0

Clearly there exist "good" partitions of h such that the union of the solutions of the two subproblems constitutes a solution of the initial problem. Both methods consist in the search for such a good partition of h.
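To make the subproblem step concrete, here is a minimal sketch in Python (NumPy/SciPy). The function name and all data are hypothetical, not from the paper; it merely restates (P') and (P") for a trial partition a + b = h.

```python
# Minimal sketch: for a trial split a + b = h, solve
#   (P'): min cx  s.t.  Px = p, Ax = a, x >= 0
#   (P"): min dy  s.t.  Qy = q, By = b, y >= 0
# using SciPy's LP solver. Names follow the notation of the paper.
import numpy as np
from scipy.optimize import linprog

def solve_subproblems(c, d, P, p, Q, q, A, B, a, b):
    res1 = linprog(c, A_eq=np.vstack([P, A]),
                   b_eq=np.concatenate([p, a]),
                   bounds=(0, None), method="highs")
    res2 = linprog(d, A_eq=np.vstack([Q, B]),
                   b_eq=np.concatenate([q, b]),
                   bounds=(0, None), method="highs")
    # For a "good" partition, (res1.x, res2.x) is optimal for (P).
    return res1, res2
```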

It is of interest to note here that Dantzig and Wolfe's principle of decomposition was discovered as an extension of a method by Ford and Fulkerson to solve the multicommodity flow problem. Similarly the basic idea of the present methods, or at least of method I, is an extension of an alternative approach to this same multicommodity flow problem.

Notations: If X is a set, |X| will denote the cardinality of this set, i.e., the number of its elements.

Let

N1 = {1, ..., n1}, N2 = {1, ..., n2};

we will have:

I ⊂ N1, J ⊂ N2.

If a denotes a column vector, c a row vector, and A a matrix:

aI will denote the |I|-vector whose components are ai, for all i ∈ I;

cI will denote the |I|-row-vector whose components are ci, for all i ∈ I;

AI will denote the restriction of matrix A to the columns whose indices belong to I;

AJ will denote the restriction of matrix A to the rows whose indices belong to J;

AIJ will denote the restriction of matrix A to the matrix of elements Aij for i ∈ I and j ∈ J.

(P'(I)) will denote the restriction of problem (P') to its I-columns, i.e.:

(P'(I)):  minimize cI xI subject to PI xI = p, AI xI = a, xI ≥ 0,

and similarly for (P"(J)).

(P(I, J)) will denote the restriction of problem (P) to its I ∪ J columns, i.e.:

(P(I, J)):  minimize cI xI + dJ yJ

            subject to  PI xI = p
                        QJ yJ = q
                        AI xI + BJ yJ = h
                        xI ≥ 0, yJ ≥ 0.
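In code, these restrictions are plain row and column selections. A small NumPy illustration, with purely hypothetical data:

```python
import numpy as np

A = np.arange(12.0).reshape(3, 4)   # a 3 x 4 matrix (placeholder data)
c = np.array([5.0, 6.0, 7.0, 8.0])  # a row vector
I = [0, 2]                          # an index subset
J = [1, 2]                          # another index subset

c_I  = c[I]             # the |I|-row-vector with components c_i, i in I
A_I  = A[:, I]          # A restricted to the columns whose indices are in I
A_J  = A[J, :]          # A restricted to the rows whose indices are in J
A_IJ = A[np.ix_(I, J)]  # the submatrix of elements A_ij, i in I, j in J
```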

At each stage of the process, a partition of h into a + b = h is known, and the corresponding optimal solutions of (P') and (P") are computed. I (resp. J) denotes a subset of the set of basic columns in (P') (resp. (P")). Then the process can be decomposed into the two basic steps:

Step 1: Alter, if necessary, the present partition of vector h in such a way that the optimal solution of (P(I, J)) (which is in both cases the restriction of (P) to the set of columns I ∪ J) is obtained as the union of the solutions of (P') and (P").

Step 2: If the optimal solution of the restricted problem is not an optimal solution of (P), redefine the restricted problem by introducing into I ∪ J a column such that the new restricted problem has a better optimal solution (better in the sense that the objective function decreases).

Remark: For a partition of vector h corresponding to a basic optimal solution of (P), (P') and (P") together have m basic variables which are zero at their optimal solutions if (P) is nondegenerate.

This is true because at an optimal basic solution of (P), if no degeneracy occurs, exactly m + m1 + m2 variables xi and yj will be positive; but this also constitutes an optimal solution for (P') and (P"), which together contain m1 + m2 + 2m rows.

The first method is presented in part I, the second method in part II. These parts are independent and can be read in any order. A third part sketches an economic interpretation of these methods.


FIRST PART: Method I

After some preliminary remarks, an algorithm (algorithm I) will be given; finally, its finite convergence will be proved for a nondegenerate problem (P).

1. Preliminaries: Suppose that, at some step, a certain partition a + b = h has been found and that the subproblems (P') and (P") have been solved; let x̄, ȳ be their solutions, and let:

I = {i | x̄i > 0}, J = {j | ȳj > 0}.

Lemma 1. If:

(i) x̄I, ȳJ are respectively optimal solutions of (P'(I)) and (P"(J)),

(ii) ū, ū' and v̄, v̄' are corresponding optimal simplex multipliers,

(iii) ū' = v̄',

then x̄I, ȳJ is an optimal solution of (P(I, J)).


Proof: Straightforward, applying the usual Kuhn-Tucker optimality condition for linear programming.

The remark at the end of the introduction leads us to expect that the closer we are to a "good" partition of h, the more degenerate the subproblems will be. But if (P') and (P") are degenerate, the values of the optimal simplex multipliers ū' and v̄' will not be uniquely defined. We now construct a problem (Q) which will give us optimal simplex multipliers of (P'(I)) and (P"(J)) such that the distance between the vectors ū' and v̄' (in the L2 norm) is minimized:

(Q):  minimize w = ½ ‖u' − v'‖²

      subject to  u PI + u' AI = cI
                  v QJ + v' BJ = dJ

Lemma 2. Solving (Q) is equivalent to solving the linear system (S):

(1) PI λ = 0

(2) QJ μ = 0

(3) AI λ = (u' − v')T

(4) BJ μ = (v' − u')T

(5) u PI + u' AI = cI

(6) v QJ + v' BJ = dJ

where λ ∈ RI, μ ∈ RJ.

Proof: Straightforward, using the Lagrangian

L = ½ ‖u' − v'‖² − (u PI + u' AI − cI) λ − (v QJ + v' BJ − dJ) μ,

whose stationarity conditions in u, u', v, v' are exactly (1)–(4).
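To illustrate Lemma 2 numerically, the sketch below assembles (S), in the form (1)–(6) reconstructed above, into one square linear system and solves it. solve_S is a hypothetical helper, and the least-squares solve is a deliberate choice so that rank-deficient cases do not raise an error.

```python
# Sketch for Lemma 2 (hypothetical helper, based on the reconstruction
# (1)-(6) of system (S) above): assemble (S) as one square linear system
# in the unknowns (lambda, mu, u, u', v, v') and solve it numerically.
import numpy as np

def solve_S(c_I, d_J, P_I, Q_J, A_I, B_J):
    nI, nJ = P_I.shape[1], Q_J.shape[1]
    m1, m2, m = P_I.shape[0], Q_J.shape[0], A_I.shape[0]
    N = nI + nJ + m1 + m2 + 2 * m      # number of unknowns = number of equations
    S, rhs = np.zeros((N, N)), np.zeros(N)
    # Unknown layout: [lambda | mu | u | u' | v | v']
    iL  = slice(0, nI)
    iM  = slice(nI, nI + nJ)
    iU  = slice(nI + nJ, nI + nJ + m1)
    iUp = slice(nI + nJ + m1, nI + nJ + m1 + m)
    iV  = slice(nI + nJ + m1 + m, nI + nJ + m1 + m + m2)
    iVp = slice(nI + nJ + m1 + m + m2, N)
    r = 0
    S[r:r+m1, iL] = P_I; r += m1                            # (1) PI lambda = 0
    S[r:r+m2, iM] = Q_J; r += m2                            # (2) QJ mu = 0
    S[r:r+m, iL] = A_I                                      # (3) AI lambda - u' + v' = 0
    S[r:r+m, iUp] = -np.eye(m); S[r:r+m, iVp] = np.eye(m); r += m
    S[r:r+m, iM] = B_J                                      # (4) BJ mu - v' + u' = 0
    S[r:r+m, iVp] = -np.eye(m); S[r:r+m, iUp] = np.eye(m); r += m
    S[r:r+nI, iU] = P_I.T; S[r:r+nI, iUp] = A_I.T           # (5) u PI + u' AI = cI
    rhs[r:r+nI] = c_I; r += nI
    S[r:r+nJ, iV] = Q_J.T; S[r:r+nJ, iVp] = B_J.T           # (6) v QJ + v' BJ = dJ
    rhs[r:r+nJ] = d_J
    sol, *_ = np.linalg.lstsq(S, rhs, rcond=None)           # tolerant of rank defects
    return sol[iL], sol[iM], sol[iU], sol[iUp], sol[iV], sol[iVp]
```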

If at some step of the algorithm we solve (S) and find a solution λ̄, μ̄, ū', v̄', ū, v̄, two cases may occur:

(1) ū' = v̄'; then, from Lemma 1, x̄, ȳ is an optimal solution of (P(I, J)).

(2) ū' ≠ v̄'; then λ̄ ≠ 0, μ̄ ≠ 0; we define:

Δxi = −θ λ̄i for i ∈ I, Δxi = 0 for i ∉ I; Δyj = −θ μ̄j for j ∈ J, Δyj = 0 for j ∉ J. (7)

Let θ̄ be the supremum of θ such that x̄I − λ̄θ ≥ 0, ȳJ − μ̄θ ≥ 0.

Lemma 3. x̄I − λ̄θ, ȳJ − μ̄θ is a feasible solution of (P(I, J)), strictly better than x̄I, ȳJ, for all θ satisfying 0 < θ ≤ θ̄ if 0 < θ̄ < +∞, and for all θ > 0 if θ̄ = +∞. In the latter case, the infimum of the objective function in problem (P) is −∞.

Proof: Feasibility is easily checked by substitution into the constraints of (P(I, J)), using (1), (2), (3), (4).

Moreover, (5), (1), (3) imply in succession

cI λ̄ = (ū PI + ū' AI) λ̄ = ū' (ū' − v̄')T;

similarly, (6), (2), (4) imply

dJ μ̄ = v̄' (v̄' − ū')T.

We have then, for any θ in the range specified in the Lemma:

cI (x̄I − λ̄θ) + dJ (ȳJ − μ̄θ) = cI x̄I + dJ ȳJ − θ ‖ū' − v̄'‖².

The conclusion follows from the assumption ū' ≠ v̄'.
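Computationally, θ̄ is the classical ratio test of the simplex method. A one-function sketch (hypothetical helper; the tolerance is arbitrary):

```python
import numpy as np

def theta_bar(x_I, lam, y_J, mu, eps=1e-12):
    # Supremum of theta with x_I - theta*lam >= 0 and y_J - theta*mu >= 0:
    # only components whose direction coefficient is positive bound theta.
    ratios = np.concatenate([x_I[lam > eps] / lam[lam > eps],
                             y_J[mu > eps] / mu[mu > eps]])
    return ratios.min() if ratios.size else np.inf  # np.inf: f unbounded below
```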

2. Algorithm I:

Step 0: Let a + b = h be an arbitrary partition, except that the subproblems (P') and (P") must both have feasible solutions and must not be degenerate.

Solve (P') and (P").

(0.1) If no optimal solution exists for some subproblem, then the minimum value of f is −∞: terminate.

(0.2) If both subproblems have an optimal solution, then let x̄ and ȳ be respectively a basic optimal solution for (P') and for (P"). Let I = {i | x̄i > 0}, J = {j | ȳj > 0}. Go to step 1.

Step 1: Solve system (S). Let λ̄, μ̄, ū', v̄', ū, v̄ be the solution.

(1.1) If ū' = v̄', go to step 2.

(1.2) If ū' ≠ v̄', define Δx and Δy through (7). Let θ̄ be the supremum of θ such that x̄ + Δx ≥ 0, ȳ + Δy ≥ 0.

(1.2.1) If θ̄ = ∞, the infimum of f in problem (P) is −∞: terminate.

(1.2.2) If θ̄ < ∞, replace x̄I, ȳJ by x̄I − λ̄θ̄, ȳJ − μ̄θ̄, and set I = {i | x̄i > 0}, J = {j | ȳj > 0}. End of iteration. Begin next iteration at step 1.

Step 2: Let γ = min {ci − ū Pi − ū' Ai : i ∉ I},

δ = min {dj − v̄ Qj − v̄' Bj : j ∉ J}.

(2.1) If min{γ, δ} ≥ 0, terminate: x̄, ȳ is an optimal solution of problem (P).

(2.2) (a) If γ = cs − ū Ps − ū' As ≤ δ and γ < 0, then set I = I ∪ {s}; (b) otherwise, if δ = ds − v̄ Qs − v̄' Bs < γ and δ < 0, then set J = J ∪ {s}. End of iteration. Begin next iteration at step 1.
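Pulling the steps together, here is a skeleton of one possible reading of Algorithm I, reusing the hypothetical helpers solve_subproblems, solve_S and theta_bar from the sketches above. All inputs are assumed to be NumPy arrays, the tolerances are arbitrary, and no claim is made that this reproduces the authors' implementation.

```python
import numpy as np

def algorithm_I(c, d, P, p, Q, q, A, B, h, a0, tol=1e-9, max_iter=1000):
    a, b = a0, h - a0                           # Step 0: arbitrary partition a + b = h
    res1, res2 = solve_subproblems(c, d, P, p, Q, q, A, B, a, b)
    if res1.status == 3 or res2.status == 3:    # (0.1) a subproblem is unbounded
        return None                             # min f = -infinity
    x, y = res1.x.copy(), res2.x.copy()
    I, J = np.flatnonzero(x > tol), np.flatnonzero(y > tol)
    for _ in range(max_iter):
        # Step 1: solve system (S) on the current index sets I, J
        lam, mu, u, up, v, vp = solve_S(c[I], d[J], P[:, I], Q[:, J], A[:, I], B[:, J])
        if not np.allclose(up, vp, atol=tol):   # (1.2) move along the direction of (7)
            th = theta_bar(x[I], lam, y[J], mu)
            if np.isinf(th):
                return None                     # (1.2.1) infimum of f is -infinity
            x[I] -= th * lam; y[J] -= th * mu   # (1.2.2)
            I, J = np.flatnonzero(x > tol), np.flatnonzero(y > tol)
            continue
        # Step 2: price out the columns not yet in I or J
        rc_x = c - u @ P - up @ A               # reduced costs c_i - u P_i - u' A_i
        rc_y = d - v @ Q - vp @ B
        rc_x[I] = 0.0; rc_y[J] = 0.0            # ignore columns already present
        s, t = int(rc_x.argmin()), int(rc_y.argmin())
        gamma, delta = rc_x[s], rc_y[t]
        if min(gamma, delta) >= -tol:
            return x, y                         # (2.1) x, y is optimal for (P)
        if gamma <= delta:
            I = np.append(I, s)                 # (2.2a) bring column s into I
        else:
            J = np.append(J, t)                 # (2.2b) bring column t into J
    return x, y
```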


3. Justification of the algorithm:

Lemma 4. ū' = v̄' in step (1.1) if and only if the system:

(S'):  u PI + u' AI = cI, v QJ + u' BJ = dJ

has a solution.

Proof: From Lemma 2, (S) is equivalent to (Q). (Q) has a solution with wmin = 0 if and only if (S') has a solution.

Lemma 5. In step (1.2.2), one and only one variable xi or yj becomes zero for θ = θ̄.

Proof: Obvious, because of the nondegeneracy assumption on problem (P).

Lemma 6. The system (S') either is infeasible or has a unique solution.

Proof: At the very first stage of the algorithm, because of the nondegeneracy assumption on problems (P') and (P") at step 0, (5) and (6) uniquely determine u, u', v, v'.

Afterwards, either one equation (and just one, from Lemma 5) is deleted from the system (S') if it is infeasible, or a nonredundant constraint is added in step 2 if it is feasible.

Lemma 7. If at some nonterminal stage of the algorithm we pass through (1.1), i.e., if ū' = v̄', then at the next stage we have ū' ≠ v̄', i.e., we pass through (1.2).

Proof: If at some stage ū' = v̄', then the system (S') is feasible; in addition, the solution is unique from Lemma 6.

At the next stage, after the addition of a nonredundant constraint in step (2.2), the system (S') must be infeasible, and then, from Lemma 4, ū' ≠ v̄'.

Lemma 8. Let f be a convex function defined on a convex set C, and let D be a convex body (a convex set whose interior is not empty) such that the intersection C ∩ D is not empty. Define the two problems:

I. minimize f(x), subject to x ∈ C

II. minimize f(x), subject to x ∈ C ∩ D.

Let Problem I have a unique solution xI, and let xII be a solution of II. If xI ∉ D, then:

(i) f(xI) < f(xII);

(ii) xII belongs to the boundary of D.


Proof: (i) being obvious, it remains to prove (ii). Suppose xII were an interior point of D, and let V be an open neighborhood of xII excluding any boundary point of D. The point xII is optimal for the following problem:

III. minimize f(x), subject to x ∈ C ∩ V.

Since C ∩ V is a set relatively open in C, it follows that xII is locally optimal for Problem I; by convexity, it follows that xII is globally optimal for Problem I: this contradicts the uniqueness of the solution of Problem I.

Lemma 9. θ̄ > 0 at every passage through Step (1.2).

Proof: Suppose that, at some passage in Step (1.2), we have θ̄ = 0.

We can now justify the termination of the algorithm.

Suppose first that the algorithm ends up in Step (2.1). Since ū' = v̄', the solution is optimal for (P(I, J)) (see Lemma 1); since min{γ, δ} ≥ 0, the solution is optimal for (P) (apply the usual optimality criterion for linear programming).

Suppose next that the algorithm ends up in Step (1.2.1). We have found a feasible (from Lemma 3) direction of change for x and y along which the objective function decreases without bound; the infimum of f in (P) is therefore −∞. The same conclusion holds if the algorithm ends in Step (0.1).

Suppose finally that we were to cycle indefinitely: we would then pass an infinite number of times through Step (1.2). From Lemma 9, at each passage through Step (1.2), θ̄ > 0, and then either I or J decreases by one element. Thus we must also pass an infinite number of times through Step 2, and hence there will be two passages through Step 2 with the same sets of indices I, J. From one passage to the next the objective function would have decreased (see Lemmas 3, 7, 9): this is a contradiction, since at each passage we have an optimum for (P(I, J)), from Lemma 1.


4. Applications of the algorithm:

There is no conceptual difficulty in applying this algorithm to the case of more than two subproblems. For instance, for an initial problem with three subproblems:

(P):  minimize cx + dy + ez

      subject to  Px = p
                  Qy = q
                  Rz = r
                  Ax + By + Cz = h
                  x ≥ 0, y ≥ 0, z ≥ 0

the system (S) would read:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]


We see that the drawback of this method is that the system (S) is in general very large (this is true even with only two subproblems). If t is the number of subproblems, the size of (S) is:



(Continues...)

Excerpted from Proceedings of the Princeton Symposium on Mathematical Programming by Harold W. Kuhn. Copyright © 1970 Princeton University Press. Excerpted by permission of PRINCETON UNIVERSITY PRESS.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

  • Frontmatter, pg. i
  • PREFACE, pg. iii
  • CONTENTS, pg. iv
  • TWO METHODS OF DECOMPOSITION FOR LINEAR PROGRAMMING, pg. 1
  • MATRIX GENERATORS AND OUTPUT ANALYZERS, pg. 25
  • DECOMPOSITION APPROACHES FOR SOLVING LINKED PROGRAMS, pg. 37
  • LARGE SCALE SYSTEMS AND THE COMPUTER REVOLUTION, pg. 51
  • STOCHASTIC GEOMETRIC PROGRAMMING, pg. 73
  • THE CURRENT STATE OF CHANCE-CONSTRAINED PROGRAMMING, pg. 93
  • ON PROBABILISTIC CONSTRAINED PROGRAMMING, pg. 113
  • STOCHASTIC PROGRAMS WITH RECOURSE: SPECIAL FORMS, pg. 139
  • NONLINEAR ACTIVITY ANALYSIS AND DUALITY, pg. 163
  • DUALITY IN DISCRETE PROGRAMMING, pg. 179
  • INTEGER PROGRAMMING: METHODS, USES, COMPUTATION, pg. 199
  • ON RECENT DEVELOPMENTS IN INTEGER PROGRAMMING, pg. 267
  • ON MAXIMUM MATCHING, MINIMUM COVERING AND THEIR CONNECTIONS, pg. 303
  • NONLINEAR MATHEMATICAL PROGRAMS WITH BIVALENT VARIABLES, pg. 313
  • ENUMERATION ALGORITHMS FOR THE PURE AND MIXED INTEGER PROGRAMMING PROBLEM, pg. 323
  • ON NEWTON’S METHOD IN NONLINEAR PROGRAMMING, pg. 339
  • EXTENDING NEWTON'S METHOD TO SYSTEMS OF LINEAR INEQUALITIES, pg. 353
  • INTERIOR POINT METHODS FOR THE SOLUTION OF MATHEMATICAL PROGRAMMING PROBLEMS, pg. 359
  • THE SOLUTION OF A FIXED CHARGE LINEAR PROGRAMMING PROBLEM, pg. 367
  • ON CLASSES OF CONVEX AND PREEMPTIVE NUCLEI FOR N-PERSON GAMES, pg. 377
  • WHEN IS A TEAM "MATHEMATICALLY" ELIMINATED?, pg. 391
  • ON THE EXISTENCE OF OPTIMAL DEVELOPMENT PLANS, pg. 403
  • OPTIMALITY AND DUALITY IN NONLINEAR PROGRAMMING, pg. 429
  • GEOMETRIC PROGRAMMING: DUALITY IN QUADRATIC PROGRAMMING AND lp-APPROXIMATION I, pg. 445
  • CONJUGATE CONVEX FUNCTIONS IN NONLINEAR PROGRAMMING, pg. 481
  • A COMPARATIVE STUDY OF NONLINEAR PROGRAMMING CODES, pg. 487
  • MINIMIZATION OF A SEPARABLE FUNCTION SUBJECT TO LINEAR CONSTRAINTS, pg. 503
  • DIRECT HYPERCONE UNCONSTRAINED MINIMIZATION, pg. 511
  • NONLINEAR PROGRAMMING AND SECOND VARIATION SCHEMES IN CONSTRAINED OPTIMAL CONTROL PROBLEMS, pg. 523
  • ON CONTINUOUS FINITE-DIMENSIONAL CONSTRAINED OPTIMIZATION, pg. 539
  • QUADRATIC FORMS SEMI-DEFINITE OVER CONVEX CONES, pg. 551
  • APPLICATIONS OF PRINCIPAL PIVOTING, pg. 567
  • LEAST DISTANCE PROGRAMMING, pg. 583
  • MAXIMIZING STATIONARY UTILITY IN A CONSTANT TECHNOLOGY, pg. 591
  • OPTIMUM ROUTING IN CONNECTING NETWORKS OVER FINITE TIME INTERVALS, pg. 591
  • SECOND ORDER CHARACTERIZATION OF GENERALIZED POLYNOMIAL PROGRAMS, pg. 591
  • ORTHOGONALITY IN MATROIDS AND MATHEMATICAL PROGRAMMING, pg. 592
  • A THEORETICAL ANALYSIS OF INPUTS TAXATION UNDER LINEAR PROGRAMMING ASSUMPTIONS, pg. 592
  • ON THE SOLUTION OF STRUCTURED LP PROBLEMS USING GENERALIZED INVERSE METHODS, pg. 593
  • A METHOD FOR CONVEX PROGRAMMING, pg. 594
  • CONTINUOUS MATHEMATICAL PROGRAMMING UNDER LINEAR INTEGRAL CONSTRAINTS, pg. 594
  • RECENT DEVELOPMENTS IN GEOMETRIC PROGRAMMING, pg. 595
  • NONCONVEX AND CONVEX PROGRAMMING BY GENERALIZED SEQUENTIAL UNCONSTRAINED METHODS, pg. 595
  • MATHEMATICAL PROGRAMMING BY PHYSICAL ANALOGIES, pg. 595
  • THE MAX-FLOW MIN-CUT EQUALITY AND THE LENGTH-WIDTH INEQUALITY FOR REAL MATRICES, pg. 596
  • IMPLICIT ENUMERATION USING AN IMBEDDED LINEAR PROGRAM, pg. 596
  • A PRINCIPAL PIVOTING ALGORITHM FOR LINEAR AND QUADRATIC PROGRAMS, pg. 597
  • ON THE KUHN-TUCKER THEORY, pg. 597
  • PROGRAMMING IN INFINITE DIMENSIONAL SPACE AND CONTROL THEORY, pg. 599
  • A FUNCTIONAL APPROACH TO THE DESIGN AND DEVELOPMENT OF A MATHEMATICAL PROGRAMMING SYSTEM, pg. 600
  • MAGEN II, pg. 600
  • A PARTICULAR CLASS OF FINITE-TIME MARKOV-RENEWAL PROGRAMMING SITUATIONS, pg. 601
  • A DECOMPOSITION ALGORITHM FOR SHORTEST PATHS IN A NETWORK, pg. 601
  • THE DEGREE-CONSTRAINED SUBGRAPH PROBLEM, pg. 601
  • MATHEMATICAL PROGRAMMING AND PROJECT INTERRELATIONSHIPS IN CAPITAL BUDGETING, pg. 602
  • OPTIMUM DESIGN OF LONG PIPE LINE NETWORKS FOR GASES USING LINEAR PROGRAMMING, pg. 603
  • CENTRALIZATION AND DECENTRALIZATION OF DECISION MAKING - THE DOUBLE DECOMPOSITION METHOD - GENERALIZATION AND PROOF OF CONVERGENCE, pg. 603
  • NONLINEAR PROGRAMMING AND ENGINEERING DESIGN, pg. 605
  • NONLINEAR PROGRAMMING VIA ROTATIONAL DISCRIMINATION, pg. 606
  • COMPLEMENTARY SOLUTIONS FOR SYSTEMS OF LINEAR EQUATIONS, pg. 607
  • ON THE COMPLEXITY OF DISCRETE PROGRAMMING PROBLEMS, pg. 608
  • ILL-CONDITIONING OCCURRING IN PENALTY AND BARRIER FUNCTIONS, pg. 608
  • THE SHIFTING-OBJECTIVE ALGORITHM: AN ALTERNATIVE METHOD FOR SOLVING GENERAL LINEAR PROGRAMMING PROBLEMS, pg. 609
  • THE BLOCK PRODUCT DECOMPOSITION ALGORITHM, pg. 610
  • ITERATIVE SOLUTION OF LARGE, SPARSE LINEAR SYSTEMS AND RELATED TOPICS, pg. 611
  • APPROXIMATE COMPUTATIONAL SOLUTION OF NONLINEAR PARABOLIC PARTIAL DIFFERENTIAL EQUATIONS BY LINEAR PROGRAMMING, pg. 612
  • THE SHRINKING BOUNDARY ALGORITHM FOR DIOPHANTINE PROGRAMMING, pg. 612
  • APPLICATION OF NONLINEAR PROGRAMMING AND BAYESIAN STATISTICS TO THE THEORY OF THE FIRM, pg. 613
  • AN ALGORITHM FOR LINEARLY CONSTRAINED NONLINEAR ESTIMATION WITH EXTENSIONS TO GENERAL NONLINEAR PROGRAMMING, pg. 613
  • ALGORITHMS FOR MINIMAL TREES AND SEMI-STEINER TREES BASED ON THE SIMPLEX METHOD, pg. 614
  • APPLICATION OF LINEAR AND NONLINEAR PROGRAMMING IN OPTIMAL CONTROL OF NUCLEAR REACTORS, pg. 615
  • A NEW ALGORITHM FOR TRANSPORTATION AND ASSIGNMENT PROBLEMS, pg. 616
  • THE SOLUTION OF PROGRAMMING PROBLEMS BY GENERATING FUNCTIONS, pg. 616
  • ON SOLVING SOME CLASSES OF INTEGER LINEAR PROGRAMMING PROBLEMS, pg. 616
  • AN EXTENDED DUALITY THEOREM FOR CONTINUOUS LINEAR PROGRAMMING PROBLEMS, pg. 617
  • THE GUIDING PERMUTATION METHOD FOR COMBINATORIAL PROBLEMS, pg. 617
  • MINIMIZING A CONVEX FUNCTION OVER A CONVEX HULL, pg. 618
  • GAMES, LINEAR PROGRAMMING AND THE TCHEBYCHEFF APPROXIMATION, pg. 619
  • ON A PRIMAL INTEGER PROGRAMMING ALGORITHM, pg. 619
  • APPLICATIONS OF THE CONVERGENCE CONDITIONS, pg. 620


