ENTRUST
Optimization Using Trust Region Methods


ENTRUST is a MATLAB library which implements a variety of algorithms for finding a vector X which minimizes the value of a scalar function F(X), by Jeff Borggaard and Gene Cliff.

ENTRUST is a driver for the solution of an unconstrained optimization problem using line search or trust-region globalization strategies and several types of secant update strategies.

ENTRUST also includes the capability to handle least-squares problems, in which a vector X is sought which minimizes the sum of squares of the components of a nonlinear vector function R(X).

Both optimization and nonlinear least squares problems can also be solved with "box constraints", which require the solution to lie within a specified rectangular region. These constraints are incorporated into the trust-region algorithm.

The Scalar Optimization Problem

For problems in scalar optimization, we're seeking a vector X which minimizes the value of the scalar functional F(X).

Related versions of this problem try to maximize F(X), or restrict the range of acceptable X to some rectangular region. ENTRUST can handle these related problems if the user requests them.

To specify an optimization problem, we assume we have a routine that evaluates the functional F(X), its gradient G(X), and, if a Newton method is to be used, the Hessian H(X). Given a starting estimate X0 for the minimizer, we seek a vector X* which minimizes the value F(X*).

Linear and Nonlinear Least Squares

In least-squares problems, we're seeking a vector X which minimizes the Euclidean norm of a vector of functions R(X). The dimension of X is denoted by N whereas the dimension of R is denoted by M. Often M is larger, even much larger, than N.

A common source of such problems occurs in data fitting. For example, we might seek the coefficients A and B of a linear model - a line in the plane that comes closest to M pairs of data values. Here N will be 2 (the two unknown coefficients), but M, the number of data pairs, can be much larger. This generalizes to polynomial models; a polynomial of degree N-1 requires N unknown coefficients, which will play the role of the vector X. In such cases, the residual vector R(X) is actually linear in the parameters, and we have a linear least squares problem.
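For the line-fitting example, if the data pairs are (t1,y1) through (tM,yM) and the model is y = A + B*t, one natural choice of residual is

Ri(X) = A + B * ti - yi, for 1 <= i <= M

so that minimizing the sum of the squares of the residuals selects the line that comes closest to the data.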

To specify a problem in least squares, we assume we have a routine that evaluates the residual vector R(X) and its Jacobian matrix J(X). Given a starting estimate X0 of the minimizer, we seek a vector X* which minimizes the sum of the squares of the entries of R(X*).

Optimization versus Least Squares

The Optimization and Nonlinear Least Squares problems are related, though not equivalent.

If we have a least squares problem (R and J), we can create a corresponding optimization problem. We simply define

F(X) = sum ( 1 <= i <= M ) Ri(X)^2
Certainly, minimizing F will minimize the sum of squares of the entries of R.
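As a sketch of this correspondence, an FGH file can simply wrap an existing RJ file; here the wrapper and the RJ file name "test_rj" are hypothetical, and the Hessian is left empty:

      function [ f, g, H ] = rj_to_fgh ( x, flag )

      %  Hypothetical wrapper: pose a least squares problem as an
      %  optimization problem.  Assumes an RJ file "test_rj" exists.
      [ r, J ] = test_rj ( x, [] );

      f = r' * r;         %  F(X) = sum of squares of the entries of R(X).
      g = 2.0 * J' * r;   %  Gradient of F is 2 J' R.
      H = [];             %  No Hessian supplied; avoid the Newton method.

The factor of 2 in the gradient comes from differentiating the squares of the residuals.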

If we have an optimization problem (F, G and H), we can create a related least squares problem. We simply define

R(X) = G(X)
Note that for this least squares problem, we will have M=N. We minimize the norm of R(X), and if this minimum norm is zero, we have a critical point of our optimization function F. However, a critical point is not guaranteed to be a minimizer of F, although for some problems extra information can confirm that a minimizer has indeed been found.
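The reverse wrapper is just as short. This sketch assumes a hypothetical FGH file "full_fgh" that actually computes the Hessian:

      function [ r, J ] = fgh_to_rj ( x, flag )

      %  Hypothetical wrapper: pose an optimization problem as a
      %  least squares problem.  Assumes an FGH file "full_fgh"
      %  that computes the Hessian.
      [ f, g, H ] = full_fgh ( x, [] );

      r = g(:);   %  The residual R(X) = G(X), as a column vector.
      J = H;      %  The Jacobian of G(X) is the Hessian of F(X).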

Solving a Problem using ENTRUST

To solve a problem, the user must first encode the information that defines the problem.

For an optimization problem, write a MATLAB "FGH" M-file of the form


        [ f, g, H ] = fname ( x, flag )
      
which returns the values of the optimization function, gradient vector, and Hessian matrix evaluated at x. (The input value flag should generally be set to the empty value, that is, "flag=[]".) The Hessian is needed only when a Newton method is being used; otherwise, the M-file need not compute it, and can simply return an empty value.

For a least squares problem, write a MATLAB "RJ" M-file of the form


        [ r, J ] = fname ( x, flag )
      
which returns the values of the nonlinear functions and the Jacobian evaluated at x. (The input value flag should generally be set to the empty value, that is, "flag=[]".)
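For instance, a minimal RJ file for the line-fitting problem described earlier might look like the following sketch, in which the function name and the data values are made up for illustration:

      function [ r, J ] = line_rj ( x, flag )

      %  Hypothetical RJ file: fit the line y = x(1) + x(2) * t to
      %  M = 5 data pairs.  The data values are made up for
      %  illustration.
      t = [ 1; 2; 3; 4; 5 ];
      y = [ 1.1; 1.9; 3.2; 3.9; 5.1 ];

      r = x(1) + x(2) * t - y;      %  Residual vector, M entries.

      J = [ ones(size(t)), t ];     %  M by N Jacobian: column j is dR/dx(j).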

The user must also set an initial value for the solution estimate X0. If no estimate is known, it is acceptable to set X0 to zero, but note that changing the initial value can cause the optimization code to converge quickly, very slowly, or to fail to converge. In some cases, changing the initial value can mean that the program converges, but to an entirely different solution with a different optimal value.

Once the problem definition has been taken care of, it is necessary to choose a solution algorithm, and, if desired, to change some of the solution parameters from their default value. All information about the solution algorithm is contained in a single MATLAB structure called options, and any choice the user makes is recorded in this single quantity. The first thing the user should do is "zero out" this quantity, so that it starts with all default values:

options = [];

ENTRUST provides four solution algorithms for the user: 'newton', 'secant', 'steepest_descent', and 'gauss_newton'.

To choose a solution algorithm, the user sets the method component of the options structure. A typical command would be:

options.method = 'newton';

It's probably a good idea to run the problem once without specifying any of the other components of options. The results of the first run, if unsatisfactory, can then be used as a guide for adjusting some values of options to get a better result. Typical components that might be changed on a second run include the method, gradient_tolerance, max_iterations, and max_fevals.

And of course, there are a number of other parameters that may be adjusted.

Example:

To seek a minimizer for

f(x) = (x1-2)^4 + ((x1-2)*x2)^2 + (x2+1)^2
we write an FGH routine of the form:
      function [ f, g, H ] = test_fgh ( x, flag )

      %  Objective function value at x.
      f = ( x(1) - 2 )^4 ...
        + ( ( x(1) - 2 ) * x(2) )^2 ...
        + ( x(2) + 1 )^2;

      %  Gradient vector.
      g(1) = 4 * ( x(1) - 2 )^3 ...
           + 2 * ( x(1) - 2 ) * x(2)^2;

      g(2) = 2 * ( x(1) - 2 )^2 * x(2) ...
           + 2 * ( x(2) + 1 );

      %  Hessian is returned empty; no Newton method will be used.
      H = [];
      
We are setting the Hessian matrix H to an empty value. We do not intend to use a Newton method, so the Hessian is not needed.

Assuming that our starting point is (1,2), we now issue the following commands:

        fname = 'test_fgh';

        x0 = [ 1; 2 ];

        options = [];
        options.method = 'secant';

        x = entrust ( fname, x0, options );
      
Note that you can also use a function handle (created with the "at sign") to refer to the function directly:
        x = entrust ( @test_fgh, x0, options );
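
A least squares problem is solved in the same way. For instance, assuming the hypothetical RJ file "line_rj" sketched earlier, a Gauss-Newton run might look like:

        fname = 'line_rj';

        x0 = [ 0; 0 ];

        options = [];
        options.method = 'gauss_newton';

        x = entrust ( fname, x0, options );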
      

The options structure:

ENTRUST has a default solution procedure, but it allows the user to control the computation by overriding the default values of solution parameters. The solution procedure is specified in a MATLAB structure called options, which is the third input argument.

The user should always initialize options to its default value, by a command like:


        options = [];
      

Then the user can specify particular solution parameters by assigning components of options. For instance, options has a component called options.method that specifies the algorithm to be used. This defaults to 'secant'. Here is how a different algorithm can be chosen:


        options.method = 'newton';
      

The most common components to be changed include the method, gradient_tolerance, max_iterations and max_fevals. The scripts that run the examples include many cases where components are varied.
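For example, a second run that tightens the convergence test and allows more function evaluations might set (the values shown are illustrative):

        options = [];
        options.method = 'newton';
        options.gradient_tolerance = 1.0e-08;
        options.max_iterations = 50;
        options.max_fevals = 200;

        x = entrust ( fname, x0, options );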

General options:

options.free_g
Flag that determines whether or not the gradient G is calculated simultaneously with F;
default: options.free_g = 1;
options.goal
Goal of the problem, supported options include: 'minimization' and 'maximization';
default: options.goal = 'minimization';
options.method
Type of optimization method, supported options include: 'newton', 'secant', 'steepest_descent', and 'gauss_newton';
default: options.method = 'secant';
options.scale_x
A vector of length n_desvar which contains typical values for the design parameters;
default: options.scale_x = ones(n_desvar,1);
options.scale_f
A typical magnitude of the objective function;
default: options.scale_f = 1.0;
options.verbose
Output flag: 0 - no output, 1 - print output;
default: options.verbose = 1;
options.x_lower
A vector of length n_desvar which contains lower bounds for the design variables;
default: options.x_lower = -realmax;
options.x_upper
A vector of length n_desvar which contains upper bounds for the design variables;
default: options.x_upper = realmax;

Stopping criteria:

options.gradient_tolerance
Size of the gradient norm below which convergence is declared;
default: options.gradient_tolerance = 0.00001;
options.max_iterations
Maximum number of main iterations in the optimization;
default: options.max_iterations = 10;
options.max_fevals
Maximum number of calls to 'fname';
default: options.max_fevals = 30;
options.step_tolerance
Size of the change in parameter values below which convergence is declared; note that this may occur at a point that is not a local minimum;
default: options.step_tolerance = 0.00001;

Globalization criteria:

options.globalization
Method used to perform globalization, supported options include: 'line_search', 'trust_region' and 'none';
default: options.globalization = 'none';
options.max_step
Maximum 'trust_region' radius or line search step;
options.alpha
(for 'line_search' globalization) Sufficient decrease parameter for the line search;
default: options.alpha = 0.0001, see Dennis and Schnabel, p. 126;
options.tr_radius
(for 'trust_region' globalization) Initial 'trust_region' radius;
default: obtained through Cauchy step;
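
As a sketch, box constraints can be combined with trust-region globalization; the bounds shown here are illustrative values for a two-variable problem:

        options = [];
        options.method = 'newton';
        options.globalization = 'trust_region';
        options.x_lower = [ 0; 0 ];
        options.x_upper = [ 10; 10 ];

        x = entrust ( fname, x0, options );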

Examples:

A number of example problems are provided. Most are available in two versions, as an optimization "FGH" problem and as a nonlinear least squares "RJ" problem; they are listed in the "Examples and Tests" section below.

Related Data and Programs:

Note that fminsearch is a built-in MATLAB command which minimizes a scalar function of several variables using the Nelder-Mead algorithm.

ASA047, a FORTRAN77 library which minimizes a scalar function of several variables using the Nelder-Mead algorithm.

COMPASS_SEARCH, a MATLAB library which seeks the minimizer of a scalar function of several variables using compass search, a direct search algorithm that does not use derivatives.

DQED, a FORTRAN90 library which solves constrained least squares problems.

MINPACK, a FORTRAN90 library which solves systems of nonlinear equations, or the least squares minimization of the residual of a set of linear or nonlinear equations.

NELDER_MEAD, a MATLAB program which minimizes a scalar function of multiple variables using the Nelder-Mead algorithm.

NL2SOL, a FORTRAN90 library which implements an adaptive nonlinear least-squares algorithm.

PRAXIS, a FORTRAN90 library which minimizes a scalar function of several variables, without derivative information.

TEST_NLS, a FORTRAN90 library which defines test problems requiring the least squares minimization of a set of nonlinear functions.

TEST_OPT, a MATLAB library which defines test problems requiring the minimization of a scalar function of several variables.

TOMS178, a MATLAB library which optimizes a scalar functional of multiple variables using the Hooke-Jeeves method.

TOMS611, a FORTRAN90 library which optimizes a scalar functional of multiple variables.

UNCMIN, a FORTRAN77 library which seeks the minimizer of a scalar functional of multiple variables.

Author:

Jeff Borggaard,
Gene Cliff,
Virginia Tech.

Reference:

  1. John Dennis, Robert Schnabel,
    Numerical Methods for Unconstrained Optimization and Nonlinear Equations,
    SIAM, 1996,
    ISBN13: 978-0-898713-64-0,
    LC: QA402.5.D44.
  2. David Himmelblau,
    Applied Nonlinear Programming,
    McGraw Hill, 1972,
    ISBN13: 978-0070289215,
    LC: T57.8.H55.
  3. Jorge More, Burton Garbow, Kenneth Hillstrom,
    Testing unconstrained optimization software,
    ACM Transactions on Mathematical Software,
    Volume 7, Number 1, March 1981, pages 17-41.
  4. Jorge More, Burton Garbow, Kenneth Hillstrom,
    Algorithm 566: FORTRAN Subroutines for Testing unconstrained optimization software,
    ACM Transactions on Mathematical Software,
    Volume 7, Number 1, March 1981, pages 136-140.


Examples and Tests:

OPT01 is Dennis and Schnabel example 1, with M=N=2.

OPT02 is Dennis and Schnabel example 2, with M=N=2.

OPT03 is Dennis and Schnabel example 3, with M=3, N=1. It includes a parameter which allows the problem to be changed so that the optimum value of F is zero or nonzero. This demonstrates how the Gauss-Newton procedure can be made to fail.

OPT04 is the Himmelblau function, with M=2, N=2. The function has four global minima, where F=0. In the demonstration, the Newton method goes to one minimum, the secant method fails, and the Gauss-Newton method reaches a different minimum from the same initial point.

OPT05 is the Rosenbrock "banana" function, with M=N=2.

OPT06 is the extended Rosenbrock "banana" function, with M=N=an arbitrary multiple of 2.

OPT07 is the helical valley function, with M=N=3.

OPT08 is the extended Powell singular function, with M=N=an arbitrary multiple of 4.

OPT09 is the trigonometric function, with M=N=arbitrary.

OPT10 is the Wood function, with M=6 and N=4.

OPT11 is the box-constrained quartic, with M=N=3.

OPT12 is the Beale function, with M=3 and N=2. Convergence is relatively easy with the starting point [1,1], but hard to achieve with [1,4].

OPT13 is a polynomial example, with M=N=2. It has a local minimum for which F is about 5, and a global minimum for which F is 0.

OPT14 is a polynomial example, with M=N=3.

OPT15 is a test case, with M=N=2.



Last modified on 26 March 2008.