
The scaling problem arises when the variables of a problem are not of the same order of magnitude, e.g. a problem of designing an electrical circuit, where one variable is measured in Farad and the other in Ohm. The value in Farad would be of magnitude $10^{-12}$ and the value in Ohm would be of magnitude $10^{6}$. This huge difference is not only an indication of bad scaling, but would also introduce severe numerical problems.

One such problem is loss of information [CGT00, p. 162]. We have that $c_k$ is measured in Farad and $r_k$ is measured in Ohm. Now we wish to calculate the norm of a step $h_k = [r_k \; c_k]^T$

$$\|h_k\|_2^2 = \left\| \begin{bmatrix} r_k \\ c_k \end{bmatrix} \right\|_2^2 = r_k^2 + c_k^2$$

We see that $r_k^2 \approx 10^{12}$ and $c_k^2 \approx 10^{-24}$, and if the computer is limited to 15 digit precision then $\|h_k\|_2^2 = r_k^2$. In this way the information from $c_k$ is lost!
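This effect is easy to reproduce; the following minimal Matlab sketch uses the magnitudes from the example above:

    rk = 1e6;             % resistance, magnitude 10^6 Ohm
    ck = 1e-12;           % capacitance, magnitude 10^-12 Farad
    hk = [rk; ck];        % step containing both variables
    norm(hk)^2 == rk^2    % returns 1 (true): ck^2 = 1e-24 vanishes
                          % next to rk^2 = 1e12 in double precision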

In the implementation of the SLP and CSLP algorithms there is an underlying assumption that the problems are well scaled. This is not an unreasonable assumption, because it is expected that qualified personnel are able to do such a rescaling of their problems fairly easily.

However, in the following we give a short outline of how such a rescaling could be done internally in an algorithm.

All numerical calculations inside an algorithm should be done in a scaled space, so a scaled variable $w$ is needed

$$S_k w = h \qquad (5.11)$$

The scaling matrix $S_k \in \mathbb{R}^{n \times n}$ is a diagonal matrix when only a scaling parallel to the axes is needed; however, it could also be something other than a diagonal matrix. In that case the scaling would also be a rotation of the scaled space. In the non-diagonal case, $S_k$ could be determined from an eigenvalue decomposition of the Hessian $H$.

From the previous example, a user could supply the algorithm with the scaling matrix $S$ as a pre-processing step.

$$S = \begin{bmatrix} 10^{6} & 0 \\ 0 & 10^{-12} \end{bmatrix} \qquad (5.12)$$

Here the subscript $k$ has been omitted, because the scaling matrix is only given in the pre-processing stage and never changed again. A scaling matrix could, however, depend on the current iterate if the problem is non-linear and has locally bad scaling.
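As a small sketch of how this scaling could be applied in practice, assuming the diagonal matrix from (5.12):

    S = diag([1e6, 1e-12]);   % scaling matrix from (5.12)
    w = [0.5; -1.3];          % well-scaled step, entries of order 1
    h = S*w;                  % the corresponding step in the original space
    w_back = S\h;             % back to the scaled space; S\h avoids forming inv(S)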

We have a model $M_k(x_k + h)$ of the problem we want to solve. Because of bad scaling we rescale the problem by transforming it into a scaled space denoted by superscript $s$

$$M_k^s(x_k + w) \simeq f(x_k + S_k w) \equiv f^s(w) \qquad (5.13)$$

We can then write a linear model of the problem

$$M_k^s(x_k + w) = f(x_k) + (g_k^s)^T w, \qquad g_k^s = \nabla f^s(0) = S_k^T \nabla f(x_k) \qquad (5.14)$$

where

$$\begin{aligned} M_k^s(x_k + w) &= f(x_k) + (g_k^s)^T w \\ &= f(x_k) + \nabla f(x_k)^T S_k w \\ &= f(x_k) + \nabla f(x_k)^T h \\ &= M_k(x_k + h) \end{aligned} \qquad (5.15)$$
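The equivalence in (5.15) can be checked numerically; a minimal Matlab sketch, reusing the scaling matrix from (5.12) and a hypothetical gradient:

    S   = diag([1e6, 1e-12]);
    g   = [2e-7; 3e11];   % hypothetical gradient of f at x_k, badly scaled
    g_s = S'*g;           % scaled gradient g_k^s = S_k^T*g, now of order 1
    w   = [0.5; -1.3];    % step in the scaled space
    h   = S*w;            % the same step in the original space
    g'*h - g_s'*w         % zero up to rounding: the two linear terms agree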


Figure 5.9: Left: The unscaled space and the unscaled trust region. Middle: The unscaled space and the scaled trust region. Right: The scaled space and the scaled trust region.

The trust region is affected by the transformation to the scaled space and back to the original space. In the scaled space the trust region subproblem is defined as

$$\min_{\|w\| \le \eta} \; M_k^s(x_k + w) \qquad (5.16)$$

and in the original space defined as

$$\min_{\|S^{-1} h\| \le \eta} \; M_k(x_k + h) \qquad (5.17)$$

We see that the trust region in the original space is no longer a square but a rectangle; however, it is still a square in the scaled space. This is shown in figure 5.9. If $S_k$ is a symmetric positive definite matrix with values off the diagonal, then we will get both a scaling and a rotation of the scaled space.
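The equivalence of the two trust region tests (5.16) and (5.17) can be sketched in Matlab, again with the diagonal scaling matrix from (5.12) and arbitrary values:

    S   = diag([1e6, 1e-12]);
    eta = 1;                % trust region radius
    w   = [0.7; 0.2];       % step in the scaled space
    h   = S*w;              % the same step in the original space
    norm(w) <= eta          % 1 (true): inside the scaled trust region
    norm(S\h) <= eta        % 1 (true): the equivalent test (5.17)
    norm(h) <= eta          % 0 (false): the plain norm ignores the scaling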

Chapter 6

Linprog

Matlab's optimization toolbox offers a wide range of tools to solve a variety of problems. The tools span from general to large-scale optimization of nonlinear functions, and the toolbox also contains tools for quadratic and linear programming.

At first, the source code in the toolbox can seem overwhelming, but it is in fact both well structured and well commented. Still, the algorithms may at first sight seem more logically complex than they ought to be. For instance, many algorithms have branching (if, switch, etc.) that depends on the calling algorithm. A straightforward explanation is that in this way code redundancy, i.e. the amount of repeated code, is reduced to a minimum. This is good seen from a maintenance perspective.

The toolbox offers many exciting possibilities and has many basic parts that one can use to build new algorithms. To list a few of the possibilities this toolbox offers:

• Linear and quadratic programming.

• Nonlinear optimization, e.g. 2-norm and minimax, with constraints.

• A flexible API that ensures that all kinds of parameters can be passed directly to the test problems.

A good overview and documentation of the toolbox is given in [Mat00], which covers the basic theory, supplemented with small tutorials. This chapter will focus on the program linprog ver. 1.22, which is the linear solver in Matlab's optimization toolbox. The version of the optimization toolbox investigated here is

Version 2.1.1 (R12.1) 18-May-2001

Currently a newer version 2.2 of the toolbox is available at the MathWorks homepage¹. This chapter is built like this: bla bla.

¹ http://www.mathworks.com/products/optimization/

6.1 How to use linprog

Linprog is a program that solves a linear programming problem. A linear programming problem seeks to minimize or maximize a linear cost function that is bounded by linear constraints. This means that the feasible region of an LP problem is convex, and since the cost function is linear, the solution of the LP problem must exist at an extreme point, or vertex, of the feasible region.

The constraints in linear programming can be either equality or inequality constraints, and hence linprog can solve the following type of problem:

$$\min_x F(x), \quad x \in \mathbb{R}^n$$

subject to

$$\begin{aligned} c_i(x) &= 0, \quad i = 1, \dots, m_e \\ c_i(x) &\le 0, \quad i = m_e + 1, \dots, m \\ a \le x &\le b \end{aligned}$$

where $a$ and $b$ are lower and upper bounds. The above problem is non-linear, so in order to get a linear programming problem all the constraints should be linearized by a first order Taylor approximation, so that we can formulate the above problem as

$$\min_x f^T x \quad \text{subject to} \quad A x \le b, \quad A_{eq} x = b_{eq}, \quad lb \le x \le ub$$

where $f = \nabla F(x)$ is a vector, and the gradients of the constraints $A_{eq}$ and $A$ are matrices.

The above problem can be solved by linprog by using the following syntax

x = linprog(f,A,b,Aeq,beq,lb,ub,x0,options)
[x,fval,exitflag,output,lambda] = linprog(...)
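As an illustration, a small hypothetical problem (the data below is made up for the example, not taken from the toolbox) could be solved like this:

    % minimize -x1 - 2*x2  subject to  x1 + x2 <= 4,
    % x1 + 3*x2 = 6  and the bounds  0 <= x <= 3
    f   = [-1; -2];
    A   = [1 1];    b   = 4;
    Aeq = [1 3];    beq = 6;
    lb  = [0; 0];   ub  = [3; 3];
    [x, fval, exitflag] = linprog(f, A, b, Aeq, beq, lb, ub)

Here exitflag is 1 when linprog has converged to a solution x with cost fval.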