

4.2.2 Linear Transformation

In this example we use the following transformation of the fine model:

$$u(x) = Cx + d = \begin{bmatrix} 1.1 & -0.2 \\ 0.2 & 0.9 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} -0.3 \\ 0.3 \end{bmatrix} \qquad (4.2.1)$$

The corresponding fine model optimizer, rounded to 4 decimals, is $x^* = [1.2718,\ 0.4951]^T$.
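A minimal sketch in Python, assuming the standard two-term Rosenbrock residual decomposition $c(z) = [10(z_2 - z_1^2),\ 1 - z_1]^T$ (consistent with the Jacobian quoted later in this section), shows how the fine model optimizer follows from (4.2.1): the fine model $f(x) = c(u(x))$ is minimized where $u(x) = z^*$, so $x^* = C^{-1}(z^* - d)$.

```python
import numpy as np

# Coarse model residuals (assumed decomposition): c(z) = [10(z2 - z1^2), 1 - z1],
# so F = ||c(z)||_2^2 is the Rosenbrock function with optimizer z* = [1, 1].
def c(z):
    return np.array([10.0 * (z[1] - z[0] ** 2), 1.0 - z[0]])

# Linear input transformation (4.2.1) defining the fine model f(x) = c(u(x))
C = np.array([[1.1, -0.2],
              [0.2,  0.9]])
d = np.array([-0.3, 0.3])

def f(x):
    return c(C @ x + d)

# f is minimized where u(x) = z*, i.e. x* = C^{-1} (z* - d)
z_star = np.array([1.0, 1.0])
x_star = np.linalg.solve(C, z_star - d)
print(np.round(x_star, 4))      # -> [1.2718 0.4951]
print(np.round(f(x_star), 10))  # -> residuals vanish at x*
```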

Figure 4.2.1: Contour plots of the Rosenbrock function: coarse model (left, axes $z_1, z_2$, optimizer $z^* = [1.0000,\ 1.0000]^T$) and fine model (right, axes $x_1, x_2$, optimizer $x^* = [1.2718,\ 0.4951]^T$)

We show the level curves of the coarse and fine models in figure 4.2.1, where the objective function is $F = \|\cdot\|_2^2$. The fine model is very similar to the coarse model (the original Rosenbrock function), and the characteristic banana shape is still present.
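The contour plots in figure 4.2.1 can be reproduced with a short script; the following sketch uses the same assumed residual decomposition and the grid range shown in the figure:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sums of squares for the coarse model F_c(z) = ||c(z)||^2 and the
# fine model F_f(x) = ||c(Cx + d)||^2, evaluated on the plot grid.
C = np.array([[1.1, -0.2], [0.2, 0.9]])
d = np.array([-0.3, 0.3])

def F(v1, v2):
    return (10.0 * (v2 - v1 ** 2)) ** 2 + (1.0 - v1) ** 2

g = np.linspace(-2.0, 2.0, 400)
X1, X2 = np.meshgrid(g, g)
U1 = C[0, 0] * X1 + C[0, 1] * X2 + d[0]   # components of u(x) = Cx + d
U2 = C[1, 0] * X1 + C[1, 1] * X2 + d[1]

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, Z, title in [(axes[0], F(X1, X2), "Coarse model"),
                     (axes[1], F(U1, U2), "Fine model")]:
    ax.contour(X1, X2, Z, levels=np.logspace(-2, 3, 12))
    ax.set_title(title)
plt.show()
```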

In all the test results with the Rosenbrock function we use the options $\varepsilon_1 = 10^{-14}$, $\varepsilon_2 = 10^{-14}$ and opts = [1e-8 1e-14 1e-14 200 1e-12]. The initial guess for the coarse model optimizer is $x^{(0)} = [-1.2,\ 1.0]^T$. We show the iteration sequences also after the algorithm has converged to the optimizer.

Effect of the Regularization

The convergence is very fast both for the case with regularization and the case without. The performances are shown in figures 4.2.2 and 4.2.3.

We notice the absence of the points of iteration 6 in figure 4.2.2, because the value 0 is not visible in the semilogarithmic plot.

Figure 4.2.2: Performance with regularization ($\|x^{(k)} - x^*\|_2$ and $F(x^{(k)}) - F(x^*)$ per iteration)

Figure 4.2.3: Performance without regularization ($\|x^{(k)} - x^*\|_2$ and $F(x^{(k)}) - F(x^*)$ per iteration)

Comparing the two runs, the convergence is a little faster without the regularization term added to the residual vector. When we solve the unregularized problem we have $n_p = 7$, and there is a possibility of an overdetermined Parameter Extraction problem from iteration 6 onwards. For this problem the underdetermined Parameter Extraction problems of the earlier iterations do not have a negative effect on the convergence rate, since we find the fine model optimizer before iteration 6.

The iteration points corresponding to figure 4.2.3 are shown in the $(x_1, x_2)$-plane with the objective function $F = \|f(x)\|_2^2$. It is noted that the first iteration point plotted is the initial guess for the coarse model optimizer. The first evaluation of the fine model is made in the point $x^{(1)}$, which is the second point plotted in figure 4.2.3.

Figure 4.2.4: Sequence of iteration points in the $(x_1, x_2)$-plane

With the Space Mapping algorithm we avoid the iteration sequence moving through the valley in many small steps.

Effect of the Normalization Factors

The next test runs are made with all normalization factors equal to 1.

The iteration sequences in figures 4.2.5-4.2.6 are almost identical with figures 4.2.2-4.2.3, and we conclude that the normalization factors have practically no effect in the Rosenbrock problem.

Figure 4.2.5: Performance with regularization and without normalization

Figure 4.2.6: Performance without regularization and without normalization

Effect of the Weighting Factors

We cannot use this problem for testing the effect of the weighting factors. The weighting factors from the Gauss-distributed weight function approach are only different from zero from iteration number 6 onwards. At this point the solution is already found.

Effect of the Number of Mapping Parameters

In this case the results are very different when we use the reduced parameter vector instead of the complete one. We test the performance of the algorithm in the following three cases:

• With regularization
• Without regularization
• Without regularization and with weighting factors

All three cases are with normalization of the residual elements. The results are shown in figures 4.2.7, 4.2.8 and 4.2.9.

Figure 4.2.7: With regularization
Figure 4.2.8: Without regularization
Figure 4.2.9: Without regularization and with weighting
(each showing $\|x^{(k)} - x^*\|_2$ and $F(x^{(k)}) - F(x^*)$ per iteration)

The convergence is much slower in all three cases compared to figures 4.2.2 and 4.2.3. There is only a little difference in the iteration sequences in figures 4.2.7, 4.2.8 and 4.2.9 in the last iterations, when we are close to the optimizer. The convergence rate is the same for all three cases.

The table below shows the values of $\|x^{(k+1)} - x^*\|_2 \,/\, \|x^{(k)} - x^*\|_2$ for $k = 1, \ldots, 24$, corresponding to the results of figure 4.2.7.

 k   ratio         k   ratio         k   ratio
 1   6.4062e-01    9   2.0771e-01   17   2.0771e-01
 2   3.4325e-01   10   2.0770e-01   18   2.0768e-01
 3   1.6551e-01   11   2.0770e-01   19   2.0758e-01
 4   2.1301e-01   12   2.0769e-01   20   2.0725e-01
 5   2.0869e-01   13   2.0769e-01   21   2.0764e-01
 6   2.0793e-01   14   2.0769e-01   22   1.0000e+00
 7   2.0778e-01   15   2.0769e-01   23   2.0577e-01
 8   2.0774e-01   16   2.0770e-01   24   1.9426e-01

We note that the asymptotic error constant is approximately 0.2 from iteration 4 to 21. The results indicate linear convergence.
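The tabulated ratios are straightforward to compute from a recorded iteration sequence; a small helper sketch (the sequence `xs` in the usage comment is a hypothetical recording of the iterates):

```python
import numpy as np

def convergence_ratios(iterates, x_star):
    """Successive error ratios e_{k+1}/e_k with e_k = ||x^(k) - x*||_2.
    A roughly constant ratio r in (0, 1), here r ~ 0.2, indicates linear
    convergence with asymptotic error constant r."""
    e = [np.linalg.norm(np.asarray(x) - x_star) for x in iterates]
    return [e[k + 1] / e[k] for k in range(len(e) - 1)]

# Hypothetical usage with a recorded iteration sequence `xs`:
# ratios = convergence_ratios(xs, np.array([1.2718, 0.4951]))
```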

In the case of the reduced parameter vector we have $n_p = 5$ unknown parameters in every Parameter Extraction problem. The transformation of the fine model parameters is defined by a non-diagonal matrix $C$, and apparently this creates problems when aligning the surrogate model with the fine model: with a diagonal $A$ the input mapping of the surrogate cannot reproduce the off-diagonal coupling in $C$.

In figure 4.2.10 the iteration points from figure 4.2.8 are seen in the contour plot of $F = \|f(x)\|_2^2$. It is similar to figure 4.2.4, except for the fact that a lot of points are clustered near the optimizer $x^* = [1.2718,\ 0.4951]^T$.

Figure 4.2.10: Sequence of iteration points in the $(x_1, x_2)$-plane

Optimal Mapping Parameters

The Rosenbrock function is special in the sense that the two response functions are qualitatively different. The first response, $c_1(z) = 10(z_2 - z_1^2)$, is a quadratic function and depends on both $z_1$ and $z_2$, whereas the second, $c_2(z) = 1 - z_1$, is linear and only depends on $z_1$. The Jacobian matrix is:

$$c_z(z) = \begin{bmatrix} -20 z_1 & 10 \\ -1 & 0 \end{bmatrix}$$

Since $\partial c_2 / \partial z_2 = 0$, and with reference to equation (3.4.7) in section 3.4, this means that:

$$\frac{\partial s_2(x, p)}{\partial p} = \alpha_2\, c_{2,z}(z) \cdot H = \alpha_2 \begin{bmatrix} -1 & 0 \end{bmatrix} \begin{bmatrix} x_1 & x_2 & 0 & 0 & 1 & 0 \\ 0 & 0 & x_1 & x_2 & 0 & 1 \end{bmatrix} = -\alpha_2 \begin{bmatrix} x_1 & x_2 & 0 & 0 & 1 & 0 \end{bmatrix}$$
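A quick numerical check of this result, assuming the surrogate for response 2 takes the form $s_2(x, p) = \alpha_2\, c_2(Ax + b)$ with $p = (A_{11}, A_{12}, A_{21}, A_{22}, b_1, b_2)$ ordered as in the derivation above:

```python
import numpy as np

def s2(x, p, alpha2=1.0):
    """Surrogate response 2 (sketch): s_2 = alpha_2 * c_2(A x + b),
    with c_2(z) = 1 - z_1 and p = (A11, A12, A21, A22, b1, b2)."""
    A = p[:4].reshape(2, 2)
    b = p[4:]
    z = A @ x + b
    return alpha2 * (1.0 - z[0])

x = np.array([0.5, -0.7])                      # arbitrary test point
p = np.array([1.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # initial mapping: A = I, b = 0
eps = 1e-7
grad = np.array([(s2(x, p + eps * e) - s2(x, p - eps * e)) / (2 * eps)
                 for e in np.eye(6)])
# -> [-0.5  0.7  0.  0. -1.  0.]: the A21, A22 and b2 entries are zero
print(np.round(grad, 6))
```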

We have no information about the mapping parameters $A_{21}$, $A_{22}$ and $b_2$ concerning $z_2$, since all the partial derivatives wrt. these parameters are zero. This influences the Parameter Extraction for response function number 2. The variables are never changed during the residual optimization, and the final values are $A_{21} = 0$, $A_{22} = 1$ and $b_2 = 0$, corresponding to the initial values.

This theory is confirmed when looking at the results from the Space Mapping algorithm. The optimal parameter sets for the regularized case are:

$$A_1 = \begin{bmatrix} 0.9567 & -0.1489 \\ 0.0370 & 0.9930 \end{bmatrix}, \quad b_1 = \begin{bmatrix} -0.0678 \\ 0 \end{bmatrix}, \quad \alpha_1 = 0.9899$$

$$A_2 = \begin{bmatrix} 1.0329 & -0.1878 \\ 0 & 1 \end{bmatrix}, \quad b_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \alpha_2 = 1.0649$$

For the case when $A$ is reduced to a diagonal matrix we get the following optimal mapping parameters with use of the regularization term (figure 4.2.7):

$$A_1 = \begin{bmatrix} 0.7811 & 0 \\ 0 & 1.1721 \end{bmatrix}, \quad b_1 = \begin{bmatrix} 0.1609 \\ 0 \end{bmatrix}, \quad \alpha_1 = 1.1091$$

$$A_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad b_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \alpha_2 = 1$$

The results with no regularization (figure 4.2.8) are not qualitatively different:

$$A_1 = \begin{bmatrix} 0.7923 & 0 \\ 0 & 1.3003 \end{bmatrix}, \quad b_1 = \begin{bmatrix} 0.2548 \\ 0 \end{bmatrix}, \quad \alpha_1 = 0.9997$$

$$A_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad b_2 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \quad \alpha_2 = 1$$

It is noted that the mapping parameters for response function 2 are identical with the initial mapping parameters in both cases. This probably has something to do with the fact that the second response function is linear and only depends on the first variable.

Direct Optimization

We finally present the results from direct optimization of the fine model by the two algorithms 'direct' and 'directd' from the SMIS framework implemented by Frank Pedersen.

Both iteration sequences in figure 4.2.11 converge very slowly, which is also seen from the plots in figure 4.2.12.

From the given initial guess $x^{(0)} = [-1.2,\ 1]^T$ the iterates move through the valley with a large number of small steps towards the optimizer. This behaviour of the iteration sequence is avoided with the Space Mapping algorithm. It is obvious that the Space Mapping algorithm is much more efficient than direct optimization of the fine model.

Figure 4.2.11: Performance of direct optimization of the fine model ('direct' left, 'directd' right)

Figure 4.2.12: Iteration sequences for direct optimization of the fine model ('direct' left, 'directd' right)
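The SMIS routines 'direct' and 'directd' are not reproduced here; as a rough stand-in, a generic least-squares solver applied to the fine model from the same initial guess illustrates the kind of run being compared (solver choice and tolerances are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

# Fine model residuals f(x) = c(Cx + d), as defined in (4.2.1)
C = np.array([[1.1, -0.2], [0.2, 0.9]])
d = np.array([-0.3, 0.3])

def f(x):
    z = C @ x + d
    return np.array([10.0 * (z[1] - z[0] ** 2), 1.0 - z[0]])

# Same initial guess as in the text; res.nfev counts fine model evaluations
res = least_squares(f, np.array([-1.2, 1.0]), xtol=1e-12, ftol=1e-12, gtol=1e-12)
print(np.round(res.x, 4), res.nfev)   # expect x close to [1.2718, 0.4951]
```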

Summary of the Results

• The regularization seems to have a slightly negative effect on the convergence speed.

• The normalization factors have practically no effect on the convergence speed.

• The effect of the weighting factors is not possible to investigate for this problem, because the optimizer is found before the chosen weighting strategy has any influence.

• The reduction of the mapping parameters results in a much slower convergence rate.

• The optimal mapping parameters are practically not influenced by the regularization term.

• Several of the input mapping parameters are not changed from their initial values.