
Continuous Time Stochastic Modelling

CTSM 2.3 Mathematics Guide

Niels Rode Kristensen, Henrik Madsen

December 10, 2003

Technical University of Denmark


1 The mathematics behind the algorithms of CTSM
1.1 Parameter estimation
1.1.1 Model structures
1.1.2 Parameter estimation methods
1.1.3 Filtering methods
1.1.4 Data issues
1.1.5 Optimisation issues
1.1.6 Performance issues
1.2 Other features
1.2.1 Various statistics
1.2.2 Validation data generation
References


1 The mathematics behind the algorithms of CTSM

The following is a complete mathematical outline of the algorithms of CTSM.

1.1 Parameter estimation

The primary feature in CTSM is estimation of parameters in continuous-discrete stochastic state space models on the basis of experimental data.

1.1.1 Model structures

CTSM differentiates between three different model structures for continuous-discrete stochastic state space models as outlined in the following.

1.1.1.1 The nonlinear model

The most general of these model structures is the nonlinear (NL) model, which can be described by the following equations:

$$dx_t = f(x_t, u_t, t, \theta)\,dt + \sigma(u_t, t, \theta)\,d\omega_t \tag{1.1}$$

$$y_k = h(x_k, u_k, t_k, \theta) + e_k \tag{1.2}$$

where $t \in \mathbb{R}$ is time, $x_t \in \mathcal{X} \subset \mathbb{R}^n$ is a vector of state variables, $u_t \in \mathcal{U} \subset \mathbb{R}^m$ is a vector of input variables, $y_k \in \mathcal{Y} \subset \mathbb{R}^l$ is a vector of output variables, $\theta \in \Theta \subset \mathbb{R}^p$ is a vector of parameters, $f(\cdot) \in \mathbb{R}^n$, $\sigma(\cdot) \in \mathbb{R}^{n \times n}$ and $h(\cdot) \in \mathbb{R}^l$ are nonlinear functions, $\{\omega_t\}$ is an $n$-dimensional standard Wiener process and $\{e_k\}$ is an $l$-dimensional white noise process with $e_k \in N(0, S(u_k, t_k, \theta))$.


1.1.1.2 The linear time-varying model

A special case of the nonlinear model is the linear time-varying (LTV) model, which can be described by the following equations:

$$dx_t = (A(x_t, u_t, t, \theta)x_t + B(x_t, u_t, t, \theta)u_t)\,dt + \sigma(u_t, t, \theta)\,d\omega_t \tag{1.3}$$

$$y_k = C(x_k, u_k, t_k, \theta)x_k + D(x_k, u_k, t_k, \theta)u_k + e_k \tag{1.4}$$

where $t \in \mathbb{R}$ is time, $x_t \in \mathcal{X} \subset \mathbb{R}^n$ is a state vector, $u_t \in \mathcal{U} \subset \mathbb{R}^m$ is an input vector, $y_k \in \mathcal{Y} \subset \mathbb{R}^l$ is an output vector, $\theta \in \Theta \subset \mathbb{R}^p$ is a vector of parameters, $A(\cdot) \in \mathbb{R}^{n \times n}$, $B(\cdot) \in \mathbb{R}^{n \times m}$, $\sigma(\cdot) \in \mathbb{R}^{n \times n}$, $C(\cdot) \in \mathbb{R}^{l \times n}$ and $D(\cdot) \in \mathbb{R}^{l \times m}$ are nonlinear functions, $\{\omega_t\}$ is an $n$-dimensional standard Wiener process and $\{e_k\}$ is an $l$-dimensional white noise process with $e_k \in N(0, S(u_k, t_k, \theta))$.

1.1.1.3 The linear time-invariant model

A special case of the linear time-varying model is the linear time-invariant (LTI) model, which can be described by the following equations:

$$dx_t = (A(\theta)x_t + B(\theta)u_t)\,dt + \sigma(\theta)\,d\omega_t \tag{1.5}$$

$$y_k = C(\theta)x_k + D(\theta)u_k + e_k \tag{1.6}$$

where $t \in \mathbb{R}$ is time, $x_t \in \mathcal{X} \subset \mathbb{R}^n$ is a state vector, $u_t \in \mathcal{U} \subset \mathbb{R}^m$ is an input vector, $y_k \in \mathcal{Y} \subset \mathbb{R}^l$ is an output vector, $\theta \in \Theta \subset \mathbb{R}^p$ is a vector of parameters, $A(\cdot) \in \mathbb{R}^{n \times n}$, $B(\cdot) \in \mathbb{R}^{n \times m}$, $\sigma(\cdot) \in \mathbb{R}^{n \times n}$, $C(\cdot) \in \mathbb{R}^{l \times n}$ and $D(\cdot) \in \mathbb{R}^{l \times m}$ are nonlinear functions, $\{\omega_t\}$ is an $n$-dimensional standard Wiener process and $\{e_k\}$ is an $l$-dimensional white noise process with $e_k \in N(0, S(\theta))$.

1.1.2 Parameter estimation methods

CTSM allows a number of different methods to be applied to estimate the parameters of the above model structures as outlined in the following.

1.1.2.1 Maximum likelihood estimation

Given a particular model structure, maximum likelihood (ML) estimation of the unknown parameters can be performed by finding the parameters $\theta$ that maximize the likelihood function of a given sequence of measurements $y_0, y_1, \ldots, y_k, \ldots, y_N$. By introducing the notation:

$$\mathcal{Y}_k = [y_k, y_{k-1}, \ldots, y_1, y_0] \tag{1.7}$$

the likelihood function is the joint probability density:

$$L(\theta; \mathcal{Y}_N) = p(\mathcal{Y}_N | \theta) \tag{1.8}$$


or equivalently:

$$L(\theta; \mathcal{Y}_N) = \left( \prod_{k=1}^{N} p(y_k | \mathcal{Y}_{k-1}, \theta) \right) p(y_0 | \theta) \tag{1.9}$$

where the rule $P(A \cap B) = P(A|B)P(B)$ has been applied to form a product of conditional probability densities. In order to obtain an exact evaluation of the likelihood function, the initial probability density $p(y_0|\theta)$ must be known and all subsequent conditional densities must be determined by successively solving Kolmogorov's forward equation and applying Bayes' rule (Jazwinski, 1970), but this approach is computationally infeasible in practice. However, since the diffusion terms in the above model structures do not depend on the state variables, a simpler alternative can be used. More specifically, a method based on Kalman filtering can be applied for LTI and LTV models, and an approximate method based on extended Kalman filtering can be applied for NL models. The latter approximation can be applied because the stochastic differential equations considered are driven by Wiener processes, and because increments of a Wiener process are Gaussian, which makes it reasonable to assume, under some regularity conditions, that the conditional densities can be well approximated by Gaussian densities. The Gaussian density is completely characterized by its mean and covariance, so by introducing the notation:

$$\hat{y}_{k|k-1} = E\{y_k | \mathcal{Y}_{k-1}, \theta\} \tag{1.10}$$

$$R_{k|k-1} = V\{y_k | \mathcal{Y}_{k-1}, \theta\} \tag{1.11}$$

and:

$$\varepsilon_k = y_k - \hat{y}_{k|k-1} \tag{1.12}$$

the likelihood function can be written as follows:

$$L(\theta; \mathcal{Y}_N) = \left( \prod_{k=1}^{N} \frac{\exp\left(-\frac{1}{2}\varepsilon_k^T R_{k|k-1}^{-1}\varepsilon_k\right)}{\sqrt{\det(R_{k|k-1})}\left(\sqrt{2\pi}\right)^{l}} \right) p(y_0|\theta) \tag{1.13}$$

where, for given parameters and initial states, $\varepsilon_k$ and $R_{k|k-1}$ can be computed by means of a Kalman filter (LTI and LTV models) or an extended Kalman filter (NL models) as shown in Sections 1.1.3.1 and 1.1.3.2 respectively. Further conditioning on $y_0$ and taking the negative logarithm gives:

$$-\ln\left(L(\theta; \mathcal{Y}_N | y_0)\right) = \frac{1}{2}\sum_{k=1}^{N}\left( \ln(\det(R_{k|k-1})) + \varepsilon_k^T R_{k|k-1}^{-1}\varepsilon_k \right) + \frac{1}{2}\left(\sum_{k=1}^{N} l\right)\ln(2\pi) \tag{1.14}$$

and ML estimates of the parameters (and optionally of the initial states) can now be determined by solving the following nonlinear optimisation problem:

$$\hat{\theta} = \arg\min_{\theta \in \Theta}\left\{-\ln\left(L(\theta; \mathcal{Y}_N | y_0)\right)\right\} \tag{1.15}$$
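Evaluating (1.14) for a given $\theta$ is mechanical once a filter has produced the innovations and their covariances. The following sketch shows the computation in Python/NumPy; the function name and data layout are illustrative, not part of CTSM:

```python
import numpy as np

def negative_log_likelihood(innovations, covariances):
    """Evaluate eq. (1.14): 0.5 * sum_k (ln det R_k|k-1
    + eps_k' R_k|k-1^-1 eps_k) + 0.5 * (sum_k l) * ln(2*pi)."""
    nll = 0.0
    n_obs = 0
    for eps, R in zip(innovations, covariances):
        _, logdet = np.linalg.slogdet(R)
        nll += 0.5 * (logdet + eps @ np.linalg.solve(R, eps))
        n_obs += eps.size  # each term contributes l observations
    return nll + 0.5 * n_obs * np.log(2.0 * np.pi)
```

An optimiser would then minimise this value over $\theta$, rerunning the filter at each iterate to obtain fresh innovations.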


1.1.2.2 Maximum a posteriori estimation

If prior information about the parameters is available in the form of a prior probability density function $p(\theta)$, Bayes' rule can be applied to give an improved estimate by forming the posterior probability density function:

$$p(\theta | \mathcal{Y}_N) = \frac{p(\mathcal{Y}_N | \theta)\, p(\theta)}{p(\mathcal{Y}_N)} \propto p(\mathcal{Y}_N | \theta)\, p(\theta) \tag{1.16}$$

and subsequently finding the parameters that maximize this function, i.e. by performing maximum a posteriori (MAP) estimation. A nice feature of this expression is the fact that it reduces to the likelihood function when no prior information is available ($p(\theta)$ uniform), making ML estimation a special case of MAP estimation. In fact, this formulation also allows MAP estimation on a subset of the parameters ($p(\theta)$ partly uniform). By introducing the notation¹:

$$\mu_\theta = E\{\theta\} \tag{1.17}$$

$$\Sigma_\theta = V\{\theta\} \tag{1.18}$$

and:

$$\varepsilon_\theta = \theta - \mu_\theta \tag{1.19}$$

and by assuming that the prior probability density of the parameters is Gaussian, the posterior probability density function can be written as follows:

$$p(\theta | \mathcal{Y}_N) \propto \left( \prod_{k=1}^{N} \frac{\exp\left(-\frac{1}{2}\varepsilon_k^T R_{k|k-1}^{-1}\varepsilon_k\right)}{\sqrt{\det(R_{k|k-1})}\left(\sqrt{2\pi}\right)^{l}} \right) p(y_0|\theta) \times \frac{\exp\left(-\frac{1}{2}\varepsilon_\theta^T \Sigma_\theta^{-1}\varepsilon_\theta\right)}{\sqrt{\det(\Sigma_\theta)}\left(\sqrt{2\pi}\right)^{p}} \tag{1.20}$$

Further conditioning on $y_0$ and taking the negative logarithm gives:

$$-\ln\left(p(\theta | \mathcal{Y}_N, y_0)\right) = \frac{1}{2}\sum_{k=1}^{N}\left( \ln(\det(R_{k|k-1})) + \varepsilon_k^T R_{k|k-1}^{-1}\varepsilon_k \right) + \frac{1}{2}\left(\left(\sum_{k=1}^{N} l\right) + p\right)\ln(2\pi) + \frac{1}{2}\ln(\det(\Sigma_\theta)) + \frac{1}{2}\varepsilon_\theta^T \Sigma_\theta^{-1}\varepsilon_\theta \tag{1.21}$$

and MAP estimates of the parameters (and optionally of the initial states) can now be determined by solving the following nonlinear optimisation problem:

$$\hat{\theta} = \arg\min_{\theta \in \Theta}\left\{-\ln\left(p(\theta | \mathcal{Y}_N, y_0)\right)\right\} \tag{1.22}$$

¹In practice $\Sigma_\theta$ is specified as $\Sigma_\theta = \sigma_\theta R \sigma_\theta$, where $\sigma_\theta$ is a diagonal matrix of the prior standard deviations and $R$ is the corresponding prior correlation matrix.
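The MAP objective (1.21) is the ML objective plus a Gaussian prior penalty. A minimal sketch of that penalty term, with `nll` being a negative log-likelihood value computed via (1.14); the names are illustrative, not part of CTSM:

```python
import numpy as np

def map_objective(nll, theta, mu_theta, sigma_theta):
    """Add the prior terms of eq. (1.21) to a negative log-likelihood
    value: 0.5*(p*ln(2*pi) + ln det Sigma + eps' Sigma^-1 eps)."""
    eps = theta - mu_theta
    _, logdet = np.linalg.slogdet(sigma_theta)
    p = theta.size
    penalty = 0.5 * (p * np.log(2.0 * np.pi) + logdet
                     + eps @ np.linalg.solve(sigma_theta, eps))
    return nll + penalty
```

With a uniform prior the penalty is simply omitted, mirroring the remark above that ML estimation is a special case of MAP estimation.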


1.1.2.3 Using multiple independent data sets

If, instead of a single sequence of measurements, multiple consecutive, but yet separate, sequences of measurements, i.e. $\mathcal{Y}_{N_1}^1, \mathcal{Y}_{N_2}^2, \ldots, \mathcal{Y}_{N_i}^i, \ldots, \mathcal{Y}_{N_S}^S$, are available, a similar estimation method can be applied by expanding the expression for the posterior probability density function to the general form:

$$p(\theta | \mathcal{Y}) \propto \left( \prod_{i=1}^{S} \prod_{k=1}^{N_i} \frac{\exp\left(-\frac{1}{2}(\varepsilon_k^i)^T (R_{k|k-1}^i)^{-1}\varepsilon_k^i\right)}{\sqrt{\det(R_{k|k-1}^i)}\left(\sqrt{2\pi}\right)^{l}}\; p(y_0^i|\theta) \right) \times \frac{\exp\left(-\frac{1}{2}\varepsilon_\theta^T \Sigma_\theta^{-1}\varepsilon_\theta\right)}{\sqrt{\det(\Sigma_\theta)}\left(\sqrt{2\pi}\right)^{p}} \tag{1.23}$$

where:

$$\mathcal{Y} = [\mathcal{Y}_{N_1}^1, \mathcal{Y}_{N_2}^2, \ldots, \mathcal{Y}_{N_i}^i, \ldots, \mathcal{Y}_{N_S}^S] \tag{1.24}$$

and where the individual sequences of measurements are assumed to be stochastically independent. This formulation allows MAP estimation on multiple data sets, but, as special cases, it also allows ML estimation on multiple data sets ($p(\theta)$ uniform), MAP estimation on a single data set ($S = 1$) and ML estimation on a single data set ($p(\theta)$ uniform, $S = 1$). Further conditioning on:

$$y_0 = [y_0^1, y_0^2, \ldots, y_0^i, \ldots, y_0^S] \tag{1.25}$$

and taking the negative logarithm gives:

$$-\ln\left(p(\theta | \mathcal{Y}, y_0)\right) = \frac{1}{2}\sum_{i=1}^{S}\sum_{k=1}^{N_i}\left( \ln(\det(R_{k|k-1}^i)) + (\varepsilon_k^i)^T (R_{k|k-1}^i)^{-1}\varepsilon_k^i \right) + \frac{1}{2}\left(\left(\sum_{i=1}^{S}\sum_{k=1}^{N_i} l\right) + p\right)\ln(2\pi) + \frac{1}{2}\ln(\det(\Sigma_\theta)) + \frac{1}{2}\varepsilon_\theta^T \Sigma_\theta^{-1}\varepsilon_\theta \tag{1.26}$$

and estimates of the parameters (and optionally of the initial states) can now be determined by solving the following nonlinear optimisation problem:

$$\hat{\theta} = \arg\min_{\theta \in \Theta}\left\{-\ln\left(p(\theta | \mathcal{Y}, y_0)\right)\right\} \tag{1.27}$$

1.1.3 Filtering methods

CTSM computes the innovation vectors $\varepsilon_k$ (or $\varepsilon_k^i$) and their covariance matrices $R_{k|k-1}$ (or $R_{k|k-1}^i$) recursively by means of a Kalman filter (LTI and LTV models) or an extended Kalman filter (NL models) as outlined in the following.


1.1.3.1 Kalman filtering

For LTI and LTV models $\varepsilon_k$ (or $\varepsilon_k^i$) and $R_{k|k-1}$ (or $R_{k|k-1}^i$) can be computed for a given set of parameters $\theta$ and initial states $x_0$ by means of a continuous-discrete Kalman filter, i.e. by means of the output prediction equations:

$$\hat{y}_{k|k-1} = C\hat{x}_{k|k-1} + Du_k \tag{1.28}$$

$$R_{k|k-1} = CP_{k|k-1}C^T + S \tag{1.29}$$

the innovation equation:

$$\varepsilon_k = y_k - \hat{y}_{k|k-1} \tag{1.30}$$

the Kalman gain equation:

$$K_k = P_{k|k-1}C^T R_{k|k-1}^{-1} \tag{1.31}$$

the updating equations:

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\varepsilon_k \tag{1.32}$$

$$P_{k|k} = P_{k|k-1} - K_k R_{k|k-1} K_k^T \tag{1.33}$$

and the state prediction equations:

$$\frac{d\hat{x}_{t|k}}{dt} = A\hat{x}_{t|k} + Bu_t\,,\quad t \in [t_k, t_{k+1}[ \tag{1.34}$$

$$\frac{dP_{t|k}}{dt} = AP_{t|k} + P_{t|k}A^T + \sigma\sigma^T\,,\quad t \in [t_k, t_{k+1}[ \tag{1.35}$$

where the following shorthand notation applies in the LTV case:

$$A = A(\hat{x}_{t|k-1}, u_t, t, \theta)\,,\; B = B(\hat{x}_{t|k-1}, u_t, t, \theta)\,,\; C = C(\hat{x}_{k|k-1}, u_k, t_k, \theta)\,,\; D = D(\hat{x}_{k|k-1}, u_k, t_k, \theta)\,,\; \sigma = \sigma(u_t, t, \theta)\,,\; S = S(u_k, t_k, \theta) \tag{1.36}$$

and the following shorthand notation applies in the LTI case:

$$A = A(\theta)\,,\; B = B(\theta)\,,\; C = C(\theta)\,,\; D = D(\theta)\,,\; \sigma = \sigma(\theta)\,,\; S = S(\theta) \tag{1.37}$$
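One sweep through (1.28)-(1.33) is a handful of matrix operations. A sketch for a single measurement update (illustrative names; `x_pred` and `P_pred` would come from the state prediction equations or their discrete-time counterparts):

```python
import numpy as np

def kalman_update(x_pred, P_pred, y, u, C, D, S):
    """Eqs. (1.28)-(1.33): output prediction, innovation,
    Kalman gain and updating for one sampling instant."""
    y_pred = C @ x_pred + D @ u                  # (1.28)
    R = C @ P_pred @ C.T + S                     # (1.29)
    eps = y - y_pred                             # (1.30)
    K = P_pred @ C.T @ np.linalg.inv(R)          # (1.31)
    x_filt = x_pred + K @ eps                    # (1.32)
    P_filt = P_pred - K @ R @ K.T                # (1.33)
    return eps, R, x_filt, P_filt
```

The pairs $(\varepsilon_k, R_{k|k-1})$ collected over the data record are exactly what the likelihood evaluation in (1.13)-(1.14) consumes.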

Initial conditions for the Kalman filter are $\hat{x}_{t|t_0} = x_0$ and $P_{t|t_0} = P_0$, which may either be pre-specified or estimated along with the parameters as a part of the overall problem (see Section 1.1.3.4). In the LTI case, and in the LTV case if $A$, $B$, $C$, $D$, $\sigma$ and $S$ are assumed constant between samples², (1.34) and (1.35) can be replaced by their discrete time counterparts, which can be derived from the solution to the stochastic differential equation:

$$dx_t = (Ax_t + Bu_t)\,dt + \sigma\,d\omega_t\,,\quad t \in [t_k, t_{k+1}[ \tag{1.38}$$

i.e. from:

$$x_{t_{k+1}} = e^{A(t_{k+1}-t_k)}x_{t_k} + \int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-s)}Bu_s\,ds + \int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-s)}\sigma\,d\omega_s \tag{1.39}$$

which yields:

$$\hat{x}_{k+1|k} = E\{x_{t_{k+1}}|x_{t_k}\} = e^{A(t_{k+1}-t_k)}\hat{x}_{k|k} + \int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-s)}Bu_s\,ds \tag{1.40}$$

$$P_{k+1|k} = E\{x_{t_{k+1}}x_{t_{k+1}}^T|x_{t_k}\} = e^{A(t_{k+1}-t_k)}P_{k|k}\left(e^{A(t_{k+1}-t_k)}\right)^T + \int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-s)}\sigma\sigma^T\left(e^{A(t_{k+1}-s)}\right)^T ds \tag{1.41}$$

²In practice the time interval $t \in [t_k, t_{k+1}[$ is subsampled for LTV models, and $A$, $B$, $C$, $D$, $\sigma$ and $S$ are evaluated at each subsampling instant to provide a better approximation.

where the following shorthand notation applies in the LTV case:

A=A(ˆxk|k−1,uk, tk,θ) ,B=B(ˆxk|k−1,uk, tk,θ) C=C(ˆxk|k−1,uk, tk,θ) ,D=D(ˆxk|k−1,uk, tk,θ)

σ=σ(uk, tk,θ) ,S =S(uk, tk,θ)

(1.42)

and the following shorthand notation applies in the LTI case:

A=A(θ) ,B=B(θ) C =C(θ) ,D=D(θ) σ=σ(θ) ,S =S(θ)

(1.43)

In order to be able to use (1.40) and (1.41), the integrals of both equations must be computed. For this purpose the equations are rewritten to:

$$\begin{aligned} \hat{x}_{k+1|k} &= e^{A(t_{k+1}-t_k)}\hat{x}_{k|k} + \int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-s)}Bu_s\,ds \\ &= e^{A\tau_s}\hat{x}_{k|k} + \int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-s)}B(\alpha(s-t_k) + u_k)\,ds \\ &= \Phi_s\hat{x}_{k|k} + \int_{0}^{\tau_s} e^{As}B(\alpha(\tau_s - s) + u_k)\,ds \\ &= \Phi_s\hat{x}_{k|k} - \int_{0}^{\tau_s} e^{As}s\,ds\,B\alpha + \int_{0}^{\tau_s} e^{As}\,ds\,B(\alpha\tau_s + u_k) \end{aligned} \tag{1.44}$$


and:

$$\begin{aligned} P_{k+1|k} &= e^{A(t_{k+1}-t_k)}P_{k|k}\left(e^{A(t_{k+1}-t_k)}\right)^T + \int_{t_k}^{t_{k+1}} e^{A(t_{k+1}-s)}\sigma\sigma^T\left(e^{A(t_{k+1}-s)}\right)^T ds \\ &= e^{A\tau_s}P_{k|k}\left(e^{A\tau_s}\right)^T + \int_{0}^{\tau_s} e^{As}\sigma\sigma^T\left(e^{As}\right)^T ds \\ &= \Phi_s P_{k|k}\Phi_s^T + \int_{0}^{\tau_s} e^{As}\sigma\sigma^T\left(e^{As}\right)^T ds \end{aligned} \tag{1.45}$$

where $\tau_s = t_{k+1} - t_k$ and $\Phi_s = e^{A\tau_s}$, and where:

$$\alpha = \frac{u_{k+1} - u_k}{t_{k+1} - t_k} \tag{1.46}$$

has been introduced to allow assumption of either zero order hold ($\alpha = 0$) or first order hold ($\alpha \neq 0$) on the inputs between sampling instants. The matrix exponential $\Phi_s = e^{A\tau_s}$ can be computed by means of a Padé approximation with repeated scaling and squaring (Moler and van Loan, 1978). However, both $\Phi_s$ and the integral in (1.45) can be computed simultaneously through:

$$\exp\left(\begin{bmatrix} -A & \sigma\sigma^T \\ 0 & A^T \end{bmatrix}\tau_s\right) = \begin{bmatrix} H_1(\tau_s) & H_2(\tau_s) \\ 0 & H_3(\tau_s) \end{bmatrix} \tag{1.47}$$

by combining submatrices of the result³ (van Loan, 1978), i.e.:

$$\Phi_s = H_3^T(\tau_s) \tag{1.48}$$

and:

$$\int_{0}^{\tau_s} e^{As}\sigma\sigma^T\left(e^{As}\right)^T ds = H_3^T(\tau_s)H_2(\tau_s) \tag{1.49}$$

Alternatively, this integral can be computed from the Lyapunov equation:

$$\Phi_s\sigma\sigma^T\Phi_s^T - \sigma\sigma^T = A\int_{0}^{\tau_s} e^{As}\sigma\sigma^T\left(e^{As}\right)^T ds + \int_{0}^{\tau_s} e^{As}\sigma\sigma^T\left(e^{As}\right)^T ds\,A^T \tag{1.50}$$

but this approach has been found to be less feasible. The integrals in (1.44) are not as easy to deal with, especially if $A$ is singular. However, this problem can be solved by introducing the singular value decomposition (SVD) of $A$, i.e. $A = U\Sigma V^T$, transforming the integrals and subsequently computing these.
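The construction (1.47)-(1.49) is easy to exercise numerically. The sketch below forms the augmented matrix and exponentiates it with plain scaling and squaring of a truncated Taylor series, a simple stand-in for the Padé implementation cited above; all names are illustrative:

```python
import numpy as np

def expm(M, squarings=12, terms=18):
    """Matrix exponential by scaling and squaring of a truncated
    Taylor series (a crude stand-in for Pade approximation)."""
    M = M / (2.0 ** squarings)
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

def phi_and_integral(A, sigma, tau):
    """Eq. (1.47): exponentiate the augmented matrix, then extract
    Phi_s = H3(tau)' (1.48) and the integral H3(tau)' H2(tau) (1.49)."""
    n = A.shape[0]
    Q = sigma @ sigma.T
    M = np.block([[-A, Q], [np.zeros((n, n)), A.T]]) * tau
    H = expm(M)
    H2 = H[:n, n:]
    H3 = H[n:, n:]
    return H3.T, H3.T @ H2
```

For a scalar test system with $A = -1$ and $\sigma = 1$, $\Phi_s$ should equal $e^{-\tau_s}$ and the integral $(1 - e^{-2\tau_s})/2$, which the sketch reproduces.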

³Within CTSM the specific implementation is based on the algorithms of Sidje (1998).


The first integral can be transformed as follows:

$$\int_{0}^{\tau_s} e^{As}s\,ds = U\int_{0}^{\tau_s} U^T e^{As}U s\,ds\,U^T = U\int_{0}^{\tau_s} e^{\tilde{A}s}s\,ds\,U^T \tag{1.51}$$

and, if $A$ is singular, the matrix $\tilde{A} = \Sigma V^T U = U^T A U$ has a special structure:

$$\tilde{A} = \begin{bmatrix} \tilde{A}_1 & \tilde{A}_2 \\ 0 & 0 \end{bmatrix} \tag{1.52}$$

which allows the integral to be computed as follows:

$$\begin{aligned} \int_{0}^{\tau_s} e^{\tilde{A}s}s\,ds &= \int_{0}^{\tau_s}\left( Is + \begin{bmatrix} \tilde{A}_1 & \tilde{A}_2 \\ 0 & 0 \end{bmatrix}s^2 + \begin{bmatrix} \tilde{A}_1^2 & \tilde{A}_1\tilde{A}_2 \\ 0 & 0 \end{bmatrix}\frac{s^3}{2} + \cdots \right) ds \\ &= \begin{bmatrix} \int_{0}^{\tau_s} e^{\tilde{A}_1 s}s\,ds & \int_{0}^{\tau_s} \tilde{A}_1^{-1}\left(e^{\tilde{A}_1 s} - I\right)s\,\tilde{A}_2\,ds \\ 0 & I\frac{\tau_s^2}{2} \end{bmatrix} \\ &= \begin{bmatrix} \tilde{A}_1^{-1}\left(-\tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) + \tilde{\Phi}_{1s}\tau_s\right) & \tilde{A}_1^{-1}\left(\tilde{A}_1^{-1}\left(-\tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) + \tilde{\Phi}_{1s}\tau_s\right) - I\frac{\tau_s^2}{2}\right)\tilde{A}_2 \\ 0 & I\frac{\tau_s^2}{2} \end{bmatrix} \end{aligned} \tag{1.53}$$

where $\tilde{\Phi}_{1s}$ is the upper left part of the matrix:

$$\tilde{\Phi}_s = U^T\Phi_s U = \begin{bmatrix} \tilde{\Phi}_{1s} & \tilde{\Phi}_{2s} \\ 0 & I \end{bmatrix} \tag{1.54}$$

The second integral can be transformed as follows:

$$\int_{0}^{\tau_s} e^{As}\,ds = U\int_{0}^{\tau_s} U^T e^{As}U\,ds\,U^T = U\int_{0}^{\tau_s} e^{\tilde{A}s}\,ds\,U^T \tag{1.55}$$


and can subsequently be computed as follows:

$$\begin{aligned} \int_{0}^{\tau_s} e^{\tilde{A}s}\,ds &= \int_{0}^{\tau_s}\left( I + \begin{bmatrix} \tilde{A}_1 & \tilde{A}_2 \\ 0 & 0 \end{bmatrix}s + \begin{bmatrix} \tilde{A}_1^2 & \tilde{A}_1\tilde{A}_2 \\ 0 & 0 \end{bmatrix}\frac{s^2}{2} + \cdots \right) ds \\ &= \begin{bmatrix} \int_{0}^{\tau_s} e^{\tilde{A}_1 s}\,ds & \int_{0}^{\tau_s} \tilde{A}_1^{-1}\left(e^{\tilde{A}_1 s} - I\right)\tilde{A}_2\,ds \\ 0 & I\tau_s \end{bmatrix} \\ &= \begin{bmatrix} \tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) & \tilde{A}_1^{-1}\left(\tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) - I\tau_s\right)\tilde{A}_2 \\ 0 & I\tau_s \end{bmatrix} \end{aligned} \tag{1.56}$$
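The block-triangular structure (1.52) that drives these computations can be checked numerically: for a rank-deficient $A$, the rows of $\tilde{A} = U^T A U$ below the rank vanish. A small sketch with an illustrative 2x2 drift matrix:

```python
import numpy as np

# An illustrative singular drift matrix of rank 1.
A = np.array([[-1.0, 0.0],
              [ 1.0, 0.0]])

U, s, Vt = np.linalg.svd(A)  # singular values come out in descending order
rank = int(np.sum(s > 1e-12))

# A_tilde = Sigma V' U = U' A U; its bottom (n - rank) rows vanish,
# giving the block-triangular form of eq. (1.52).
A_tilde = U.T @ A @ U
```

Because NumPy returns the singular values in descending order, the zero rows always land at the bottom, exactly as the partitioning into $\tilde{A}_1$ and $\tilde{A}_2$ assumes.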

Depending on the specific singularity of A (see Section 1.1.3.3 for details on how this is determined in CTSM) and the particular nature of the inputs, several different cases are possible as shown in the following.

General case: Singular A, first order hold on inputs

In the general case, the Kalman filter prediction can be calculated as follows:

$$\hat{x}_{j+1} = \Phi_s\hat{x}_j - U\int_{0}^{\tau_s} e^{\tilde{A}s}s\,ds\,U^T B\alpha + U\int_{0}^{\tau_s} e^{\tilde{A}s}\,ds\,U^T B(\alpha\tau_s + u_j) \tag{1.57}$$

with:

$$\int_{0}^{\tau_s} e^{\tilde{A}s}\,ds = \begin{bmatrix} \tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) & \tilde{A}_1^{-1}\left(\tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) - I\tau_s\right)\tilde{A}_2 \\ 0 & I\tau_s \end{bmatrix} \tag{1.58}$$

and:

$$\int_{0}^{\tau_s} e^{\tilde{A}s}s\,ds = \begin{bmatrix} \tilde{A}_1^{-1}\left(-\tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) + \tilde{\Phi}_{1s}\tau_s\right) & \tilde{A}_1^{-1}\left(\tilde{A}_1^{-1}\left(-\tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) + \tilde{\Phi}_{1s}\tau_s\right) - I\frac{\tau_s^2}{2}\right)\tilde{A}_2 \\ 0 & I\frac{\tau_s^2}{2} \end{bmatrix} \tag{1.59}$$

Special case no. 1: Singular A, zero order hold on inputs

The Kalman filter prediction for this special case can be calculated as follows:

$$\hat{x}_{j+1} = \Phi_s\hat{x}_j + U\int_{0}^{\tau_s} e^{\tilde{A}s}\,ds\,U^T Bu_j \tag{1.60}$$

with:

$$\int_{0}^{\tau_s} e^{\tilde{A}s}\,ds = \begin{bmatrix} \tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) & \tilde{A}_1^{-1}\left(\tilde{A}_1^{-1}\left(\tilde{\Phi}_{1s} - I\right) - I\tau_s\right)\tilde{A}_2 \\ 0 & I\tau_s \end{bmatrix} \tag{1.61}$$

Special case no. 2: Nonsingular A, first order hold on inputs

The Kalman filter prediction for this special case can be calculated as follows:

$$\hat{x}_{j+1} = \Phi_s\hat{x}_j - \int_{0}^{\tau_s} e^{As}s\,ds\,B\alpha + \int_{0}^{\tau_s} e^{As}\,ds\,B(\alpha\tau_s + u_j) \tag{1.62}$$

with:

$$\int_{0}^{\tau_s} e^{As}\,ds = A^{-1}(\Phi_s - I) \tag{1.63}$$

and:

$$\int_{0}^{\tau_s} e^{As}s\,ds = A^{-1}\left(-A^{-1}(\Phi_s - I) + \Phi_s\tau_s\right) \tag{1.64}$$

Special case no. 3: Nonsingular A, zero order hold on inputs

The Kalman filter prediction for this special case can be calculated as follows:

$$\hat{x}_{j+1} = \Phi_s\hat{x}_j + \int_{0}^{\tau_s} e^{As}\,ds\,Bu_j \tag{1.65}$$

with:

$$\int_{0}^{\tau_s} e^{As}\,ds = A^{-1}(\Phi_s - I) \tag{1.66}$$
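Special case no. 3 covers the common situation of a nonsingular drift and piecewise-constant inputs. A sketch of (1.65)-(1.66), using plain scaling and squaring as a stand-in for the Padé-based matrix exponential; the names are illustrative:

```python
import numpy as np

def expm(M, squarings=12, terms=18):
    """Matrix exponential by scaling and squaring of a truncated
    Taylor series (a crude stand-in for Pade approximation)."""
    M = M / (2.0 ** squarings)
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

def predict_zoh(x, A, B, u, tau):
    """Eqs. (1.65)-(1.66): x_{j+1} = Phi_s x_j + A^-1 (Phi_s - I) B u_j
    for nonsingular A under zero order hold."""
    Phi = expm(A * tau)
    n = A.shape[0]
    return Phi @ x + np.linalg.solve(A, (Phi - np.eye(n)) @ B @ u)
```

For the scalar system $dx/dt = -x + u$ with $u \equiv 1$ and $x_0 = 0$ the exact sample is $1 - e^{-\tau_s}$, which the prediction reproduces.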

Special case no. 4: Identically zero A, first order hold on inputs

The Kalman filter prediction for this special case can be calculated as follows:

$$\hat{x}_{j+1} = \hat{x}_j - \int_{0}^{\tau_s} e^{As}s\,ds\,B\alpha + \int_{0}^{\tau_s} e^{As}\,ds\,B(\alpha\tau_s + u_j) \tag{1.67}$$

with:

$$\int_{0}^{\tau_s} e^{As}\,ds = I\tau_s \tag{1.68}$$

and:

$$\int_{0}^{\tau_s} e^{As}s\,ds = I\frac{\tau_s^2}{2} \tag{1.69}$$


Special case no. 5: Identically zero A, zero order hold on inputs

The Kalman filter prediction for this special case can be calculated as follows:

$$\hat{x}_{j+1} = \hat{x}_j + \int_{0}^{\tau_s} e^{As}\,ds\,Bu_j \tag{1.70}$$

with:

$$\int_{0}^{\tau_s} e^{As}\,ds = I\tau_s \tag{1.71}$$

1.1.3.2 Extended Kalman filtering

For NL models $\varepsilon_k$ (or $\varepsilon_k^i$) and $R_{k|k-1}$ (or $R_{k|k-1}^i$) can be computed for a given set of parameters $\theta$ and initial states $x_0$ by means of a continuous-discrete extended Kalman filter, i.e. by means of the output prediction equations:

$$\hat{y}_{k|k-1} = h(\hat{x}_{k|k-1}, u_k, t_k, \theta) \tag{1.72}$$

$$R_{k|k-1} = CP_{k|k-1}C^T + S \tag{1.73}$$

the innovation equation:

$$\varepsilon_k = y_k - \hat{y}_{k|k-1} \tag{1.74}$$

the Kalman gain equation:

$$K_k = P_{k|k-1}C^T R_{k|k-1}^{-1} \tag{1.75}$$

the updating equations:

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\varepsilon_k \tag{1.76}$$

$$P_{k|k} = P_{k|k-1} - K_k R_{k|k-1} K_k^T \tag{1.77}$$

and the state prediction equations:

$$\frac{d\hat{x}_{t|k}}{dt} = f(\hat{x}_{t|k}, u_t, t, \theta)\,,\quad t \in [t_k, t_{k+1}[ \tag{1.78}$$

$$\frac{dP_{t|k}}{dt} = AP_{t|k} + P_{t|k}A^T + \sigma\sigma^T\,,\quad t \in [t_k, t_{k+1}[ \tag{1.79}$$

where the following shorthand notation has been applied⁴:

$$A = \left.\frac{\partial f}{\partial x_t}\right|_{x=\hat{x}_{k|k-1}, u=u_k, t=t_k}\,,\; C = \left.\frac{\partial h}{\partial x_t}\right|_{x=\hat{x}_{k|k-1}, u=u_k, t=t_k}\,,\; \sigma = \sigma(u_k, t_k, \theta)\,,\; S = S(u_k, t_k, \theta) \tag{1.80}$$

⁴Within CTSM the code needed to evaluate the Jacobians is generated through analytical manipulation using a method based on the algorithms of Speelpenning (1980).
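The recursion (1.72)-(1.79) can be illustrated for a scalar state. In this sketch the prediction ODEs are integrated with naive Euler subsampling, a crude stand-in for the linearization-based scheme developed below; all names are illustrative:

```python
import numpy as np

def ekf_step(x, P, y, f, h, dfdx, dhdx, sigma, S, dt, n_sub=100):
    """One continuous-discrete EKF cycle for a scalar state:
    Euler-subsampled prediction (1.78)-(1.79), then the
    update equations (1.72)-(1.77)."""
    hstep = dt / n_sub
    for _ in range(n_sub):                 # prediction over [t_k, t_k+1[
        A = dfdx(x)
        P = P + hstep * (A * P + P * A + sigma ** 2)   # (1.79)
        x = x + hstep * f(x)                           # (1.78)
    C = dhdx(x)
    y_pred = h(x)                          # (1.72)
    R = C * P * C + S                      # (1.73)
    eps = y - y_pred                       # (1.74)
    K = P * C / R                          # (1.75)
    x = x + K * eps                        # (1.76)
    P = P - K * R * K                      # (1.77)
    return eps, R, x, P
```

For a linear test model the result coincides with the ordinary Kalman filter, which is one way to sanity-check an implementation.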


Initial conditions for the extended Kalman filter are $\hat{x}_{t|t_0} = x_0$ and $P_{t|t_0} = P_0$, which may either be pre-specified or estimated along with the parameters as a part of the overall problem (see Section 1.1.3.4). Being a linear filter, the extended Kalman filter is sensitive to nonlinear effects, and the approximate solution obtained by solving (1.78) and (1.79) may be too crude (Jazwinski, 1970). Moreover, the assumption of Gaussian conditional densities is only likely to hold for small sample times. To provide a better approximation, the time interval $[t_k, t_{k+1}[$ is therefore subsampled, i.e. $[t_k, \ldots, t_j, \ldots, t_{k+1}[$, and the equations are linearized at each subsampling instant. This also means that direct numerical solution of (1.78) and (1.79) can be avoided by applying the analytical solutions to the corresponding linearized propagation equations:

$$\frac{d\hat{x}_{t|j}}{dt} = f(\hat{x}_{j|j-1}, u_j, t_j, \theta) + A(\hat{x}_t - \hat{x}_j) + B(u_t - u_j)\,,\quad t \in [t_j, t_{j+1}[ \tag{1.81}$$

$$\frac{dP_{t|j}}{dt} = AP_{t|j} + P_{t|j}A^T + \sigma\sigma^T\,,\quad t \in [t_j, t_{j+1}[ \tag{1.82}$$

where the following shorthand notation has been applied⁵:

$$A = \left.\frac{\partial f}{\partial x_t}\right|_{x=\hat{x}_{j|j-1}, u=u_j, t=t_j}\,,\; B = \left.\frac{\partial f}{\partial u_t}\right|_{x=\hat{x}_{j|j-1}, u=u_j, t=t_j}\,,\; \sigma = \sigma(u_j, t_j, \theta)\,,\; S = S(u_j, t_j, \theta) \tag{1.83}$$

The solution to (1.82) is equivalent to the solution to (1.35), i.e.:

$$P_{j+1|j} = \Phi_s P_{j|j}\Phi_s^T + \int_{0}^{\tau_s} e^{As}\sigma\sigma^T\left(e^{As}\right)^T ds \tag{1.84}$$

where $\tau_s = t_{j+1} - t_j$ and $\Phi_s = e^{A\tau_s}$. The solution to (1.81) is not as easy to find, especially if $A$ is singular. Nevertheless, by simplifying the notation, i.e.:

$$\frac{d\hat{x}_t}{dt} = f + A(\hat{x}_t - \hat{x}_j) + B(u_t - u_j)\,,\quad t \in [t_j, t_{j+1}[ \tag{1.85}$$

and introducing:

$$\alpha = \frac{u_{j+1} - u_j}{t_{j+1} - t_j} \tag{1.86}$$

to allow assumption of either zero order hold ($\alpha = 0$) or first order hold ($\alpha \neq 0$) on the inputs between sampling instants, i.e.:

$$\frac{d\hat{x}_t}{dt} = f + A(\hat{x}_t - \hat{x}_j) + B\alpha(t - t_j)\,,\quad t \in [t_j, t_{j+1}[ \tag{1.87}$$

⁵Within CTSM the code needed to evaluate the Jacobians is generated through analytical manipulation using a method based on the algorithms of Speelpenning (1980).


and by introducing the singular value decomposition (SVD) of $A$, i.e. $A = U\Sigma V^T$, a solvable equation can be obtained as follows:

$$\begin{aligned} \frac{d\hat{x}_t}{dt} &= f + U\Sigma V^T(\hat{x}_t - \hat{x}_j) + B\alpha(t - t_j) \\ U^T\frac{d\hat{x}_t}{dt} &= U^T f + U^T U\Sigma V^T U\,U^T(\hat{x}_t - \hat{x}_j) + U^T B\alpha(t - t_j) \\ \frac{dz_t}{dt} &= U^T f + \Sigma V^T U(z_t - z_j) + U^T B\alpha(t - t_j) \\ \frac{dz_t}{dt} &= \tilde{f} + \tilde{A}(z_t - z_j) + \tilde{B}\alpha(t - t_j)\,,\quad t \in [t_j, t_{j+1}[ \end{aligned} \tag{1.88}$$

where the transformation $z_t = U^T\hat{x}_t$ has been introduced along with the vector $\tilde{f} = U^T f$ and the matrices $\tilde{A} = \Sigma V^T U = U^T A U$ and $\tilde{B} = U^T B$. Now, if $A$ is singular, the matrix $\tilde{A}$ has a special structure:

$$\tilde{A} = \begin{bmatrix} \tilde{A}_1 & \tilde{A}_2 \\ 0 & 0 \end{bmatrix} \tag{1.89}$$

which makes it possible to split up the previous result in two distinct equations:

$$\begin{aligned} \frac{dz_t^1}{dt} &= \tilde{f}_1 + \tilde{A}_1(z_t^1 - z_j^1) + \tilde{A}_2(z_t^2 - z_j^2) + \tilde{B}_1\alpha(t - t_j)\,,\quad t \in [t_j, t_{j+1}[ \\ \frac{dz_t^2}{dt} &= \tilde{f}_2 + \tilde{B}_2\alpha(t - t_j)\,,\quad t \in [t_j, t_{j+1}[ \end{aligned} \tag{1.90}$$

which can then be solved one at a time for the transformed variables. Solving the equation for $z_t^2$, with the initial condition $z_{t=t_j}^2 = z_j^2$, yields:

$$z_t^2 = z_j^2 + \tilde{f}_2(t - t_j) + \frac{1}{2}\tilde{B}_2\alpha(t - t_j)^2\,,\quad t \in [t_j, t_{j+1}[ \tag{1.91}$$

which can then be substituted into the equation for $z_t^1$ to yield:

$$\frac{dz_t^1}{dt} = \tilde{f}_1 + \tilde{A}_1(z_t^1 - z_j^1) + \tilde{A}_2\left(\tilde{f}_2(t - t_j) + \frac{1}{2}\tilde{B}_2\alpha(t - t_j)^2\right) + \tilde{B}_1\alpha(t - t_j)\,,\quad t \in [t_j, t_{j+1}[ \tag{1.92}$$

Introducing, for ease of notation, the constants:

$$E = \frac{1}{2}\tilde{A}_2\tilde{B}_2\alpha\,,\; F = \tilde{A}_2\tilde{f}_2 + \tilde{B}_1\alpha\,,\; G = \tilde{f}_1 - \tilde{A}_1 z_j^1 \tag{1.93}$$

and the standard form of a linear inhomogeneous ordinary differential equation:

$$\frac{dz_t^1}{dt} - \tilde{A}_1 z_t^1 = E(t - t_j)^2 + F(t - t_j) + G\,,\quad t \in [t_j, t_{j+1}[ \tag{1.94}$$
