
2.2 Model Predictive Control

Model Predictive Control, or MPC, is an advanced method of process control that has been in use in process industries such as chemical plants and oil refineries since the 1980s. MPC, as the name suggests, uses an explicit model of the plant to predict the future behavior of the controlled variables. Based on this prediction, the controller calculates the future moves of the manipulated variables by solving an optimization problem on-line. The controller minimizes the error between the predicted outputs and their reference values over the horizon, and only the first control action is implemented. Model predictive controllers rely on dynamic models of the process, most often linear empirical models obtained by system identification. MPC is also referred to as receding horizon control or moving horizon control (Qin and Badgwell, 2003).
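For reference, the on-line optimization solved at each sampling instant can be written in a generic form (the horizons $N_p$, $N_c$ and the weights $Q$, $R$ below are generic placeholders rather than the notation of a particular algorithm in this chapter):

\min_{\Delta u(k),\ldots,\Delta u(k+N_c-1)} \; \sum_{i=1}^{N_p} \| \hat{y}(k+i) - r(k+i) \|_Q^2 \; + \; \sum_{j=0}^{N_c-1} \| \Delta u(k+j) \|_R^2

subject to constraints on the inputs and outputs, where $\hat{y}$ denotes the model prediction and $r$ the reference. Only the first move $\Delta u(k)$ is applied before the problem is solved again at the next sample.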

Figure 2.1: Model Predictive Control Scheme.


Figure 2.1 shows the structure of a typical MPC system. It also makes clear that the behavior of an MPC system can be quite complicated, because the control action is determined as the result of an on-line optimization problem. This problem is constructed on the basis of a process model and process measurements; the measurements provide the feedback (and, optionally, feed-forward) element in the MPC structure. Different types of MPC mainly differ in how they handle the following:

• input-output model,
• disturbance prediction,
• objective function,
• measurement,
• constraints, and
• sampling period (how frequently the on-line optimization problem is solved).

Regardless of the particular choice made for the above elements, on-line optimization is the common thread tying them together.

2.2.1 Elements of MPC

All MPC algorithms possess common elements, and different options can be chosen for each element, giving rise to different algorithms. These elements are:

• Prediction Model
• Objective Function
• Control Law

Prediction Model

The model is the cornerstone of MPC; a complete design should include the necessary mechanisms for obtaining the best possible model, which should be complete enough to fully capture the process dynamics and allow the predictions to be calculated, and at the same time be intuitive and permit theoretical analysis. The use of the process model is determined by the necessity to calculate the predicted output at future instants. The different MPC strategies can use various models to represent the relationship between the outputs and the measurable inputs, some of which are manipulated variables and others measurable disturbances that can be compensated by feed-forward action. Some of the available types of models are:

• Finite impulse response (FIR) model
• Step response model
• State space model
• Transfer function descriptions such as AR(MA)X models
• Auto-Regression with external input (ARX) model

Various types of models are used with MPC, with the FIR (Finite Impulse Response) or step response models and ARX (Auto-Regressive with eXternal inputs) models being the most common in industrial practice. Step or impulse response models are non-parametric models that are widely used in industry. The advantage of such models is that they reveal the plant time constant, gain and delay directly from the process response curves. FIR models also require less prior information than transfer function models; essentially only the settling time is needed, which can easily be obtained. These are the main advantages of using FIR models when the plant has many input-output variables and complicated dynamic responses due to interactions.
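As a small illustration of the relationship between impulse response and step response coefficients, and of the FIR prediction itself, consider the following sketch (all coefficient and input values are made up for illustration):

import numpy as np

# Hypothetical impulse response coefficients h_i of a stable process (they settle to zero)
h = np.array([0.0, 0.4, 0.3, 0.2, 0.1, 0.0])       # h_0, h_1, ..., h_5
a = np.cumsum(h)                                    # step response coefficients a_i = sum of h_j for j <= i

# FIR prediction: y(k) = sum_i h_i * u(k - i) for a given sequence of past inputs
u_past = np.array([0.8, 1.0, 1.0, 0.5, 0.5, 0.0])   # u(k), u(k-1), ..., u(k-5), made up
y_k = float(h @ u_past)
print(a, y_k)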

The disadvantages of FIR models are that they can only be used for stable systems and that they are difficult to apply when identifying processes with slow dynamics.

In such cases, where the dynamics are slow, transfer function models are used; these can be converted into forms such as the ARX model for linear systems and the ARMAX model for non-linear applications. However, model mismatch may cause bias in the estimated parameters.

The state space model formulation can easily be augmented with additional states to represent the effect of disturbances, and it can be used in both linear and non-linear form. State space models are easy to formulate in both continuous and discrete time, but it is quite difficult to determine them in real time.
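As a sketch of how a discrete-time state space model can be augmented with a disturbance state, consider the following minimal example (the matrices are an arbitrary first order illustration, not a model used in this work):

import numpy as np

# Nominal model: x(k+1) = A x(k) + B u(k),  y(k) = C x(k)
A = np.array([[0.9]])
B = np.array([[0.1]])
C = np.array([[1.0]])

# Augment with an integrating output-disturbance state: d(k+1) = d(k),  y(k) = C x(k) + d(k)
A_aug = np.block([[A, np.zeros((1, 1))],
                  [np.zeros((1, 1)), np.eye(1)]])
B_aug = np.vstack([B, np.zeros((1, 1))])
C_aug = np.hstack([C, np.eye(1)])
print(A_aug, B_aug, C_aug, sep="\n")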

Objective Function

The various MPC algorithms propose different cost functions for obtaining the control law. The general aim is that the future output over the considered horizon should follow a specified reference signal while satisfying the relevant constraints. The objective function may be a minimization or a maximization problem depending on the application; the cost functions normally used in process control are minimization functions with inequality constraints.

Obtaining the Control Law

In order to obtain the control input values, it is necessary to minimize the objective function. To do this, the predicted outputs are expressed as a function of past values of the inputs and outputs and of the future control signals, using the chosen model, and are substituted into the cost function, giving an expression whose minimization leads to the desired values. An analytical solution can be obtained for a quadratic criterion if the model is linear and there are no constraints; otherwise an iterative optimization method is used.

2.2.2 Dynamic Matrix Control

Dynamic Matrix Control (DMC) was the first Model Predictive Control (MPC) algorithm, introduced in the early 1980s. Nowadays, DMC is available in almost all commercial industrial distributed control systems and process simulation software packages. The original work on DMC was proposed by Cutler and Ramaker (1980), and a detailed review of DMC control techniques is provided by Camacho and Bordons (1999, 2004). DMC is based on a discrete-time step response model; at each step it calculates a value of the manipulated variable that remains unchanged during the next time step. The new value of the manipulated variable is calculated to give the smallest sum of squared errors between the set point and the predicted values of the controlled variable. The number of time steps the DMC uses for its prediction is called the "Prediction Horizon".

Prediction:

A brief overview of Dynamic Matrix Control is given by Chidambaram (2003). The dynamic model used to predict the future values of the controlled variable is represented by a vector, A, whose elements are defined as

a_i = \frac{\Delta y(t_i)}{\Delta u(t_0)}

where $\Delta y(t_i) = y(t_i) - y(t_0)$, $y(t)$ is the value of the controlled variable at time $t$, and $\Delta u(t_0)$ is the change in the manipulated variable at $t_0$. The predicted values along the horizon are

y(k) = \sum_{i=1}^{N-1} a_i \, \Delta u(k-i) + a_N \, u(k-N) + d(k) \qquad (2.1)

The present value of the disturbance is estimated as the difference between the present measured output and the effect of the past inputs:

d(k) = y_{meas}(k) - \sum_{i=1}^{N-1} a_i \, \Delta u(k-i) - a_N \, u(k-N) \qquad (2.2)

Thus the linear estimate of the future output can be written in matrix notation as

y_{lin} = y_{past} + A \, \Delta u + d

where $y_{lin} = [y(k+1), y(k+2), \ldots, y(k+P)]^T$ and $d = [d(k+1), d(k+2), \ldots, d(k+P)]^T$.

Since future values of $d(k+i)$ are not available, the above estimate is used and is assumed to remain the same over the future sampling instants. A more accurate estimate of $d(k+i)$ is possible, provided the load disturbance is measured and a reliable model from load disturbance to measured output is available.

The effect of the known past inputs on the future output is described by the vector $y_{past}$. $A$ is the dynamic matrix composed of the step response coefficients as explained above. $P$ denotes the length of the prediction horizon and $M$ is the control (moving) horizon, i.e. the number of future moves $\Delta u(k), \ldots, \Delta u(k+M-1)$ calculated by the DMC algorithm. With these definitions, the future output can be predicted for any given vector of future control moves $\Delta u$.
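A minimal sketch of how the $P \times M$ dynamic matrix $A$ can be assembled from the step response coefficients follows (the coefficient values and horizons are illustrative only):

import numpy as np

def dynamic_matrix(a, P, M):
    """Build the P x M DMC dynamic matrix; a[0] = a_1, a[1] = a_2, ..."""
    A = np.zeros((P, M))
    for i in range(P):        # prediction instant k + i + 1
        for j in range(M):    # control move applied at k + j
            if i >= j:
                A[i, j] = a[i - j]
    return A

a = np.array([0.2, 0.5, 0.7, 0.85, 0.95, 1.0, 1.0, 1.0])   # hypothetical step response coefficients
print(dynamic_matrix(a, P=6, M=2))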

For calculating the control inputs the following control objective is used

\min_{\Delta u} \; E = \sum_{i=1}^{P} \gamma^2(i) \, [y_{sp}(k+i) - y_{lin}(k+i)]^2 + \sum_{j=1}^{M} \lambda^2 \, [\Delta u(k+M-j)]^2 \qquad (2.3)

where $\gamma$ and $\lambda$ are time-varying weights on the output error and on the change in input, respectively, and $\Gamma$ and $\Lambda$ are the corresponding diagonal weighting matrices. The least squares solution of the above problem is given by

\Delta u = [A^T \Gamma^T \Gamma A + \Lambda^T \Lambda]^{-1} A^T \Gamma^T \Gamma \, (y_{sp} - y_{past} - d) \qquad (2.4)

Usually only the first calculated move $\Delta u(k)$ is implemented, and the calculations are repeated at the next sampling instant.
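A minimal numerical sketch of Equation (2.4), assuming the dynamic matrix and the prediction terms are already available (all numerical values below are placeholders):

import numpy as np

def dmc_moves(A, y_sp, y_past, d, gamma, lam):
    """Unconstrained least squares DMC solution, Equation (2.4)."""
    Gamma = np.diag(gamma)                 # output-error weights, length P
    Lam = np.diag(lam)                     # move-suppression weights, length M
    e = y_sp - y_past - d                  # predicted error if no further moves are made
    H = A.T @ Gamma.T @ Gamma @ A + Lam.T @ Lam
    return np.linalg.solve(H, A.T @ Gamma.T @ Gamma @ e)

# Illustrative data: P = 4 predicted outputs, M = 2 future moves
A = np.array([[0.2, 0.0], [0.5, 0.2], [0.7, 0.5], [0.85, 0.7]])
du = dmc_moves(A, y_sp=np.ones(4), y_past=np.zeros(4), d=np.zeros(4),
               gamma=np.ones(4), lam=0.5 * np.ones(2))
print(du)   # only du[0] would actually be implemented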

2.2.3 DMC tuning strategy

Since most processes can be represented by first order plus dead time (FOPDT) models, the tuning method suggested by Shridhar and Cooper (1997) can be used, as summarized below; a small numerical sketch of these rules is given after the list.

1. It is assumed that the system is of the form
   \frac{y(s)}{u(s)} = \frac{K_p}{\tau_p s + 1} \, e^{-\theta_p s} \qquad (2.5)
2. With the above transfer function model, the sampling time is first selected such that $T \le 0.1\,\tau_p$ and $T \le 0.5\,\theta_p$.
3. The discrete dead time is then calculated as $k = \theta_p/T + 1$ (rounded to an integer number of samples).
4. The prediction horizon and the model horizon, i.e. the process settling time in samples, are calculated as $P = N = 5\tau_p/T + k$.
5. The control horizon $M$ is an integer in the range of 1 to 6.
6. The move suppression coefficient is given by
   f = 0, \qquad M = 1
   f = \frac{M}{10}\left(\frac{3.5\,\tau_p}{T} + 2 - \frac{M-1}{2}\right), \qquad M > 1

7. Implement DMC using the traditional step response matrix of the actual process and the following parameters computed in the preceding steps:

• sample time, T
• model horizon (process settling time in samples), N
• prediction horizon (optimization horizon), P
• control horizon (number of moves), M
• move suppression coefficient, λ
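As referenced above, here is a small numerical sketch of these tuning rules (the FOPDT parameters passed in are placeholders, and the rounding of the sample counts to integers is an implementation choice):

import math

def dmc_tuning(tau_p, theta_p, M=2):
    """Sketch of the DMC tuning steps summarized above."""
    # Step 2: sample time satisfying T <= 0.1*tau_p and T <= 0.5*theta_p
    T = min(0.1 * tau_p, 0.5 * theta_p)
    # Step 3: discrete dead time in samples
    k = math.ceil(theta_p / T) + 1
    # Step 4: prediction horizon and model horizon (process settling time in samples)
    P = N = math.ceil(5.0 * tau_p / T) + k
    # Step 6: move suppression coefficient
    f = 0.0 if M == 1 else (M / 10.0) * (3.5 * tau_p / T + 2.0 - (M - 1) / 2.0)
    return {"T": T, "k": k, "P": P, "N": N, "M": M, "f": f}

print(dmc_tuning(tau_p=10.0, theta_p=2.0, M=2))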

Tuning of unconstrained SISO DMC is challenging because of the number of adjustable parameters that affect closed-loop performance. Practical limitations often restrict the availability of the sample time, T, as a tuning parameter.

Nevertheless, the moving horizon principle is a widely used technique in real-time control.

2.2.4 Principle of moving horizon MPC

An excellent overview of the state of the art in moving horizon based MPC is given by Garcia et al. (1989), Camacho and Bordons (2004) and Goodwin et al. (2004). A model predictive control system consists of an estimator and a regulator, as illustrated in Figure 2.2. The inputs to the MPC are the target values, r, for the process outputs, z, and the measured process outputs, y. The outputs from the MPC are the manipulated variables, u.

Figure 2.2: Generic model predictive control system (regulator and estimator inside the MPC, connected to the plant and its sensors/lab analysis; signals: targets r, measurements y, manipulated variables u, state estimate x̂).

The principle of the moving horizon is illustrated in Figure 2.3. MPC is based on iterative, finite horizon optimization of a plant model. At time t the current plant state is sampled and a cost-minimizing control strategy is computed via a numerical minimization algorithm, as given in Equation (2.6), for a relatively short time horizon in the future, called the control horizon N_r. Specifically, an on-line calculation is used to estimate the projected trajectory over the prediction horizon N_e and to find a cost-minimizing control strategy over the length of the control horizon. Only the first step of the control strategy is implemented; then the plant state is sampled again and the calculations are repeated starting from the new current state, yielding a new control sequence and a new predicted state path. The prediction horizon keeps shifting forward, and for this reason the approach is called receding or moving horizon control.

Figure 2.3: Principle of Moving Horizon MPC.
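The receding horizon idea can be sketched as the following loop, using a toy scalar plant and a simple least-squares stand-in for the on-line optimization described above (all numbers are illustrative placeholders):

import numpy as np

# Toy scalar plant x(k+1) = a*x(k) + b*u(k); the parameters are placeholders
a, b = 0.9, 0.5
Np = 10        # prediction horizon (number of future samples considered)
x = 1.0        # current state (assumed measured or estimated)
r = 0.0        # target value

def solve_finite_horizon(x0, Np):
    """Plan Np future inputs minimizing sum of (x - r)^2 + 0.1*u^2 via least squares."""
    # Predicted states are linear in the inputs: x(k+i) = a^i * x0 + sum_j a^(i-1-j) * b * u(j)
    Phi = np.array([[a ** (i - 1 - j) * b if j < i else 0.0 for j in range(Np)]
                    for i in range(1, Np + 1)])
    free = np.array([a ** i * x0 for i in range(1, Np + 1)])
    # Stack the tracking objective and the input penalty into one least-squares problem
    H = np.vstack([Phi, np.sqrt(0.1) * np.eye(Np)])
    g = np.concatenate([r - free, np.zeros(Np)])
    u, *_ = np.linalg.lstsq(H, g, rcond=None)
    return u

for k in range(20):                       # receding horizon loop
    u_plan = solve_finite_horizon(x, Np)  # optimize over the whole horizon
    x = a * x + b * u_plan[0]             # apply only the first move; the horizon then shifts
print(round(x, 4))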

Normally MPCs are equipped with constraints on the manipulated inputs and the outputs. Constraints can be of two types: hard constraints and soft constraints.

Hard constraints represent absolute limitations imposed on the system; as the name indicates, they must always be satisfied and cannot be violated. Soft constraints express a preference among solutions: they may be violated, but violations are normally penalized heavily. The optimization methods for solving predictive control problems are described in Maciejowski (2002).
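One common way to express this difference is to keep hard bounds on the inputs while softening the output bounds with a slack variable (a generic sketch, not tied to a specific algorithm in this chapter):

u_{min} \le u(k+j) \le u_{max} \qquad \text{(hard)}
y_{min} - \varepsilon \le \hat{y}(k+i) \le y_{max} + \varepsilon, \quad \varepsilon \ge 0 \qquad \text{(soft)}

where a penalty term such as $\rho\,\varepsilon^2$ (with a large weight $\rho$) is added to the cost function, so the optimizer violates the output bounds only when no feasible alternative exists.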
