
C.3 Implementation Issues for Hidden Markov Models

C.3.1 Scaling

When calculating the forward and backward probabilities for long observation sequences, we often run into problems of representing the forward and backward probabilities within the normal range of double-precision values in the computer.

To cope with these problems, we now introduce a way of scaling the forward and backward probabilities, so that they can be calculated within the normal range of double-precision values in the computer.

Scaling the Forward Probabilities

The new scaled version of the Forward-Backward algorithm uses a scaling coefficient, denoted $c_t$, for every time step $t$ (or observation symbol $O_t$). The first scaling coefficient is defined by:

$$c_1 = \frac{1}{\sum_{i=1}^{N} \alpha_1(i)}, \tag{C.49}$$

whereas the rest of the scaling coefficients are defined as:

$$c_t = \frac{1}{\sum_{i=1}^{N} \hat{\hat{\alpha}}_t(i)}. \tag{C.50}$$

The notation $\hat{\hat{\alpha}}_t(i)$ denotes that the forward probability is not yet scaled for time $t$, but is scaled for every $t-1, t-2, \dots, 1$, whereas the notation $\hat{\alpha}_t(i)$ denotes that the probability is scaled for every $t$. Non-scaled forward probabilities will just be denoted $\alpha_t(i)$ as before.

With the new notation in our hands, we are now able to define the scaled Forward-Backward algorithm:
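In outline, the scaled algorithm follows the ordinary forward recursion of equations (C.11) and (C.12), but renormalises the values with the coefficient $c_t$ after every time step. The following is a minimal Python sketch of this idea; the array names `pi`, `A`, `B` and `O` are our own, assuming the model is stored as NumPy arrays and the observations as 0-based symbol indices:

```python
import numpy as np

def scaled_forward(pi, A, B, O):
    """Scaled forward pass: returns alpha_hat (T x N) and the scaling
    coefficients c_t, following the scaling idea of (C.49)-(C.50).

    pi : (N,)  initial state probabilities
    A  : (N,N) transition probabilities a_ij
    B  : (N,M) observation probabilities b_j(k)
    O  : sequence of observation symbols (0-based indices)
    """
    T, N = len(O), len(pi)
    alpha_hat = np.zeros((T, N))
    c = np.zeros(T)

    # Initialisation: alpha_1(i) = pi_i * b_i(O_1), then scale with c_1.
    alpha = pi * B[:, O[0]]
    c[0] = 1.0 / alpha.sum()
    alpha_hat[0] = c[0] * alpha

    # Induction: compute the not-yet-scaled alpha^^_t from the scaled
    # alpha^_{t-1}, then scale with c_t = 1 / sum_i alpha^^_t(i).
    for t in range(1, T):
        alpha = (alpha_hat[t - 1] @ A) * B[:, O[t]]
        c[t] = 1.0 / alpha.sum()
        alpha_hat[t] = c[t] * alpha

    return alpha_hat, c
```

Running such a sketch on the example model below should reproduce the scaling coefficients found by hand in the worked example.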

To illustrate how the algorithm works, we take an example using the HMM presented in section C.1.3 on page 110. First we calculate $\alpha_1(i)$ as normal, using the initialisation step given in equation (C.11) on page 113:

$$\alpha_1(1) = \pi_1 b_1(1) = \frac{1}{3}\times\frac{2}{5} = \frac{2}{15} \qquad \alpha_1(2) = \pi_2 b_2(1) = \frac{1}{3}\times\frac{4}{6} = \frac{4}{18} \qquad \alpha_1(3) = \pi_3 b_3(1) = \frac{1}{3}\times\frac{1}{6} = \frac{1}{18},$$

then we calculate the scaling coefficient $c_1$ from the sum of all $\alpha_1(i)$ over every state $i$, as given by equation (C.49):

$$c_1 = \frac{1}{\alpha_1(1) + \alpha_1(2) + \alpha_1(3)} = \frac{1}{2/15 + 4/18 + 1/18} = \frac{1}{37/90} = \frac{90}{37},$$

and finally we scale all the forward values using (C.52):

$$\hat{\alpha}_1(1) = c_1\,\alpha_1(1) = \frac{90}{37}\times\frac{2}{15} = \frac{12}{37} \qquad \hat{\alpha}_1(2) = c_1\,\alpha_1(2) = \frac{90}{37}\times\frac{4}{18} = \frac{20}{37} \qquad \hat{\alpha}_1(3) = c_1\,\alpha_1(3) = \frac{90}{37}\times\frac{1}{18} = \frac{5}{37}.$$

For the next time step we first compute the not-yet-scaled values $\hat{\hat{\alpha}}_2(j) = \big[\sum_i \hat{\alpha}_1(i)\,a_{ij}\big]b_j(O_2)$, giving $\hat{\hat{\alpha}}_2(1) = 1/15$, $\hat{\hat{\alpha}}_2(2) = 0$ and $\hat{\hat{\alpha}}_2(3) = 1/6$, and then we calculate the scaling value $c_2$:

$$c_2 = \frac{1}{\hat{\hat{\alpha}}_2(1) + \hat{\hat{\alpha}}_2(2) + \hat{\hat{\alpha}}_2(3)} = \frac{1}{1/15 + 0 + 1/6} = \frac{1}{7/30} = \frac{30}{7},$$

and finally use equation (C.55) to find the scaled forward values:

$$\hat{\alpha}_2(1) = c_2\,\hat{\hat{\alpha}}_2(1) = \frac{30}{7}\times\frac{1}{15} = \frac{2}{7} \qquad \hat{\alpha}_2(2) = 0 \qquad \hat{\alpha}_2(3) = c_2\,\hat{\hat{\alpha}}_2(3) = \frac{30}{7}\times\frac{1}{6} = \frac{5}{7}.$$

To find the values of $\hat{\alpha}_3(i)$ we do the following calculations:

$$\hat{\hat{\alpha}}_3(1) = \frac{2}{15} \qquad \hat{\hat{\alpha}}_3(2) = \frac{1}{9} \qquad \hat{\hat{\alpha}}_3(3) = \frac{1}{9}$$

$$c_3 = \frac{1}{\hat{\hat{\alpha}}_3(1) + \hat{\hat{\alpha}}_3(2) + \hat{\hat{\alpha}}_3(3)} = \frac{1}{2/15 + 1/9 + 1/9} = \frac{1}{16/45} = \frac{45}{16}$$

$$\hat{\alpha}_3(1) = c_3\,\hat{\hat{\alpha}}_3(1) = \frac{3}{8} \qquad \hat{\alpha}_3(2) = c_3\,\hat{\hat{\alpha}}_3(2) = \frac{5}{16} \qquad \hat{\alpha}_3(3) = c_3\,\hat{\hat{\alpha}}_3(3) = \frac{5}{16}$$

If we compare the above calculations to the calculations done for the normal Forward-Backward algorithm without scaling in section C.2.2 on page 115, we can see that the scaled probabilities are now normalised such that $\sum_{i=1}^{N}\hat{\alpha}_t(i) = 1$, enabling us to represent the forward probabilities within reasonable bounds even for large $t$.

If we look at the scaled version of calculating the forward probabilities $\hat{\alpha}_t(i)$, we can express it in terms of the non-scaled probabilities $\alpha_t(i)$, as given in equations (C.11) and (C.12) on page 113, and the scaling coefficients:

$$\hat{\alpha}_1(i) = c_1\,\alpha_1(i), \qquad \hat{\alpha}_t(j) = c_t\Big[\sum_{i=1}^{N}\hat{\alpha}_{t-1}(i)\,a_{ij}\Big]b_j(O_t). \tag{C.56}$$

Furthermore it can be proven by induction that:

$$\hat{\alpha}_t(i) = \Big(\prod_{\tau=1}^{t} c_\tau\Big)\alpha_t(i) = C_t\,\alpha_t(i). \tag{C.57}$$

To see why this is so, we take a look at the calculations for $\hat{\alpha}_2(j)$ in a HMM with three states, and observe how the scaling coefficients can be extracted from the summations:

$$\hat{\alpha}_2(j) = c_2\Big[\sum_{i=1}^{3}\hat{\alpha}_1(i)\,a_{ij}\Big]b_j(O_2) = c_2\Big[\sum_{i=1}^{3}c_1\,\alpha_1(i)\,a_{ij}\Big]b_j(O_2) = c_2\,c_1\Big[\sum_{i=1}^{3}\alpha_1(i)\,a_{ij}\Big]b_j(O_2) = c_2\,c_1\,\alpha_2(j).$$

With the definition of $\hat{\alpha}_t(j)$ given in equation (C.57) and the observation made in equation (C.56), we can now see that:

$$\hat{\alpha}_t(i) = \frac{\hat{\hat{\alpha}}_t(i)}{\sum_{j=1}^{N}\hat{\hat{\alpha}}_t(j)} = \frac{\alpha_t(i)}{\sum_{j=1}^{N}\alpha_t(j)}.$$

So, with the scaling applied to the Forward-Backward algorithm, we are now guaranteed that all forward values are normalised within $[0,1]$, and that $\sum_{i=1}^{N}\hat{\alpha}_t(i) = 1$.
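For instance, the scaled values found for $t = 3$ in the example above satisfy this exactly: $\hat{\alpha}_3(1) + \hat{\alpha}_3(2) + \hat{\alpha}_3(3) = 3/8 + 5/16 + 5/16 = 1$.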

Now that we have scaled the forward values, we are no longer able to compute the value $P(O\mid\lambda)$ by equation (C.13) given on page 114. But if we look at the property $\sum_{i=1}^{N}\hat{\alpha}_t(i) = 1$ for the scaled forward values for $t = T$, we can instead compute the likelihood from the scaling coefficients, since $\sum_{i=1}^{N}\hat{\alpha}_T(i) = C_T\sum_{i=1}^{N}\alpha_T(i) = C_T\,P(O\mid\lambda) = 1$:

$$\log[P(O\mid\lambda)] = -\sum_{t=1}^{T}\log c_t.$$

Here we compute the log likelihood $\log[P(O\mid\lambda)]$ instead of $P(O\mid\lambda)$, because $P(O\mid\lambda)$ will probably be very small and outside the range of double-precision values in the computer.
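As a check, the three scaling coefficients found in the example above give

$$P(O\mid\lambda) = \frac{1}{c_1\,c_2\,c_3} = \frac{37}{90}\times\frac{7}{30}\times\frac{16}{45} = \frac{4144}{121500} \approx 0.0341,$$

which is still representable for this short sequence, but for realistic sequence lengths $T$ the product underflows quickly, whereas the sum of $\log c_t$ terms remains well within double-precision range.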

Scaling the Backward Probabilities

We now turn to the scaling of the backward probabilities. This is done in quite the same way as with the forward probabilities, but instead of calculating new scale coefficients for all the backward probabilities, we just re-use the same scaling coefficients as found by the scaled version of the Forward-Backward algorithm, keeping the calculations to a minimum.
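A minimal sketch of this, using the same assumed array conventions as the forward sketch above, simply takes the scaling coefficients `c` produced by `scaled_forward` as an argument:

```python
import numpy as np

def scaled_backward(A, B, O, c):
    """Scaled backward pass:
    beta_hat_t(i) = c_t * sum_j a_ij * b_j(O_{t+1}) * beta_hat_{t+1}(j).

    Reuses the scaling coefficients c found by the scaled forward pass,
    so scaled_forward() must always be run first.
    """
    T, N = len(O), A.shape[0]
    beta_hat = np.zeros((T, N))

    # Initialisation: beta_T(i) = 1, scaled by c_T.
    beta_hat[T - 1] = c[T - 1]

    # Induction, running backwards in time.
    for t in range(T - 2, -1, -1):
        beta_hat[t] = c[t] * (A @ (B[:, O[t + 1]] * beta_hat[t + 1]))

    return beta_hat
```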

To illustrate how the algorithm works, we take the example HMM presented in section C.1.3 on page 110 and calculate the $\hat{\beta}$ values for the observation sequence $O = RBG$, using the scaling coefficients found by the scaled Forward-Backward algorithm.

$$\hat{\beta}_3(1) = c_3 = \frac{1}{16/45} = \frac{45}{16} \qquad \hat{\beta}_3(2) = c_3 = \frac{45}{16} \qquad \hat{\beta}_3(3) = c_3 = \frac{45}{16}$$

$$\hat{\beta}_2(1) = c_2\big[a_{11}b_1(2)\hat{\beta}_3(1) + a_{12}b_2(2)\hat{\beta}_3(2) + a_{13}b_3(2)\hat{\beta}_3(3)\big] = \frac{1/3\times 2/5\times 45/16 \;+\; 1/3\times 2/6\times 45/16 \;+\; 1/3\times 2/6\times 45/16}{7/30} = \frac{3/8 + 5/16 + 5/16}{7/30} = \frac{30}{7}$$

$$\hat{\beta}_2(2) = c_2\big[a_{21}b_1(2)\hat{\beta}_3(1) + a_{22}b_2(2)\hat{\beta}_3(2) + a_{23}b_3(2)\hat{\beta}_3(3)\big] = \frac{30}{7}$$

$$\hat{\beta}_2(3) = c_2\big[a_{31}b_1(2)\hat{\beta}_3(1) + a_{32}b_2(2)\hat{\beta}_3(2) + a_{33}b_3(2)\hat{\beta}_3(3)\big] = \frac{30}{7}$$

$$\hat{\beta}_1(1) = c_1\big[a_{11}b_1(3)\hat{\beta}_2(1) + a_{12}b_2(3)\hat{\beta}_2(2) + a_{13}b_3(3)\hat{\beta}_2(3)\big] = \frac{1/3\times 1/5\times 30/7 \;+\; 1/3\times 0/6\times 30/7 \;+\; 1/3\times 3/6\times 30/7}{37/90} = \frac{2/7 + 0 + 5/7}{37/90} = \frac{90}{37}$$

$$\hat{\beta}_1(2) = c_1\big[a_{21}b_1(3)\hat{\beta}_2(1) + a_{22}b_2(3)\hat{\beta}_2(2) + a_{23}b_3(3)\hat{\beta}_2(3)\big] = \frac{90}{37}$$

$$\hat{\beta}_1(3) = c_1\big[a_{31}b_1(3)\hat{\beta}_2(1) + a_{32}b_2(3)\hat{\beta}_2(2) + a_{33}b_3(3)\hat{\beta}_2(3)\big] = \frac{90}{37}$$

When using the scaling coefficients found from the forward values to scale the backward values as well, we are forced to always calculate the scaled forward values before we can calculate the scaled backward values. This should be kept in mind when implementing the scaled versions of the Forward-Backward and Backward-Forward algorithms.

As with the scaled forward variables, we can express the scaled backward variables in terms of the non-scaled backward variables and the product of the scaling coefficients.

$$\hat{\beta}_t(i) = \left(\prod_{\tau=t}^{T} c_\tau\right)\beta_t(i) = D_t\,\beta_t(i) \tag{C.66}$$
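For the last time step this is easy to verify in the example above: $D_3 = c_3$ and $\beta_3(i) = 1$ for all $i$, so $\hat{\beta}_3(i) = c_3\,\beta_3(i) = 45/16$, which is exactly the initialisation value computed earlier.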

Re-Estimating Using Scaled $\alpha$s and $\beta$s

We now turn to the re-estimation of the Hidden Markov Model parameters using the scaled forward and backward probabilities.

When re-estimating the initial state probabilities $\pi$, we need to calculate $\gamma_1(i)$, but how do we find the $\gamma_t(i)$ values from the scaled $\alpha$s and $\beta$s? As seen in section C.2.4, from equation (C.22) on page 120, we defined $\gamma_t(i)$ as:

$$\gamma_t(i) = \frac{\alpha_t(i)\,\beta_t(i)}{P(O\mid\lambda)}$$

Furthermore, the product of the terms $C_t$ and $D_t$ can be expressed as:

$$C_t\,D_t = \prod_{\tau=1}^{t} c_\tau \prod_{\tau=t}^{T} c_\tau = c_t \prod_{\tau=1}^{T} c_\tau = c_t\,C_T = \frac{c_t}{P(O\mid\lambda)},$$

enabling us to express $\gamma_t(i)$ in terms of the scaled forward and backward probabilities and the scale coefficient as:

$$\gamma_t(i) = \frac{\hat{\alpha}_t(i)\,\hat{\beta}_t(i)}{c_t}$$

The re-estimation of the initial state probability distribution $\pi$ is therefore given by:

\begin{align}
\pi_i &= \gamma_1(i) \tag{C.75}\\
      &= \frac{\hat{\alpha}_1(i)\,\hat{\beta}_1(i)}{c_1} \tag{C.76}
\end{align}
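Expressed in code, a sketch reusing the `alpha_hat`, `beta_hat` and `c` arrays produced by the earlier sketches looks as follows:

```python
# gamma_t(i) = alpha_hat_t(i) * beta_hat_t(i) / c_t, for all t and i at once
gamma = alpha_hat * beta_hat / c[:, None]

# Re-estimated initial state distribution: pi_i = gamma_1(i)
new_pi = gamma[0]
```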

When re-estimating the state transition probabilities $a_{ij}$ using the scaled forward and backward values, we simply use the following equation:

$$a_{ij} = \frac{\displaystyle\sum_{t=1}^{T-1}\hat{\alpha}_t(i)\,a_{ij}\,b_j(O_{t+1})\,\hat{\beta}_{t+1}(j)}{\displaystyle\sum_{t=1}^{T-1}\hat{\alpha}_t(i)\,\hat{\beta}_t(i)/c_t}$$

To see why this gives the correct re-estimation, we re-write the equation in terms of $\alpha$s and $\beta$s as defined by equations (C.57) and (C.66):

$$a_{ij} = \frac{\displaystyle\sum_{t=1}^{T-1} C_t\,\alpha_t(i)\,a_{ij}\,b_j(O_{t+1})\,D_{t+1}\,\beta_{t+1}(j)}{\displaystyle\sum_{t=1}^{T-1} C_t\,\alpha_t(i)\,D_{t+1}\,\beta_t(i)}$$

The product of the two terms $C_t$ and $D_{t+1}$ can be expressed by:

$$C_t\,D_{t+1} = \prod_{\tau=1}^{t} c_\tau \prod_{\tau=t+1}^{T} c_\tau = \prod_{\tau=1}^{T} c_\tau = C_T,$$

which is independent of $t$ and can therefore be cancelled out in both the numerator and the denominator, resulting in the original re-estimation of $a_{ij}$ as presented in equation (C.48).

The re-estimation of the observation probabilities $b_j(k)$ is quite simple, because it is expressed in terms of $\gamma_t(i)$ values, and we have already shown how to compute the $\gamma_t(i)$ values using the scaled forward and backward values.
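Putting the pieces together, a sketch of the remaining re-estimation formulas using the scaled values, under the same assumed array conventions as above (the helper name `reestimate_A_B` is our own), could look like this:

```python
import numpy as np

def reestimate_A_B(A, B, O, alpha_hat, beta_hat, c):
    """Re-estimate the transition and observation probabilities
    from the scaled forward and backward values."""
    T, N = len(O), A.shape[0]

    # gamma_t(i) = alpha_hat_t(i) * beta_hat_t(i) / c_t
    gamma = alpha_hat * beta_hat / c[:, None]

    # Numerator of a_ij: sum over t of
    # xi_t(i, j) = alpha_hat_t(i) * a_ij * b_j(O_{t+1}) * beta_hat_{t+1}(j),
    # where the scaling factor C_t * D_{t+1} = C_T = 1 / P(O|lambda)
    # is the same in every term, as argued above.
    new_A = np.zeros_like(A)
    for t in range(T - 1):
        new_A += A * np.outer(alpha_hat[t], B[:, O[t + 1]] * beta_hat[t + 1])
    new_A /= gamma[:-1].sum(axis=0)[:, None]

    # b_j(k): expected number of times in state j emitting symbol k,
    # divided by the expected number of times in state j.
    new_B = np.zeros_like(B, dtype=float)
    for t in range(T):
        new_B[:, O[t]] += gamma[t]
    new_B /= gamma.sum(axis=0)[:, None]

    return new_A, new_B
```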