
2.2 Inverse problems & regularisation

2.2.4 Applying regularisation to computed tomography

We turn our attention to the discrete CT Model 2.6. Recall that added noise, even if relatively small, made the discretised inversion formula of Theorem 2.4 produce severely deteriorated solutions, caused by the ill-posedness of the problem. Hence, we are motivated to apply regularisation to the problem.

We test on the Shepp–Logan phantom shown in Figure 2.5. The generated fan-beam data can be seen as a sinogram in Figure 2.6. The data is generated from fan-beam projections using the physical parameters in Table 2.1. We now describe three methods that add regularisation to the problem and show how they perform on this set-up.


Figure 2.5: Shepp–Logan phantom used for simulating measurement data.

The interior of the blue circle indicates the area which is illuminated from all source positions by X-rays defined for the set-up in Table 2.1. In this case the entire object is illuminated from all source positions.

(a) Sinogram generated from the forward model. (b) Simulated sinogram with added 2% relative noise.

Figure 2.6: Sinograms generated from the object in Figure 2.5 using the measurement set-up in Table 2.1.

2.2.4.1 Tikhonov regularisation

Tikhonov regularisation takes the form of the optimisation problem in (2.8) with a squared 2-norm data fidelity and regularisation term as follows:

\[
x_\alpha = \operatorname*{argmin}_x \; \|Ax - b^\delta\|_2^2 + \alpha \|x\|_2^2 .
\]

The method is quite popular as a regularisation technique for the discrete CT Model 2.6, because of the combination of an intuitive smoothness prior, \(\|x\|_2^2\), and the fact that it can be rewritten, and hence implemented, as a least squares problem (see e.g. [5]).
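The least squares rewrite can be sketched as follows: stacking \(A\) on top of \(\sqrt{\alpha} I\) turns the Tikhonov problem into an ordinary least squares problem. This is a minimal numpy illustration on a small random system, not the thesis implementation; the function name `tikhonov` and the toy problem are purely illustrative.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Solve min_x ||Ax - b||_2^2 + alpha ||x||_2^2 by solving the
    equivalent stacked least squares problem with matrix [A; sqrt(alpha) I]."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Tiny illustration (not the CT geometry): a noisy linear system.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
x_true = rng.standard_normal(30)
b = A @ x_true + 0.02 * rng.standard_normal(50)
x_reg = tikhonov(A, b, alpha=2.0)
```

The stacked form is numerically preferable to forming the normal equations \((A^T A + \alpha I) x = A^T b\) explicitly, although both yield the same minimiser.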

The need for parameter tuning for Tikhonov is apparent in the regularisation parameter, α. If the parameter is chosen too small, i.e. α → 0, we get a solution corrupted by noise. On the other hand, if α is chosen too large, i.e. α → ∞, we get a solution that is smoothed to zero. The idea is thus to choose a regularisation parameter in between that gives the best reconstructed image. For this simulated problem, we know the real solution, and so we can compare with the relative error and mutual information measures as described in Definition 2.7. This is illustrated in Figure 2.7 for a number of regularisation parameters.
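Since the ground truth is known in a simulated problem, the parameter sweep behind a plot like Figure 2.7 amounts to solving the Tikhonov problem over a logarithmic grid of α values and recording the relative error. A hedged sketch, again on a hypothetical toy system rather than the CT data; `sweep_alpha` and `relative_error` are names of my choosing:

```python
import numpy as np

def relative_error(x, x_true):
    """RE = ||x - x_true||_2 / ||x_true||_2."""
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

def sweep_alpha(A, b, x_true, alphas):
    """Solve the Tikhonov normal equations for each alpha and
    return the alpha minimising the relative error, plus all errors."""
    n = A.shape[1]
    errors = []
    for alpha in alphas:
        x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
        errors.append(relative_error(x, x_true))
    return alphas[int(np.argmin(errors))], errors

# Illustrative problem with known ground truth.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 40))
x_true = rng.standard_normal(40)
b = A @ x_true + 0.05 * rng.standard_normal(60)
alphas = np.logspace(-4, 4, 9)
best_alpha, errors = sweep_alpha(A, b, x_true, alphas)
```

In practice, when the true solution is unknown, heuristics such as the discrepancy principle or the L-curve replace this direct comparison.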

From this we see that choosing α = 2 gives the lowest error. In Figure 2.8 we see a comparison of the image quality in the extreme cases of α too low or too high compared to α = 2; just right.


Figure 2.7: Error measures (see Definition 2.7, p. 15) depending on regularisation parameter for Tikhonov regularisation in Section 2.2.4.1 on the data in Figure 2.6. We see that there is an optimal choice of regularisation parameter for both error measures around 2 for this particular problem.

(a) α = 0.0001, RE = 2.71, MI = 0.02. (b) α = 2, RE = 0.28, MI = 0.74. (c) α = 200, RE = 0.99, MI = 0.56.

Figure 2.8: Reconstructed images using Tikhonov regularisation on the simulated data in Figure 2.6 with regularisation parameters as shown.

2.2.4.2 Semi-converging iterative methods

The semi-converging methods arise from iterative regularisation methods that let the number of iterations play the role of regularisation parameter. Typically these methods converge towards an undesired solution. The interesting thing, however, is that on the way to this solution some of the methods exhibit semi-convergence. That is, they approach some solution, x, that is the best approximation to the ground truth possible for the method, and then diverge again towards the undesired solution. For the purpose of this thesis, we state the simplest of these methods, namely the Landweber iterative method, which takes the form:

\[
x^{[k+1]} = x^{[k]} + s A^T (b^\delta - A x^{[k]}),
\]
where \(s\) is a step size parameter that must satisfy \(0 < s < 2 \|A^T A\|_2^{-1}\).

There is much to say about the iterative methods that exhibit semi-convergence (see e.g. [5]), but for the purpose of this thesis, we simply show how the iteration number acts as a regularisation parameter. In Figure 2.9 we see the change in error measures as the iteration number increases. Indeed the method shows semi-convergence, and we see that stopping around the 100th iteration gives the best solution. In Figure 2.10 the reconstructed images are shown for three choices of iteration number.
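The Landweber iteration above is simple enough to sketch directly. This is an illustrative numpy version, not the thesis code; the default step size is a valid (conservative) choice inside the stated interval, since \(\|A^T A\|_2 = \sigma_{\max}(A)^2\):

```python
import numpy as np

def landweber(A, b, n_iter, s=None):
    """Landweber iteration x_{k+1} = x_k + s A^T (b - A x_k),
    started from x_0 = 0, with step size s in (0, 2 / ||A^T A||_2)."""
    if s is None:
        # ||A^T A||_2 = sigma_max(A)^2; a power iteration could avoid the SVD.
        s = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + s * A.T @ (b - A @ x)
    return x

# Illustrative well-conditioned system.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x_lw = landweber(A, b, n_iter=2000)
```

For an ill-posed problem with noisy data, one would monitor the error (or a stopping rule) and return the iterate near the semi-convergence minimum instead of iterating to convergence.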

2.2.4.3 Filtered back projection

Finally, we consider the discrete inversion formula (2.6). By modifying the filter so that it does not amplify high-frequency noise, we add regularisation to the problem. Denote by \(\Lambda_\Gamma\) a modified filtering step such that \(\Lambda_\Gamma = F^{-1} \Gamma(\cdot) F\), for some frequency filter \(\Gamma : \mathbb{R} \to \mathbb{R}\). The regularised solution is then found by

\[
x = \tfrac{1}{2} (2\pi)^{-3/2} A^T \Lambda_\Gamma b^\delta. \tag{2.9}
\]
When the filter is \(\Gamma(\cdot) = |\cdot|\), we have the discrete version of the inversion formula (2.6). In Figure 2.11 we see an illustration of some standard filters that can be used to replace the \(|\cdot|\) filter in the Fourier domain, thus adding regularisation. The idea is to let the filter go to zero as the frequency increases, thus removing the noise that is represented by high frequencies. In Figure 2.12 we see reconstructed images using the different filtering methods. We see that the filtering acts as regularisation and dampens the noise corruption in the image. Note that, since the discrete Fourier transform in the frequency domain is band limited, the \(|\cdot|\) filter also regularises the solution to some extent.
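The filtering step \(\Lambda_\Gamma = F^{-1}\,\Gamma(\cdot)\,F\) can be sketched with the discrete Fourier transform applied along each detector row of the sinogram. The window formulas below are the standard Hamming, cosine, and Hann tapers applied to the Ram-Lak ramp \(|\omega|\); the exact filter definitions used in the thesis may differ, and the function names are my own:

```python
import numpy as np

def frequency_filter(n, kind="ram-lak"):
    """Build Gamma on the DFT frequency grid for a detector row of length n.
    Ram-Lak is |omega|; the other filters taper it to zero at high frequencies."""
    omega = np.fft.fftfreq(n)        # frequencies in cycles/sample, |omega| <= 0.5
    ramlak = np.abs(omega)
    w = np.abs(omega) / 0.5          # normalised frequency in [0, 1]
    if kind == "ram-lak":
        return ramlak
    if kind == "hamming":
        return ramlak * (0.54 + 0.46 * np.cos(np.pi * w))
    if kind == "cosine":
        return ramlak * np.cos(np.pi * w / 2)
    if kind == "hann":
        return ramlak * (0.5 + 0.5 * np.cos(np.pi * w))
    raise ValueError(kind)

def apply_filter(sinogram, kind="hamming"):
    """Apply Lambda_Gamma = F^{-1} Gamma(.) F along each detector row."""
    gamma = frequency_filter(sinogram.shape[-1], kind)
    return np.real(np.fft.ifft(gamma * np.fft.fft(sinogram, axis=-1), axis=-1))

# A constant sinogram is annihilated, since every filter vanishes at frequency 0.
f = apply_filter(np.ones((3, 8)), "hamming")
```

The back projection \(A^T\) in (2.9) would then be applied to the filtered sinogram to obtain the reconstruction.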



Figure 2.9: Error measures depending on iteration number for the Landweber method in Section 2.2.4.2 on the data in Figure 2.6. The optimal stopping iteration for the Landweber method is seen to be around iteration 100 for this problem.


(a) Iteration 5, RE = 0.66, MI = 0.58. (b) Iteration 100, RE = 0.27, MI = 0.74. (c) Iteration 2000, RE = 0.37, MI = 0.51.

Figure 2.10: Reconstructed images using the Landweber method on the simulated data in Figure 2.6 with iteration numbers as shown.

(a) Ram-Lak filter. (b) Hamming filter. (c) Cosine filter. (d) Hann filter.

Figure 2.11: Illustration of different filters for the filtered back projection formula in (2.9). The Ram-Lak filter is from the inversion formula, and the other filters remove high-frequency elements in the Fourier domain before back projection; a form of regularisation.

(a) Ram-Lak filter, RE = 0.35, MI = 0.57. (b) Hamming filter, RE = 0.28, MI = 0.74. (c) Cosine filter, RE = 0.29, MI = 0.73. (d) Hann filter, RE = 0.29, MI = 0.74.

Figure 2.12: Reconstructed images using the discrete filtered back projection formula in (2.9) with the filters from Figure 2.11.