

13.4 Deformable Template Matching

13.4.2 Constraining the Deformation

Although a deformable template model is capable of capturing changes in the pupil shape, there are also some major drawbacks. Corneal reflections, caused by illumination, may confuse the algorithm and cause it to deform unnaturally. In the worst case, the shape may grow or shrink until the algorithm collapses.

We propose to constrain the deformation of the model in the optimization step by adding a regularization term. Assume the parameters defining an


[Figure: cost-function plots; axes show the center coordinates Cx, Cy, the shape parameters λ1, λ2, and the orientation θ [rad].]

Figure 13.10: (Top) A point is initialized near the iris. Alternately, one parameter is varied while the four other parameters are kept fixed, and the cost function is evaluated. (Bottom left) Variation of the center coordinates and shape parameters. (Bottom right) Variation of the orientation.

ellipse are normally distributed with mean µ and covariance Σ. The prior distribution of these parameters is then defined as

p(x) = N(µ, Σ) ∝ exp( −(1/2)(x − µ)ᵀ Σ⁻¹ (x − µ) ),   (13.16)

where the normalization factor has been omitted. The mean and covariance are estimated from a training sequence. Finally, the optimization of the deformable template matching method is constrained by adding a regularization term,
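Estimating the prior from a training sequence and evaluating the unnormalized density of (13.16) can be sketched as follows. The five-parameter state (cx, cy, λ1, λ2, θ) and the helper names are assumptions, not part of the original text:

```python
import numpy as np

def fit_prior(training_states):
    """Estimate the mean and covariance of the ellipse parameters
    (cx, cy, lambda1, lambda2, theta) from a training sequence."""
    X = np.asarray(training_states, dtype=float)  # shape (n_frames, 5)
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    return mu, Sigma

def prior(x, mu, Sigma):
    """Unnormalized Gaussian prior of (13.16):
    exp(-0.5 (x - mu)^T Sigma^-1 (x - mu))."""
    d = np.asarray(x, dtype=float) - mu
    return float(np.exp(-0.5 * d @ np.linalg.solve(Sigma, d)))
```

Since the normalization factor is omitted, the prior evaluates to exactly 1 at the mean and decays towards 0 for states far from the training distribution.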

E = Av(P) − Av(B) + K(1 − p(x)),   (13.17)

where K is the gain of the regularization term.
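A minimal sketch of the regularized cost, assuming a grayscale image, that Av(P) and Av(B) denote mean intensities inside the ellipse and in a surrounding band, that the contrast term is the difference Av(P) − Av(B), and that the band is the ellipse scaled by a factor minus the interior; all helper names are hypothetical:

```python
import numpy as np

def ellipse_mask(shape, cx, cy, l1, l2, theta):
    """Boolean mask of the pixels inside the ellipse (cx, cy, l1, l2, theta)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    c, s = np.cos(theta), np.sin(theta)
    u = (xx - cx) * c + (yy - cy) * s
    v = -(xx - cx) * s + (yy - cy) * c
    return (u / l1) ** 2 + (v / l2) ** 2 <= 1.0

def energy(img, x, mu, Sigma, K=1.0, band=1.3):
    """Regularized cost of (13.17): contrast term plus K * (1 - p(x))."""
    cx, cy, l1, l2, theta = x
    P = ellipse_mask(img.shape, cx, cy, l1, l2, theta)
    B = ellipse_mask(img.shape, cx, cy, band * l1, band * l2, theta) & ~P
    contrast = img[P].mean() - img[B].mean()  # dark pupil => strongly negative
    d = np.asarray(x, dtype=float) - mu
    p = np.exp(-0.5 * d @ np.linalg.solve(Sigma, d))
    return contrast + K * (1.0 - p)
```

For states far from the prior mean, p(x) approaches 0 and the penalty saturates at K, so the gain directly controls how strongly implausible deformations are suppressed.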

The relevance of constraining the deformation is visualized in figure 13.11.

A suitable starting point x is chosen. The pose and orientation are kept fixed, while the shape parameters are varied. In this case the true shape parameters λ1 and λ2 are approximately eight. The image confidence as a function of the shape parameters is depicted to the left, while the prior distribution is seen in the middle of figure 13.11. Combining the image confidence with a prior according to (13.17) yields the constrained estimate, which is depicted to the right in figure 13.11.

By use of the shape constraints, prior knowledge is incorporated into the solution. The robustness is increased considerably, and the parameters are constrained to prevent the algorithm from breaking down due to infinite growth or shrinkage of the parameters.

The deformable template matching method is shown applied with and without constraints in figure 13.12. The constrained estimate is seen to be less sensitive to noise due to reflections.

13.5 Summary

A mixture of segmentation-based eye trackers has been presented, starting with the simple - yet efficient - double thresholding, where two threshold values are chosen to find the dark pixels corresponding to the pupil. The low threshold can be interpreted as a filter with respect to the high threshold.
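One way to read "the low threshold acts as a filter on the high threshold" is hysteresis-style thresholding: pixels darker than the strict (low) threshold are certain pupil seeds, and pixels darker than the lenient (high) threshold are kept only if connected to a seed. This is a sketch of that interpretation, not necessarily the exact scheme used in the chapter:

```python
import numpy as np
from collections import deque

def double_threshold(img, t_low, t_high):
    """Hysteresis-style double thresholding for dark-pupil detection.
    Pixels darker than t_low are certain seeds; pixels darker than t_high
    are accepted only if 4-connected to a seed."""
    seeds = img < t_low
    candidates = img < t_high
    out = np.zeros(img.shape, dtype=bool)
    q = deque(zip(*np.nonzero(seeds)))
    for r, c in q:                      # seeds are always accepted
        out[r, c] = True
    while q:                            # grow into the candidate pixels
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and candidates[rr, cc] and not out[rr, cc]):
                out[rr, cc] = True
                q.append((rr, cc))
    return out
```

Dark candidate pixels that are not connected to any seed (e.g. shadows or eyelashes) are filtered out, which is what makes the two thresholds more robust than a single one.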

Prior knowledge about the shape and appearance is utilized by the template matching models. A model concerning gray-level intensities and a model regarding RGB color images have been described.

A common characteristic of the above-mentioned trackers is that they do not incorporate knowledge from the previous frame. They are based on explicit feature detection using global information.



[Figure: three surface plots of the cost over the shape parameters λ1 and λ2, both ranging roughly 4-12.]

Figure 13.11: Given an appropriate starting point x, the pose and orientation are kept fixed while the shape parameters are varied. Note that the surface plots are not smooth, as one might expect; this is due to rounding in the interpolation when evaluating the image evidence of the deformable template. (Left) The image confidence given the state; warmer colors mean more likely. (Middle) The prior probability is a normal distribution with a given mean value µ and covariance Σ. (Right) Combining the image evidence and prior according to (13.17) yields the constrained estimate.

The appearance of the eye changes with the gaze direction and with the face and camera pose. This is exploited by a deformable template matching model. The starting point in the current frame can be chosen as the estimate from the previous frame. However, rapid eye movements may confuse the tracker, since the starting point may then be too far from the true state. This can be remedied by applying the Double Threshold method.


Figure 13.12: The deformable template matching method applied without constraints is shown in green, while the red ellipse depicts the constrained version. The constrained estimate is seen to be less sensitive to noise due to reflections.



Chapter 14

Bayesian Eye Tracking

The segmentation-based trackers from chapter 13 are based on explicit feature detection using global information. The iris is circular and characterized by a large contrast to the sclera. Therefore, it seems obvious to use a contour-based tracker. The chosen active contour method does not use features explicitly but maximizes feature values underlying the contour in a Bayesian sense. Bayesian methods provide a general framework for dynamic state estimation problems. The Bayesian approach is to construct the probability density function of the state based on all the available information. For that reason the method is relatively robust to environmental changes.

Moreover, the changes in iris position are very fast. As a consequence, the iris position cannot be assumed to follow a smooth and completely predictable model. Particle filtering is therefore suitable.

14.1 Active Contours

Witzner et al. [36] describe an algorithm for tracking using active contours and particle filtering. A generative model is formulated which combines a dynamic model of state propagation and an observation model relating the contours to the image data. The current state is then found recursively by taking the sample mean of the estimated posterior probability.
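The recursion above can be sketched as one bootstrap particle-filter step: resample, propagate with diffuse random-walk dynamics (matching the fast, unpredictable iris motion), reweight by the observation likelihood, and report the sample mean. The state layout and the `observe_lik` callback are illustrative assumptions, not the observation model of [36]:

```python
import numpy as np

def particle_filter_step(particles, weights, observe_lik, rng, noise_std):
    """One bootstrap particle-filter iteration.
    particles: (n, d) array of states; weights: (n,) normalized weights;
    observe_lik: state -> unnormalized observation likelihood."""
    n = len(particles)
    # Resample proportionally to the previous weights.
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Propagate: a diffuse random walk instead of a smooth motion model.
    particles = particles + rng.normal(0.0, noise_std, size=particles.shape)
    # Reweight by the observation model and normalize.
    weights = np.array([observe_lik(p) for p in particles])
    weights = weights / weights.sum()
    # Sample-mean estimate of the posterior over the current state.
    estimate = (weights[:, None] * particles).sum(axis=0)
    return particles, weights, estimate
```

Even with particles initialized far from the true state, repeated resampling concentrates the cloud around high-likelihood regions, and the weighted sample mean converges towards the true state.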

The proposed method in this chapter is based on [36], but extended with constraints and robust statistics.
