**5. Reconstruction accuracy of tube-like objects**

**5.1 Reconstruction problem validation**

Before experimenting with real data, it is worth verifying that SFM methods are indeed appropriate for the problem at hand: reconstructing a cylinder from a sequence of images taken from different positions by a camera pointing inside the cylinder. At this step no quantitative analysis is made; only the visual quality of the reconstruction is inspected.

The experiment is performed using purely synthetic data. Twenty-eight points are distributed over the surface of a cylinder with a radius of 40 units. The points form three rings placed 20 units apart, and the cylinder axis coincides with the z axis of the coordinate system. Two camera configurations are defined as in Figure 5.1. A camera is fully defined by its position in 3D space and its orientation; the direction along which a camera points is colored in red. In the first configuration (Figure 5.1 a), five cameras are placed along the cylinder axis. The first camera (at the bottom) is 90 units away from the first ring of points; the positions and orientations of the other four cameras are obtained by translating the previous camera by 10 units along the z axis.

In the second configuration (Figure 5.1 b), the first camera is placed at the origin of the coordinate system, and the positions and orientations of the other four cameras are randomly generated.
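The synthetic setup above can be sketched as follows. This is an illustrative reconstruction of the described scene, not the author's code; since 28 points do not split evenly across three rings, 9 points per ring (27 in total) are assumed here, and the function and parameter names are hypothetical.

```python
import numpy as np

def cylinder_points(radius=40.0, ring_z=(0.0, 20.0, 40.0), pts_per_ring=9):
    # Points on three rings of a cylinder whose axis is the z axis.
    # The exact split of the 28 points across the rings is not stated
    # in the text, so an even split of 9 per ring is assumed here.
    pts = []
    for z in ring_z:
        for k in range(pts_per_ring):
            a = 2 * np.pi * k / pts_per_ring
            pts.append([radius * np.cos(a), radius * np.sin(a), z])
    return np.asarray(pts)

def translated_cameras(n=5, z0=-90.0, step=10.0):
    # First configuration: cameras on the z axis looking along +z,
    # each translated 10 units from the previous one.  A pose is the
    # pair (R, C) of rotation and camera center; R is the identity
    # because every camera looks straight down the cylinder axis.
    return [(np.eye(3), np.array([0.0, 0.0, z0 + i * step])) for i in range(n)]

points = cylinder_points()
cams = translated_cameras()
```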


*Figure 5.1. Two camera configurations used to test reconstruction of a cylinder using SFM methods*


In the first configuration, the camera motion is a pure translation along the optical axis, while the second configuration corresponds to a general motion. The cameras are considered calibrated. The reason for choosing the first configuration is that recovering the 3D model when the camera motion is a pure translation is ill-posed for many projective SFM methods. The factorization method used in these experiments recovers the Euclidean structure, but it is still interesting to see how it behaves in this case.

The 3D points located on the surface of the cylinder are projected onto the image frames of each of the five cameras. The projection corresponds to the image-formation process in a real experiment, and the projected points correspond to feature points. In real images, feature detection may not be very accurate due to factors such as image noise or the detection algorithm itself. To make the experiment more realistic, Gaussian noise with σ = 0.005 is added to the projected points in order to simulate the imprecision of the feature-detection step. The projected values along the x axis of the image frame range between (-0.4767, 0.6084), and along the y axis between (-0.5698, 0.5326). The added Gaussian noise therefore corresponds on average to a 0.45% localization error on both the *x* and *y* axes; for example, in a 512×512-pixel image this is a feature-localization error of about 2.3 pixels on average.
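The projection-plus-noise step can be sketched as below. This is a minimal illustration under the assumption of a calibrated pinhole camera with unit focal length; the function name, the pose convention (rotation `R` and camera center `C`), and the seeded generator are choices made here, not details from the text.

```python
import numpy as np

def project(points, R, C, sigma=0.005, rng=None):
    # Pinhole projection into a calibrated camera (focal length 1),
    # followed by Gaussian noise simulating feature-localization error.
    rng = np.random.default_rng(0) if rng is None else rng
    cam = (R @ (points - C).T).T        # world -> camera coordinates
    uv = cam[:, :2] / cam[:, 2:3]       # perspective division
    return uv + rng.normal(0.0, sigma, uv.shape)

# Sanity check of the noise level quoted in the text: with image
# coordinates spanning roughly one unit, sigma = 0.005 is about 0.45%
# of the range, i.e. roughly 512 * 0.0045 ≈ 2.3 pixels at 512x512.
```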

The coordinates of the noisy projected points are passed to the SFM algorithm and processed in two steps. In the first step, an initial Euclidean reconstruction (together with the camera positions and orientations) is obtained with the factorization algorithm. The factorization method is known to be suboptimal due to the linearization of the camera model. In the second step, the recovered structure (and the cameras) is refined by a bundle adjustment process.
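The core of a factorization step can be sketched with the classic rank-3 SVD decomposition of the measurement matrix, in the spirit of Tomasi-Kanade. This affine variant is a stand-in only: the text's method recovers Euclidean structure and its exact algorithm is not reproduced here.

```python
import numpy as np

def affine_factorization(W):
    # W is the 2F x P measurement matrix stacking the image coordinates
    # of P points seen in F frames.  Centering each row removes the
    # translational component; the best rank-3 approximation then splits
    # into a motion matrix M and a structure matrix S, recovered up to
    # an affine ambiguity (a metric upgrade would be needed afterwards).
    Wc = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * s[:3]    # 2F x 3 motion matrix
    S = Vt[:3, :]           # 3 x P structure matrix
    return M, S
```

In a noise-free affine setting the product `M @ S` reproduces the centered measurements exactly; with noise, the rank-3 truncation acts as a least-squares fit, and bundle adjustment then refines this initial estimate under the full perspective model.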

The results obtained for the two configurations are shown in Figure 5.2 and Figure 5.3. A simple visual inspection of these results is enough to draw a few conclusions. Both experiments produced very similar results, so neither configuration can be said to behave better or worse than the other. The reconstruction obtained with the factorization algorithm alone is qualitatively poor: while the top views show that the x and y coordinates are estimated correctly (the points follow the contour of the circle), the side views show that the factorization method has a deficiency in estimating the depth information. In both cases, the optimization performed by the bundle adjustment step corrected these errors, and the recovered structure corresponds to the real one.

*Figure 5.2. The recovered structure for the camera configuration shown in Figure 5.1 a). The left column corresponds to the structure recovered after the factorization method, the right column after bundle adjustment. The middle row is a side view, while the bottom row is a top view of the recovered structures.*


*Figure 5.3. The recovered structure for the camera configuration shown in Figure 5.1 b). The left column corresponds to the structure recovered after the factorization method, the right column after bundle adjustment. The middle row is a side view, while the bottom row is a top view of the recovered structures.*

*Figure 5.4. Side and top views of the reconstructed structure for the configuration in Figure 5.1 b) in the absence of noise*

The side views clearly show that the recovered points form three groups corresponding to the three rings of the cylinder. The deviations reflect the Gaussian noise added to the projections.

Figure 5.4 shows the side and top views of the reconstructed points for the second configuration in the absence of noise, before and after bundle adjustment. The reconstruction is almost perfect even after the factorization step; the small errors visible in the middle ring should be attributed to numerical errors rather than other causes. In this case bundle adjustment could not improve the result, as it was already optimal after the factorization step.