
Introduction to visualization

Medical imaging normally involves the handling of very complex models, which leads to great demands on memory and computational resources. Data sets are seldom smaller than 30-40 Mbytes. Traditionally, medical imaging has been performed on large workstations, but PCs have recently become more powerful and are today quite capable of handling large data sets. The recent availability of Java and VRML raises the question of whether such high-level languages can be used for 3D medical imaging on a PC.

For volume visualization, two approaches are commonly used: direct volume rendering and extracted iso-surface rendering.

5.4.1 Direct volume rendering

Direct volume rendering allows the 3D volume data to be visualized directly, which has the advantage that all data in the volume may contribute to the final image.

While direct volume rendering can be done easily by ray-tracing, this is very inefficient and slow, as each voxel may be processed several times. A more efficient algorithm traverses the volume in data or slice order to reduce redundant data accesses. A popular algorithm for doing this is the shear-warp volume rendering algorithm [122]. Shear-warp is a two-pass algorithm which decomposes the 3D viewing transformation into two simpler transformations: a shear along one of the major axes of the volume, followed by a 2D warp to make the result match the original 3D view transformation. A parallel version of the shear-warp algorithm is presented in [198]. In [39] a version of shear-warp for surface rendering of sparse volumes is presented.
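The slice-order traversal at the heart of shear-warp can be sketched in a few lines. The following is a minimal numpy illustration for a parallel projection with z as the principal axis, not the thesis implementation; the function names and the toy opacity transfer function are hypothetical, and the final 2D warp pass is left out:

```python
import numpy as np

def shear_warp_factor(view_dir):
    """Build the shear matrix that makes viewing rays parallel to the
    z axis in sheared object space (parallel projection; z assumed to
    be the major axis most parallel to view_dir)."""
    vx, vy, vz = view_dir
    shear = np.array([[1, 0, -vx / vz, 0],
                      [0, 1, -vy / vz, 0],
                      [0, 0, 1,        0],
                      [0, 0, 0,        1]], float)
    return shear

def render_sheared(volume, view_dir):
    """Composite slices front-to-back in sheared object space.
    Each slice is only translated, so voxels are visited in storage
    order. Nearest-neighbour translation for brevity; the real
    algorithm bilinearly resamples each slice."""
    vx, vy, vz = view_dir
    depth, h, w = volume.shape
    image = np.zeros((h, w))
    alpha = np.zeros((h, w))
    for k in range(depth):                    # slice-order traversal
        dx = int(round(-vx / vz * k))         # per-slice shear offset
        dy = int(round(-vy / vz * k))
        sl = np.roll(volume[k], (dy, dx), axis=(0, 1))
        a = sl / 255.0                        # toy opacity transfer function
        image += (1 - alpha) * a * sl         # front-to-back "over"
        alpha += (1 - alpha) * a
    return image   # intermediate image; still needs the final 2D warp
```

Because the shear offsets are constant per slice, the inner loop touches memory sequentially, which is exactly what makes shear-warp faster than ray-casting the volume directly.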

The Cube-4 [179] volume rendering architecture developed at SUNY Stony Brook is related to the shear-warp architecture in that it processes one slice at a time. Cube-4L [16] is an extension for perspective projection. EM-cube [171] is an efficient version of Cube-4, which is used in the commercial Mitsubishi VolumePro implementation [180]. VolumePro is able to interactively render 256³ volumes at 30 frames/s. Other examples of direct volume rendering hardware can be found in [128, 190].

During the visit to the Visualization Lab at SUNY-SB, some insight into the design of volume visualization systems was gathered. Based on this, a more efficient implementation of our shear-warp volume renderer, originally implemented by [109] and [131], was integrated into the VRML scene graph engine of Hybris. This improved integrated volume renderer, while still primitive compared to CUBE-4, now provides interactive direct volume rendering in software, suitable for use in the 3D-Med medical visualization workstation described later in this chapter.

As the volume renderer is integrated in the Hybris VRML engine, it is also possible to display combined surface and volume renderings, although the integration is not yet optimal, as surface models are currently always rendered on top of the volume models. A very similar example of how surface and volume rendering can be combined using VRML was published this year in [12].

Our VRML volume rendering extension is defined as the following extended VRML node prototype, which must be used in the geometry node placeholder field of the Shape node, as it is subclassed from the abstract Geometry node:

Volume {
  exposedField MFString url          []
  exposedField SFBool   intermediate FALSE
  exposedField SFBool   perspective  TRUE
  exposedField SFInt32  slice        -1    # [-1, slices-1]
}

The new Volume VRML node works exactly like any other geometry node, such as IndexedFaceSet, and can be located anywhere in an object hierarchy. The url field is a URL¹ specifying where to find the volume data file. The intermediate field specifies whether the intermediate (sheared) image should be displayed rather than the final (sheared and warped) rendered image, perspective indicates whether we want a parallel- or perspective-projection rendering, and finally slice allows rendering of the complete volume (-1) or just one slice. The size and voxel dimensions of the volume are determined automatically from the input volume data file.

¹URL: Uniform Resource Locator, similar to an internet WWW link.

True integration of surface and volume rendering requires a hybrid volume and polygon graphics architecture capable of sorting the polygons relative to the slices in the volume in order to handle transparency correctly. This is in fact a generalization of the fragment-sorting graphics architecture required for proper handling of transparency, as discussed earlier in this thesis. A possible extension of the CUBE volume rendering hardware architecture to allow hybrid volume and polygon rendering with slice-level transparency sorting is published in [120].
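The slice-level sorting idea can be illustrated with a small sketch: polygon fragments are bucketed by the volume slice their depth falls into, and slices and their fragment buckets are then composited together back-to-front. This is a hypothetical toy model (the function name, the tuple-based fragment format, and the assumption that a slice's fragments lie in front of its voxels are all illustrative choices), not the architecture of [120]:

```python
from collections import defaultdict

def composite_hybrid(volume_slices, fragments, slice_depth):
    """Back-to-front compositing of volume slices interleaved with
    polygon fragments sorted into per-slice buckets.

    volume_slices: back-to-front list of dicts (x, y) -> (color, alpha)
    fragments:     list of (x, y, z, color, alpha) polygon fragments
    slice_depth:   z extent covered by one volume slice
    """
    # Sort fragments into one bucket per slice (the "slice-level" sort).
    buckets = defaultdict(list)
    for f in fragments:
        buckets[int(f[2] // slice_depth)].append(f)

    framebuffer = {}   # (x, y) -> (color, alpha)

    def blend(x, y, color, alpha):
        # Painter's-order "over": the newly drawn sample is in front.
        c0, a0 = framebuffer.get((x, y), (0.0, 0.0))
        framebuffer[(x, y)] = (alpha * color + (1 - alpha) * c0,
                               alpha + (1 - alpha) * a0)

    for k in range(len(volume_slices) - 1, -1, -1):   # back to front
        for (x, y), (c, a) in volume_slices[k].items():
            blend(x, y, c, a)
        for (x, y, z, c, a) in buckets.get(k, []):
            blend(x, y, c, a)   # fragments assumed in front of slice k
    return framebuffer
```

Sorting only to slice granularity is what makes the hardware extension feasible: fragments never need a full depth sort, only a bucket index, at the cost of approximate ordering within one slice.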

Finally, general purpose 3D graphics hardware with an efficient implementation of texture mapping is becoming increasingly usable for real-time volume rendering. 3D texture-mapping can be used to render a volume by mapping a volumetric texture to a set of polygons. Note that 3D texture mapping must be implemented in hardware, which is currently only available in high-end graphics workstations such as the SGI InfiniteReality. The pixel blending operations in the texture mapping hardware can be used to implement shading and classification of the volume, by encoding the voxel gradient vectors as the RGB components and the intensity as the Alpha component in the 3D texture map. This type of volume rendering based on 3D texture mapping is presented in [67, 232, 146]. Gradient calculations must be done as a pre-processing step, as the texture mapping hardware is unable to do this.

Recent advances in graphics hardware for single-pass multi-texturing allow volume rendering using 2D texture mapping at high speed on standard PC-based 3D graphics processors, using an approach presented in [191]. Multi-texturing is used to dynamically interpolate between volume slices, improving the quality of the rendering. Additionally, as recent PC graphics processors such as the Nvidia GeForce have introduced per-pixel dot-product operations, these can be used to implement dynamic lighting for shaded volume rendering. Since the 2D texture map based algorithm in [191] uses a slice-oriented memory layout similar to the shear-warp algorithm and CUBE, it can be implemented more efficiently than 3D texture mapping based volume renderers.
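What the multi-texturing stage computes is easy to state: for a sampling plane between two stored slices, the hardware binds both 2D slice textures and blends them with the fractional depth, recovering trilinear interpolation from 2D textures. A hypothetical one-function numpy emulation (assuming 0 <= z < depth-1):

```python
import numpy as np

def interpolated_slab(volume, z):
    """Emulate single-pass multi-texturing: blend the two 2D slices
    bracketing depth z with weight frac(z). `volume` is indexed
    [slice, y, x]; z is a fractional slice position."""
    k = int(np.floor(z))
    t = z - k                                   # blend weight, frac(z)
    return (1 - t) * volume[k] + t * volume[k + 1]
```

Doing this blend per output plane is what removes the visible slice artifacts of plain 2D-texture volume rendering, at no extra texture memory cost.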

5.4.2 Surface model extraction

To visualize volume data it is also possible to extract a triangle mesh model representing an iso-surface in the volume data. This triangle mesh model is well suited for rendering with an implementation of the Hybris graphics architecture. A suitable algorithm for iso-surface extraction is the Marching Cubes algorithm [129].

A discretized version of the Marching Cubes algorithm, similar to the one presented in [157], was developed by [160] and is currently used in the 3D-Med workstation to extract iso-surfaces. It works by matching eight volume values (a cube) against a finite set of triangle configurations, which are then adjusted to best match the location of the iso-surface, using a discretized rather than continuous set of vertex locations to speed up the process. By repeating this process (marching) over the entire volume, a triangle mesh is constructed which represents an iso-surface in the volume data set.
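The cell classification at the core of the algorithm can be sketched compactly: each corner above the iso-value contributes one bit to an 8-bit index, which selects the triangle configuration for that cell. The sketch below is illustrative only; the 256-entry triangle table is omitted, and the corner-to-bit ordering shown is an arbitrary choice rather than the canonical Marching Cubes numbering:

```python
def cube_index(corners, iso):
    """Classify one cell of 8 voxel samples against the iso-value,
    producing the 8-bit index used to look up the cell's triangle
    configuration (table omitted in this sketch)."""
    index = 0
    for bit, v in enumerate(corners):
        if v >= iso:
            index |= 1 << bit
    return index

def march(volume, iso):
    """March over every cell of volume[z][y][x] (nested lists) and
    yield (z, y, x, index) for cells the iso-surface crosses, i.e.
    cells that are neither fully inside (255) nor outside (0)."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    for z in range(nz - 1):
        for y in range(ny - 1):
            for x in range(nx - 1):
                corners = [volume[z + dz][y + dy][x + dx]
                           for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
                idx = cube_index(corners, iso)
                if idx not in (0, 255):
                    yield z, y, x, idx
```

The discretized variant used in 3D-Med differs from the classic algorithm only in the vertex placement step: instead of interpolating vertices continuously along cell edges, it snaps them to a small fixed set of positions, which trades a little geometric accuracy for speed.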