|dc.description.abstract||Computer graphics encompasses the process of displaying an object, stored in the computer as a mathematical or geometric description, on a computer screen. Volume graphics is a sub-field of computer graphics that deals with extracting and visualizing meaningful information from volumetric data. Volumetric data consists of a set of points that define properties of the three-dimensional space they occupy. Using current techniques, such data can be treated as a computer graphics object in which each data point is represented as a voxel. Voxel representations allow volumes to be rendered in three dimensions, whereas previously such volumes could be displayed only as iso-contours in cross-sectional planes.
Volume data contains more information than mathematical descriptions of surfaces because it provides both external and internal representations. However, rendering high-quality volume graphics is an arduous task that requires massive amounts of computation to calculate each voxel's contribution to the final image. Achieving interactive frame rates further complicates this process and poses unique challenges for developers. Recently, researchers have turned to consumer graphics hardware and texture-based techniques to relieve software of the burden of such processing. Several problems, such as limited texture memory and visual artifacts, arise when dealing with consumer graphics hardware. Furthermore, modifying voxels stored in texture memory at run time requires additional overhead and preprocessing. These problems vary in difficulty according to the rendering technique implemented.
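The per-voxel contributions mentioned above are typically accumulated with the standard back-to-front "over" compositing operator. The sketch below is illustrative only (the thesis does not specify its blending code) and assumes RGBA samples already sorted from far to near along a viewing ray:

```python
def composite_back_to_front(samples):
    """Accumulate (r, g, b, a) voxel samples, ordered back (far) to front
    (near), using the standard "over" operator: each nearer sample
    partially occludes what has been composited behind it."""
    r = g = b = 0.0
    for sr, sg, sb, sa in samples:
        r = sr * sa + r * (1.0 - sa)
        g = sg * sa + g * (1.0 - sa)
        b = sb * sa + b * (1.0 - sa)
    return (r, g, b)
```

A fully opaque front sample (alpha = 1.0) completely hides everything behind it, while semi-transparent samples blend with the accumulated color, which is why unsorted or too-sparse slices produce the visual artifacts the abstract mentions.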
The general problem this thesis addresses is the development of two texture-based volume-rendering techniques that provide real-time, interactive volume visualization within the context of a PC-based anatomical training system. The anatomical training system I developed, which applies these two techniques, is called V-VBS (volumetric virtual body structures). V-VBS uses the Visible Human (VH) datasets and two texture-based rendering techniques, object-aligned and view-aligned slices, to create interactive volumetric virtual body structures on personal computers while maintaining interactive frame rates for large full-color volumes. Typical anatomical training systems use special hardware, expensive workstations, or pre-rendered images to provide users with an interactive environment. V-VBS allows the rendering technique to be chosen based on graphics card capabilities and provides a means to reduce the resolution of volumes so that large volumes can still be rendered interactively.
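One common way to reduce a volume's resolution, as the system does for large volumes, is to average each 2x2x2 block of voxels into one. The thesis does not give its downsampling code, so the following is a minimal sketch assuming a flat scalar voxel array in x-fastest order with even dimensions:

```python
def downsample_volume(vol, dx, dy, dz):
    """Halve each dimension of a flat voxel array (x varies fastest)
    by averaging every 2x2x2 block of neighboring voxels.
    Assumes dx, dy, dz are all even."""
    nx, ny, nz = dx // 2, dy // 2, dz // 2
    out = []
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                total = 0.0
                for oz in range(2):          # sum the 8 voxels of the block
                    for oy in range(2):
                        for ox in range(2):
                            total += vol[(2*z + oz) * dy * dx
                                         + (2*y + oy) * dx
                                         + (2*x + ox)]
                out.append(total / 8.0)
    return out
```

Each halving cuts the voxel count (and hence texture memory) by a factor of eight, which is why modest resolution reduction is enough to bring a volume that exceeds texture memory back to interactive rates.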
V-VBS utilizes the Visible Human male dataset to provide realistic colorization and organ identification through a ray-to-bounding-box collision detection algorithm. V-VBS uses consumer graphics hardware features to allow "on the fly" changes to the volume: groups of voxels are modified in real time while the computer is rendering the volume. However, some consumer graphics cards do not support the features required for real-time voxel changes. Therefore, new databases were created from the original dataset that allow users to modify the volume at near-interactive rates or to change the visibility of structures before the volume is rendered. To assist in changing voxels at near-interactive rates, a bounding box database was created based on the VH segmentation database. In the worst-case scenario, where a computer cannot change voxels "on the fly," a user can change the visibility of structures before a volume is rendered. For such a scenario, an algorithm was created to quickly search the segmented data and display a list of the structures present in a user-specified volume of interest (VOI). Additionally, a connected-structures database was created that lets a user view a list of surrounding structures, further aiding the choice of structure visibility before a volume is rendered.
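Ray-to-bounding-box collision detection of the kind used for organ identification is commonly implemented with the slab method: the ray's entry and exit parameters are intersected against each axis-aligned pair of box faces. This is a sketch of that standard test, not the thesis's actual code:

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Slab-method test: does a ray (origin + t * direction, t >= 0)
    intersect the axis-aligned bounding box [box_min, box_max]?"""
    tmin, tmax = 0.0, float("inf")   # running intersection of slab intervals
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:
            # Ray parallel to this slab: must already lie between the planes.
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            if t1 > t2:
                t1, t2 = t2, t1       # order entry before exit
            tmin, tmax = max(tmin, t1), min(tmax, t2)
            if tmin > tmax:           # slab intervals no longer overlap
                return False
    return True
```

Testing a pick ray against each structure's precomputed bounding box (as in the bounding box database described above) is much cheaper than testing it against individual voxels, so boxes serve as a fast first-pass filter for identifying which organ the user selected.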
Testing of the V-VBS system was conducted on four personal computers with nine volumes; each computer had different resources, and each volume had different dimensions. One computer had a Pentium 4 1.8 GHz processor, 1 GB of RAM, and a GeForce3 graphics card. A second computer had a Pentium 3 933 MHz processor, 512 MB of RAM, and a GeForce2 graphics card. Using the first computer, a volume containing the liver (512x512x256, about 67 M voxels) can be created and manipulated, with most interactions taking a few seconds; computationally intense rotation is achieved at 4 fps using the view-aligned method. Using both volume-rendering methods discussed in this thesis, a typical volume of 16.7 M voxels (256x256x256) runs at 27 fps with the object-aligned technique and 12 fps with the view-aligned technique. On the second computer such a volume cannot run at interactive frame rates at full resolution, but by reducing the resolution, 11 fps is achieved.||