Browsing by Subject "Image processing -- Digital techniques"
Now showing 1 - 20 of 29
Item 3-D modelling and classification in automated target recognition (Texas Tech University, 1990-08) Nutter, Brian
Automated target recognition (ATR) using a computer vision system is a problem of extremely high complexity. A 3-D object recognition scheme involves many image analysis and enhancement techniques, including image processing, image segmentation, image registration, and object modeling and projection. This dissertation addresses the problem of 3-D object recognition using five distinct methods of matching image data with model projection-derived data. In analyzing each digitized video image, a variety of techniques, including an optimal gray level map for correlating binary line drawings with gradient images, were used to enhance the visibility of particular features and to increase signal-to-noise ratios in images. The shapes extracted from these enhanced images were then analyzed in a number of ways, including the statistical descriptors of the Karhunen-Loeve transformation. The first of the five object identification methods tested compared descriptors of the object to be analyzed with those of model projections meeting certain criteria. The second compared the object descriptors to those of a precalculated series of model projections. The third method used the descriptions of the second method as a starting point for a neural network, which then learned the differences between these model projections and actual data. The neural net as realized demonstrated a great reduction in training time over conventional implementations, and its learning capability greatly reduced the calibration difficulties of the other methods. The fourth method cross-correlated the optimally mapped gradient of the object image with a series of model projections. Finally, ways of combining these methods to utilize the strengths of each were investigated. Superior accuracy was obtained with cross-correlation, and optimal techniques that significantly reduced the number of required correlations, and hence the computational load, were also found to give very accurate results.

Item A CCD image sensor frame grabber and conditioner (Texas Tech University, 1987-12) Mau, Kim-You
Not available

Item A personal computer based fundus image processing system (Texas Tech University, 1987-05) Whiteside, Steven Leroy
Not available

Item A study of measurements of blocking defects in highly-compressed images (Texas Tech University, 1997-12) Zhong, Jianqiang Norman
Not available

Item A vision system for a small image CCD array (Texas Tech University, 1988-08) Young, Ming
Not available

Item Adaptive image restoration in signal-dependent noise (Texas Tech University, 1982-08) Kasturi, Rangachar
An image distribution is modeled as a non-stationary stochastic process. The presence of signal-dependent noise further renders the noisy observation spatially nonstationary. As a consequence, spatially adaptive estimators outperform estimators based on global statistics. A number of spatially adaptive Bayesian estimators are derived using (1) maximum a posteriori probability and (2) minimization of mean square error as the optimality criteria. An estimator that compensates for the low-pass filtering effects of adaptive estimators is also obtained, as is a simple nonlinear contrast manipulation technique suitable for images corrupted by signal-dependent noise. In addition to these point estimators, multiple-parameter estimators using several Markovian image covariance models are derived. Estimators for images degraded by Poisson noise are also obtained. Simple transformations that render the noise signal-independent, followed by classical Wiener filtering to restore the degraded images, are investigated. Under low signal-to-noise ratio conditions, the additional signal information contained in signal-dependent noise is recovered to obtain an estimate from the degraded image. Extensive computer simulations are carried out to evaluate the performance of the estimators on several images corrupted by different types of signal-dependent noise. In addition to qualitative comparisons of the restored images, quantitative evaluations using several measures of image quality, some of which are based on simple models of the human visual system, are presented.
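The abstract above does not reproduce the specific transformations Kasturi investigates; a minimal sketch of one classic variance-stabilizing route for Poisson noise, the Anscombe transform followed by an empirical Wiener filter, might look like this (all function names are illustrative, not from the thesis):

```python
import numpy as np

def anscombe(x):
    """Variance-stabilizing transform: Poisson noise -> approx. unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    """Simple algebraic inverse (a biased but serviceable approximation)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

def wiener_denoise(y, noise_var=1.0):
    """Empirical frequency-domain Wiener filter using the periodogram as the spectrum estimate."""
    Y = np.fft.fft2(y)
    power = np.abs(Y) ** 2 / y.size           # periodogram; white noise of variance v is flat at v
    gain = np.maximum(power - noise_var, 0.0) / np.maximum(power, 1e-12)
    return np.real(np.fft.ifft2(gain * Y))

def restore_poisson_image(noisy):
    # Stabilize, filter in the (approximately) Gaussian domain, then map back.
    return inverse_anscombe(wiener_denoise(anscombe(noisy)))
```

After the Anscombe step the noise variance is approximately 1, which is why `noise_var=1.0` is a reasonable default in this sketch.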
Item Advanced techniques for digital image processing (Texas Tech University, 1986-05) Tarng, Jaw-horng
A new algorithm for enhancing a degraded grey scale image is proposed here. The enhancement algorithm is a locally adaptive Fourier filter which locates and analyzes the Fourier spectral information and then enhances the identifying features. Thus, it can achieve a better enhancement result than conventional homomorphic FFT techniques. By using a short-space basis implementation, a large amount of memory space can be saved; consequently, the computation speed is greatly improved. The primary objective of this algorithm is to extract linear features from a noisy image; however, the algorithm can also be modified to enhance other kinds of features. The main advantages of this algorithm are: (1) it requires a small amount of computer memory, which makes it easy to implement on small computers; (2) it has fast processing speed; and (3) it is powerful in extracting local linear features.

Item An optimized vector quantization for color image compression (Texas Tech University, 1998-05) Kompella, Sastry V S
Image data compression using vector quantization (VQ) has received a lot of attention in recent years because of its rate-distortion optimality and adaptability. A fundamental goal of data compression is to reduce the bit rate for transmission or data storage while maintaining an acceptable fidelity or image quality. The combination of subband coding and vector quantization can provide a powerful method for compressing color images. Most existing VQ algorithms, however, suffer from serious problems such as a long search process, sensitivity to codebook initialization, and a tendency to become trapped in local minima. This work investigates the development of an image compression algorithm using a variable-block-size vector quantization technique that generates an optimal codebook by employing a neuro-fuzzy clustering approach to ensure minimum distortion. Each color image is decomposed into R, G, and B color planes prior to the application of the wavelet transform and vector quantization to each plane. Each color plane is preprocessed by performing multiresolution wavelet decomposition. The multiresolution nature of the discrete wavelet transform is utilized to decompose the images into more directionally decorrelated sub-images, which are more suitable for quantization and coding. Vector quantization is performed on each of the sub-images at different resolutions, and a multiresolution codebook scheme is utilized. This new approach to image compression facilitates the generation of an improved globally optimal codebook and a simpler search scheme. Finally, the codebooks generated from the three encoded color planes are entropy coded to obtain higher compression at minimum distortion. Each color plane codebook is decoded, and the reconstructed color planes are combined to form the final reconstructed image. The reconstructed images are compared with those of other standard compression algorithms in terms of mean square error (MSE) and peak signal-to-noise ratio (PSNR).
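Kompella's codebook design uses neuro-fuzzy clustering, which is not reproduced here; as a baseline for what that optimized codebook improves on, a minimal LBG/k-means vector quantizer over fixed-size image blocks could be sketched as follows (names and parameters are illustrative assumptions):

```python
import numpy as np

def extract_blocks(plane, size=4):
    """Split one color plane (H x W, both divisible by `size`) into flat block vectors."""
    h, w = plane.shape
    blocks = plane.reshape(h // size, size, w // size, size).swapaxes(1, 2)
    return blocks.reshape(-1, size * size).astype(np.float64)

def train_codebook(vectors, codebook_size=64, iters=20, seed=0):
    """LBG/k-means training: alternate nearest-codeword assignment and centroid update.

    Assumes len(vectors) >= codebook_size; initialization is a random sample,
    which is exactly the sensitivity the thesis's clustering approach targets.
    """
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)]
    for _ in range(iters):
        # Squared distance from every block vector to every codeword.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, labels
```

In a subband scheme like the one described, a separate (or resolution-indexed) codebook would be trained per wavelet sub-image rather than per whole plane.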
Item Analog VLSI implementation of a Gabor convolution for real time image processing (Texas Tech University, 1996-05) Moldovan, Laszlo
Not available

Item Automatic segmentation of vertebrae from digitized x-ray images (Texas Tech University, 2002-12) Zamora-Camarena, Gilberto
The segmentation of vertebrae in x-ray images is of prime importance in the assessment of abnormalities of the spine. Manual segmentation is prone to errors arising from inter- and intra-subject variability in the subjective judgement employed. The use of computer vision methods is therefore an attractive alternative, providing an automatic means of segmenting vertebrae. However, general-purpose algorithms present a number of shortcomings that limit their ability to locate and delineate precise vertebral shapes, so a different approach is needed. This work presents the development of an automatic segmentation methodology that employs a hierarchical approach. The unique combination of the Generalized Hough Transform, Active Shape Models, and Deformable Models provides three levels of segmentation, from coarse to fine. Each algorithm has been customized to address the shortcomings of the other two, thus providing a robust framework. The Generalized Hough Transform is used to estimate the pose of the spine within a target image. Then, the technique of Active Shape Models is used to find the boundaries of the vertebrae and to give a global approximation of their shape. Finally, the technique of Deformable Models is used to refine the shape of the vertebrae at key points of interest, such as anterior corners. Experimental results with a data set of 100 lateral views of cervical vertebrae and 100 lateral views of lumbar vertebrae have shown a success rate of 75% in finding boundaries of cervical vertebrae and 50% in lumbar vertebrae. The algorithm developed in this work represents a viable alternative to currently available segmentation methods, with a unique combination of customized algorithms implementing a hierarchical framework.

Item Color image compression using wavelet transform (Texas Tech University, 1997-08) Meadows, Steven Carl
C language coding of image compression algorithms can be a difficult and tedious task. Image compression methods are usually composed of many stages of cascaded algorithms, and each algorithm may be developed independently. This thesis will address the problem of interfacing new image compression algorithms with older, established algorithms such as entropy coding and the discrete wavelet transform. The thesis will describe ANSI C coding procedures and functions involved in implementing two entropy coding algorithms, Huffman coding and arithmetic coding. Wavelet theory will be discussed as it applies to the discrete wavelet transform. The thesis will also describe an ANSI C implementation of one of the newest wavelet coefficient coding techniques, embedded zerotree wavelets (EZW), developed by Jerome Shapiro. The EZW compression performance will be compared with that of JPEG, the standard currently adopted for still images by the Joint Photographic Experts Group.
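As a reminder of what the Huffman stage of an entropy coder like Meadows's does, here is a minimal sketch of classic Huffman table construction (in Python rather than the thesis's ANSI C, purely for illustration; names are assumptions):

```python
import heapq
from collections import Counter

def huffman_table(symbols):
    """Build {symbol: bitstring} for a classic Huffman code.

    Symbols are assumed to be scalars (e.g., quantized wavelet coefficients);
    internal tree nodes are represented as pairs.
    """
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate input: one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, unique tiebreaker, subtree).
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # merge the two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (left, right)))
        next_id += 1
    table = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: descend both branches
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            table[node] = prefix
    walk(heap[0][2], "")
    return table
```

Encoding is then just `"".join(table[s] for s in symbols)`; arithmetic coding and EZW replace this stage with more elaborate machinery.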
Item Depth information from image sequences using two-dimensional cepstrum (Texas Tech University, 1990-05) Lee, Dah-jye
Currently existing methods for three-dimensional (3-D) reconstruction of an object are computationally intensive and lacking in accuracy. The research work comprising this thesis presents a new motion stereo model that is computationally less demanding and yields more accurate depth information than the existing methods. One of the traditional techniques for extracting depth information is to find the disparities of corresponding points in stereo images, following a biological model of 3-D vision. A normal binocular stereo system uses two images to determine which point in one image corresponds to a given point in the other, i.e., to find the disparity between the two images. The resolution of the disparity depends on the baseline used: high resolution in disparity is achieved by increasing the baseline and decreasing the window size. Based on this idea, a new motion stereo model using a sequence of images has been developed that can provide accurate depth information not available from a stereo vision system. The disparity, i.e., the translational difference between an image pair, has been computed precisely using a recently developed power cepstrum technique that is more robust and noise tolerant than the usual phase correlation technique. The computation time required by the power cepstrum has been further reduced by using a Hartley-like transform that maps a real-valued sequence to a real-valued spectrum while preserving the useful properties of the Fourier transform. This new motion stereo vision model matches the corresponding points in two images through several intermediate images, to reduce the error in matching from widely different perspectives, and uses a Hartley-like transform to compute the power cepstrum for finding the disparities. Extracting depth information from the disparities of a sequence of images with the cepstrum technique is less computationally intensive and also avoids the occlusion problem of a stereo vision model. This new motion stereo model provides a unique method of range data acquisition and visualization of 3-D data.

Item Design and analysis of a true color personal computer based image processing system (Texas Tech University, 1988-05) Bounds, Brian Frank
Not available

Item Determination of zero-crossings for stereo image matching (Texas Tech University, 1987-08) Heinrich, Mark Lee
The problem of stereo image matching requires compact, faithful descriptions of a pair of two-dimensional images to compute the depth to points in a three-dimensional scene. Zero-crossings, corresponding to edges in the underlying image, prove to be these "primal" descriptors. The microcomputer poses fundamental constraints that confine the implementation of an effective solution to this problem. An algorithm is developed and tested at two levels of microcomputer-based image processing systems for the computation of one- and two-dimensional zero-crossing descriptions. Multi-scale analysis provides the key for insight into the validity of a microcomputer approach to the problem.
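The abstract does not state Heinrich's exact operator; zero-crossings for stereo matching are classically taken from a Laplacian-of-Gaussian (Marr-Hildreth) response, and under that assumption a minimal two-dimensional sketch is:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def zero_crossings(image, sigma=2.0):
    """Mark pixels where the Laplacian-of-Gaussian response changes sign."""
    log_response = gaussian_laplace(image.astype(np.float64), sigma)
    zc = np.zeros(log_response.shape, dtype=bool)
    signs = np.signbit(log_response)
    zc[:, :-1] |= signs[:, :-1] != signs[:, 1:]   # horizontal sign flips
    zc[:-1, :] |= signs[:-1, :] != signs[1:, :]   # vertical sign flips
    return zc
```

Running this at several values of `sigma` yields the multi-scale descriptions the abstract alludes to.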
Item Developing computer-generated stereoscopic haptic images (Texas Tech University, 1998-12) Watson, Kirk L
Not available

Item Digital Enhancement of Degraded Fingerprints (Texas Tech University, 1985-08) Barsallo, Adonis Emmanuel
Once latent fingerprints have been obtained, there are two major operations to be performed: enhancement and classification. This thesis focuses strictly on digital image enhancement, and for this purpose several techniques for fingerprint enhancement were studied and developed. These techniques may be implemented in either the spatial or the spatial frequency domain. Processing techniques in the spatial frequency domain are based on modifying the two-dimensional Fourier transform of the image; the approaches in the spatial domain are based on direct manipulation of the pixels. Since the contrast of latent fingerprints is space-variant, a spatially adaptive technique in the spatial domain was studied and developed further: adaptive binarization, which makes use of moving windows with spatially varying parameters. A double pass of this algorithm improved the fingerprint appearance even further. Other spatial domain methods studied were contrast stretching/sliding and the image complement, which provided a better quality of print for the purpose of ridge/valley discrimination. Spatial frequency domain methods developed included ideal and Butterworth linear filters, which were individually tested, and a homomorphic filtering process, which made use of generalized linear filters. This latter method proved effective in removing multiplicative degradations that had been introduced into the latent fingerprint. A one-dimensional fingerprint diffusion model led to the development and application of a Laplacian operator, which more accurately describes such a diffusion process, resulting in a fingerprint image with sharpened edges.
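Barsallo's exact window parameterization is not given in the abstract; a minimal Niblack-style stand-in for adaptive binarization with a moving window might look like this (the `window` size and offset `k` are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_binarize(image, window=15, k=0.0):
    """Local threshold: pixel > (window mean + k * window std).

    Parameters are placeholders, not values from the thesis.
    """
    img = image.astype(np.float64)
    mean = uniform_filter(img, size=window)
    sq_mean = uniform_filter(img * img, size=window)
    std = np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))
    return img > mean + k * std
```

Because the threshold tracks the local statistics, the space-variant contrast of a latent print is handled without any global tuning.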
Item DSP-enhanced vision recognition (Texas Tech University, 1990-12) Sharbutt, Albert C
This thesis addresses the problem of the time involved in performing image recognition or image processing. Digital signal processing chips can be used alone or as additions to existing processors to increase the throughput and versatility of these systems. The thesis does not seek to develop a vision recognition system, but examines several common vision recognition tasks to determine how much improvement a digital signal processing chip could offer and how much effort is required to achieve this benefit.

Item Filtering and glint artifact processing of speckle corrupted shear beam images with wavelet and morphological filters (Texas Tech University, 2000-05) Wilson, Mark Phillip
Coherent reflective imaging inherently contains signal-independent and signal-dependent speckle noise. Shear beam imaging is a coherent reflective imaging technique which inherently contains signal-dependent speckle noise with characteristic glints. The removal of the speckle using a minimal number of frames, while preserving the glints, is the major concern of this thesis. The glints are preserved by segmenting them from each frame prior to speckle removal. In this thesis, we have experimented with two sets of shear beam images; each set contains unaveraged coherent snapshots of two satellites, the DMSP and OCEANR. Our objective is to reconstruct a coherent diffraction-limited truth image with a small number of averaged frames. Currently, up to 100 frames are needed to construct a diffraction-limited truth image; the individual frames contain speckle noise, which is removed by the averaging process. Several methods, described within, have been used in the past to eliminate this noise. In this thesis, we introduce morphological and wavelet filters to achieve improved results over previous filters.

Item High capacity data hiding system using BPCS steganography (Texas Tech University, 2003-12) Srinivasan, Yeshwanth
Not available

Item Image registration using power spectrum and cepstrum techniques (Texas Tech University, 1987-05) Lee, Dah-jye
The use of power cepstrum analysis in image registration is explored. Rotational shifts and translational shifts are corrected separately. The technique involves two main ideas. First, after pre-processing to remove redundant information and information that could result in false registration parameters, a rotational shift is converted into a translational shift. Second, power cepstrum analysis is used to correct the translational shift. With these two ideas, the new algorithm works very fast and accurately compared to conventional correlation techniques. The primary objective of this algorithm is to compare fundus photographs taken at different times, making possible the early detection of some retinal diseases. However, the algorithm can also be used for other applications, such as analyzing aerial photographs. Numerous pictorial examples illustrating the technique as applied to retinal photographs are included throughout this thesis.
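The pre-processing that converts a rotational shift into a translational one is not reproduced here; the core translational step, estimating a shift from the power cepstrum of the summed image pair, might be sketched as follows (function names are illustrative):

```python
import numpy as np

def power_cepstrum(image):
    """Power cepstrum: inverse FFT of the log power spectrum."""
    spectrum = np.abs(np.fft.fft2(image)) ** 2
    return np.abs(np.fft.ifft2(np.log(spectrum + 1e-12)))

def estimate_shift(img_a, img_b):
    """Estimate the translational (row, col) shift between two equal-sized images.

    Summing the pair yields a composite whose log spectrum carries a cosine
    ripple at the displacement, so the cepstrum peaks at that shift (with
    echoes at its multiples and a mirrored peak at the wrapped negative shift).
    """
    cep = power_cepstrum(img_a + img_b)
    cep[0, 0] = 0.0  # suppress the zero-quefrency term; in practice a small
                     # neighborhood around the origin is usually masked as well
    return np.unravel_index(np.argmax(cep), cep.shape)
```

The mirrored peak means the sign of the shift is ambiguous from the cepstrum alone; a practical implementation would check both candidate displacements against the image pair.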