Browsing by Subject "Image reconstruction"
Now showing 1 - 10 of 10
Edge detection in noisy images using directional diffusion with log filters (Texas Tech University, 1996-08) Liu, Chang

Image noise reduction and edge detection are important image processing techniques. Traditional isotropic smoothing reduces noise at the cost of blurring the image; anisotropic smoothing tries to preserve image features while reducing noise. This thesis presents an anisotropic smoothing implementation that uses only a 3-by-3 window and is therefore easy to implement in hardware. Edge detection is also studied: a pre-processing technique is proposed for position-dependent brightness correction, which makes edge thresholding easier. We also present an algorithm that implements Gaussian filtering more accurately than Gauss-Hermite integration at the cost of an insignificant increase in computational complexity.

Image coding with multiresolution morphological pyramid and vector quantization (Texas Tech University, 1995-12) Zhang, Zhiyang

The morphological pyramid has proven to be a useful tool in image compression because of its low computational complexity, simple implementation, and good compression performance based on minimizing the entropy of the error pyramid [7,9]. Several morphology-based pyramid decomposition techniques already exist [6-11,21,22,24]; these apply morphological filters prior to downsampling the images. The coding schemes developed from them commonly omit the first error image of the error pyramid to achieve high compression ratios, but fine image details may be lost in the process. To obtain a high-quality lossy image, an estimator for the first error image was developed in [22].
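As context for the decomposition just described, one level of a morphological pyramid can be sketched as a morphological opening (erosion followed by dilation), a 2x downsampling, and an error image against the re-expanded approximation. This is only an illustrative sketch with hypothetical function names and a nearest-neighbour expansion; it is not the coding scheme of [22]:

```python
def erode(img):
    """3x3 grayscale erosion: each pixel becomes the min of its neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = min(img[ny][nx]
                            for ny in range(max(0, y - 1), min(h, y + 2))
                            for nx in range(max(0, x - 1), min(w, x + 2)))
    return out

def dilate(img):
    """3x3 grayscale dilation: each pixel becomes the max of its neighborhood."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(img[ny][nx]
                            for ny in range(max(0, y - 1), min(h, y + 2))
                            for nx in range(max(0, x - 1), min(w, x + 2)))
    return out

def pyramid_level(img):
    """One analysis step: morphological opening, 2x downsampling, and the
    error image (original minus the re-expanded coarse approximation)."""
    opened = dilate(erode(img))
    down = [row[::2] for row in opened[::2]]
    # nearest-neighbour expansion back to the original grid
    up = [[down[y // 2][x // 2] for x in range(len(img[0]))]
          for y in range(len(img))]
    error = [[img[y][x] - up[y][x] for x in range(len(img[0]))]
             for y in range(len(img))]
    return down, error
```

Recursing `pyramid_level` on `down` yields the full pyramid; the error images carry the detail that the coarse levels discard, which is why omitting the first error image trades detail for compression.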
By using this estimator, the bits per pixel required to code the first error image can be reduced by 30 to 40 percent, yielding "near lossless" compression.

Image recovery and segmentation using competitive learning in a neighborhood system (Texas Tech University, 2002-12) Li, Chengcheng

Image restoration and segmentation are important image processing techniques. In recent years, many researchers in the image restoration field have based their methods on the calculus of variations and mathematical statistics, without directly incorporating the observed principles of low-level animal vision. In a previous work, a new algorithm incorporating a competitive learning method was developed, based on principles of the low-level mammalian visual system, which approaches image restoration and segmentation from a more direct and intuitive perspective. That algorithm yielded improved performance over previous studies in synthetic image restoration. This paper furthers its development and application, with a threefold purpose. First, it presents results for reconstructing and estimating uncorrupted images from distorted or noisy images using the competitive learning method, and evaluates the CLRS (Competitive Learning in image Restoration and Segmentation) method by experimenting with the algorithm on a variety of images and a wide range of parameters, grounded in both practice and theory; the meaning and value range of key parameters are discussed in detail. Second, we enlarged the neighborhood used in CLRS to examine the influence of neighborhood range. Third, we reviewed current methods in both image restoration and edge detection, and compared the restoration and segmentation results obtained from CLRS with those of the other methods.
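The CLRS update rule is not spelled out in this abstract. As a rough illustration of the underlying idea only, plain winner-take-all competitive learning over scalar gray levels can be sketched as follows; all names and parameter values here are hypothetical, and CLRS itself additionally uses a neighborhood system:

```python
def competitive_learning(samples, prototypes, lr=0.1, epochs=20):
    """Winner-take-all competitive learning: for each sample, the nearest
    prototype (the 'winner') is pulled a fraction lr toward that sample."""
    protos = list(prototypes)
    for _ in range(epochs):
        for s in samples:
            # find the winning prototype for this sample
            k = min(range(len(protos)), key=lambda i: abs(protos[i] - s))
            # move only the winner toward the sample
            protos[k] += lr * (s - protos[k])
    return protos
```

On gray levels clustered around 50 and 200, two prototypes initialized at 0 and 255 migrate to the cluster centers, which is the basic mechanism a competitive-learning restoration/segmentation scheme builds on.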
The results showed that CLRS performed consistently as well as or better than the other methods in edge preservation, with comparable performance in enhancement within region boundaries. These results are based on simulation experiments on a set of synthetic and real images corrupted by Gaussian noise. We conclude that an interactive algorithm for image reconstruction and segmentation, CLRS, has been developed, based on the principle of competitive learning.

Image recovery and segmentation using the fractal dimension (Texas Tech University, 1998-12) Pallemoni, Sharath C

Real-world objects are inherently composed of complex, rough, and jumbled surfaces, while current representational schemes use generalized cylinders or splines to describe natural surfaces; a model is therefore needed that can describe all naturally occurring surfaces. Alex Pentland [8] has shown that the fractal dimension is a representation capable of succinctly describing the surfaces of natural objects such as mountains, trees, and clouds, and his paper describes a method of computing it. In this work, Pentland's [8] algorithm for evaluating the fractal dimension was applied to a set of images. The results reveal a distinct distribution of smooth edges, such as the texture of the shirt and the facial variations in Figure 2, the surface variations of the road in Figure 3, and the varied distribution of regions with people and houses in Figure 4. In addition to producing Sun Raster images of the fractal dimension using Pentland's [8] algorithm, this work also evaluated the hard edges present in the images using the Sobel operator edge-detection scheme.
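The Sobel edge-detection step referred to here is standard; a minimal sketch on a list-of-lists grayscale image, using the common |Gx| + |Gy| approximation to the gradient magnitude:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| using the 3x3 Sobel kernels.
    Border pixels are left at 0 for simplicity."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out
```

Thresholding this magnitude map gives the hard-edge image that, in this work, is logically OR-ed with the fractal-dimension soft-edge map.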
Applying the Sobel operator to Figure 3 produced an image that distinctly indicated the hard edges in the original, clearly bringing out the outlines of the people and of the other objects present in the scene. The Sobel edge-detected image captured all the sharp edges, represented by sudden transitions between homogeneous regions in the original image (Figure 3). Finally, the results obtained using the fractal dimension and those obtained using the Sobel operator were logically OR-ed to capture both hard and soft edges. The test images used for this OR operation were the outputs of the fractal dimension algorithm and the Sobel edge detector on Figure 3. As can be seen from the final result (Figure 49), both the soft-edge distribution of the fractal dimension image (Figure 47) and the hard edges found in the Sobel operator image (Figure 48) were successfully captured in the combined image. The final results thus represent, to a degree, a previously non-existent model that integrates Pentland's [8] fractal dimension approach for evaluating soft edges with the Sobel operator for hard-edge detection. The work shows mixed results for the proposed model and has clear potential for improvement.

Pyramidal stereo matching and optimal surface recovery for 3-D visualization (Texas Tech University, 2002-08) Corona, Enrique

Three-dimensional surface recovery from a pair of stereoscopic images is a well-known ill-posed problem whose solutions depend mainly on correctly measuring the shifts between corresponding points (disparities) in the images acquired by a known imaging system. Noise, occlusions, and distortion in the image pair make finding precise disparities difficult and very time-consuming.
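The correlation-based disparity search at the heart of such methods can be illustrated in one dimension: for a window in the left scanline, find the shift of the right-scanline window that maximizes the correlation coefficient. This is a sketch of the general idea only, not the two-stage dynamic programming technique of the thesis, and every name and parameter here is illustrative:

```python
from statistics import mean, pstdev

def ncc(a, b):
    """Normalized cross-correlation coefficient of two equal-length windows."""
    ma, mb = mean(a), mean(b)
    sa, sb = pstdev(a), pstdev(b)
    if sa == 0 or sb == 0:          # flat window: correlation undefined
        return 0.0
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) * sa * sb)

def disparity(left, right, x, win=3, max_shift=4):
    """Disparity at position x of a scanline: the leftward shift of the
    right-image window that best correlates with the left-image window."""
    ref = left[x:x + win]
    best, best_d = -2.0, 0
    for d in range(max_shift + 1):
        if x - d < 0:
            break
        c = ncc(ref, right[x - d:x - d + win])
        if c > best:
            best, best_d = c, d
    return best_d
```

Collecting these coefficients for every position and shift yields exactly the kind of cross-correlation volume through which an optimal surface can then be traced.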
This work presents a three-dimensional surface restoration method based on recovering the optimum surface within a 3-D cross-correlation coefficient volume via a two-stage dynamic programming technique. The procedure is applied to a set of optic nerve head (ONH) images, which are used to derive clinical measures of glaucoma progression. Registration of these images is performed through a two-step coarse-to-fine procedure using power cepstrum and cross-correlation operations, while a local registration based on the weighted mean of second-degree polynomials is used for image fitting. Variations in ONH topography can be measured through cup-to-disc ratios computed from the 3-D surface generated from longitudinal stereo disc photographs of glaucoma patients spanning several years. These computer-generated cup-to-disc volume ratios correlate well with the traditional stereo cup-to-disc ratios computed manually from clinical interpretations. Such an algorithmic approach to semi-automated computation of cup-to-disc volume ratios may provide a more precise and repeatable measure of glaucoma progression than existing clinical measures. Moreover, the 3-D surface recovery technique developed in this thesis may serve as a general technique for visualizing 3-D objects in a natural scene.

Quantitative PAT with unknown ultrasound speed: uncertainty characterization and reconstruction methods (2015-05) Vallélian, Sarah Catherine; Ren, Kui; Ghattas, Omar; Müller, Peter; Tsai, Yen-Hsi; Ward, Rachel

Quantitative photoacoustic tomography (QPAT) is a hybrid medical imaging modality that combines high-resolution ultrasound tomography with high-contrast optical tomography. The objective of QPAT is to recover certain optical properties of heterogeneous media from ultrasound signals, generated by the photoacoustic effect, measured on the surfaces of the media.
Mathematically, QPAT is an inverse problem in which we reconstruct physical parameters of a set of partial differential equations from partial knowledge of the solution of those equations. A rather complete mathematical theory for the QPAT inverse problem has been developed in the literature for the case where the speed of ultrasound inside the underlying medium is known. In practice, however, the ultrasound speed is usually not exactly known for the medium being imaged, and using an approximate ultrasound speed in the reconstructions often yields images with severe artifacts. Little work has yet systematically investigated this issue of unknown ultrasound speed in QPAT reconstruction, and doing so is precisely the objective of this dissertation. The first part addresses how an incorrect ultrasound speed affects the quality of the reconstructed images in QPAT. We prove stability estimates in certain settings that bound the error in the reconstructions by the uncertainty in the ultrasound speed. We also study the problem numerically, adopting a statistical framework and applying tools from uncertainty quantification to systematically characterize the artifacts arising from the parameter mismatch. In the second part, we propose an alternative reconstruction algorithm for QPAT that does not assume a priori knowledge of the ultrasound speed map, but instead reconstructs it alongside the optical parameters of interest using data from multiple illumination sources.
We explain the advantage of this simultaneous reconstruction approach over the usual two-step approach to QPAT and demonstrate numerically the feasibility of our algorithm.

Restoration and segmentation of digital images by adaptive filtering (Texas Tech University, 2000-05) Castellanos, Ramiro

Segmentation of degraded images has always been a difficult problem. In coherent image acquisition systems, speckle noise is a common phenomenon that is hard to remove without further degrading the image. In this dissertation, a new method for image segmentation based on the Adaptive Fuzzy Leader Clustering algorithm (AFLC) is introduced. AFLC is a hybrid neuro-fuzzy model developed by integrating a Learning Vector Quantization (LVQ) network with fuzzy memberships. This integration provides a powerful yet fast method for recognizing embedded data structure and has shown lower misclassification rates than similar segmentation approaches. Neuro-fuzzy clustering algorithms can achieve efficient object extraction from noisy images because noise pixels can be identified during the clustering process and separated from the rest of the image. When dealing with corrupted images, the first step prior to segmentation is always enhancement of image features by filtering in either the spatial or the frequency domain. In this dissertation, a new nonlinear adaptive filter based on AFLC is developed, tailored specifically to reduce the degradation introduced by speckle noise in coherent imagery such as synthetic aperture radar (SAR) or ultrasound imaging. The results achieved by this process have been compared with those of the traditional median filter, the Kuan filter, and the connectivity-preserving morphological filter, demonstrating the superior performance of AFLC in removing speckle noise.
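The AFLC-based filter itself is not detailed in this abstract, but the traditional median filter used here as a comparison baseline can be sketched directly:

```python
from statistics import median

def median_filter(img, radius=1):
    """Classic median filter: replace each pixel by the median of its
    (2*radius+1)^2 neighborhood (clipped at borders). Isolated impulse or
    speckle outliers vanish while step edges are largely preserved."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - radius), min(h, y + radius + 1))
                    for nx in range(max(0, x - radius), min(w, x + radius + 1))]
            out[y][x] = median(vals)
    return out
```

A single bright speckle in a flat region is removed completely, which is the behavior adaptive speckle filters such as the AFLC-based one are measured against.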
We have also compared AFLC to other classification algorithms, such as those derived from statistical decision theory, and to many well-known fuzzy, neural, and neuro-fuzzy unsupervised algorithms. The concept of local cluster validity introduced in AFLC is addressed, and comparison results are presented against well-known global validity indices. Finally, the convergence criteria of the AFLC algorithm have been analyzed within the framework of stochastic approximation, and results that ensure the stability of its performance are presented.

Superresolution of real image sequence (Texas Tech University, 2004-12) Feng, Zhanpeng

Image superresolution has attracted substantial attention in the image processing community in recent years. Valuable techniques have been developed and practical results obtained. In much of the literature, however, successes are demonstrated only in synthetic simulations, which limits a technique's practical use. This thesis develops a technique to superresolve a real image sequence. The technique consists of three parts: system blur and noise removal, image registration, and sequence combination. First, system blur and noise removal is achieved with a new approach to Point Spread Function (PSF) estimation that is simple, cost-effective, and accurate compared to traditional methods. Then, image registration is performed based on inserted fiducials; translational shifts, rotation, scaling, and geometric distortions can all be handled by this method. Finally, three different frame-combining algorithms are implemented and compared. These techniques are demonstrated on an image sequence taken with a Canon EOS D30 digital camera. Quarter-pixel superresolved images with sharper edges are obtained, and the results confirm the effectiveness of these techniques.
Analyses are given in terms of performance and implementation complexity.

The effect of nonlinearity in robotic vision systems (Texas Tech University, 2000-08) Chanda, Rupen

This thesis describes a generic approach to nonlinear image processing, implemented in a real-life application. Consideration is given to the general geometric resampling process, in which output pixels are estimated by interpolating input pixels; here, however, the resampling does not occur at regular time intervals. This generic approach can be used to solve several kinds of nonlinear image restoration problems that arise when an image is grabbed by a line-scan camera. With a little modification, the algorithm can easily be implemented in hardware, so that the image can be corrected during the grabbing period.
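The interpolation of input pixels described in this last abstract can be illustrated with bilinear sampling at fractional source positions. This is a generic sketch of geometric resampling, not the thesis's algorithm; the function names and the nearest-edge clamping are illustrative choices:

```python
def bilinear_sample(img, fy, fx):
    """Estimate the intensity at fractional position (fy, fx) by bilinear
    interpolation of the four surrounding input pixels (clamped at edges)."""
    y0, x0 = int(fy), int(fx)
    y1 = min(y0 + 1, len(img) - 1)
    x1 = min(x0 + 1, len(img[0]) - 1)
    dy, dx = fy - y0, fx - x0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx   # blend along x, upper row
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx   # blend along x, lower row
    return top * (1 - dy) + bot * dy                  # blend along y

def resample(img, positions):
    """Geometric resampling: each output pixel is interpolated from the input
    grid at a (possibly irregularly spaced) source position."""
    return [bilinear_sample(img, fy, fx) for fy, fx in positions]
```

Irregular acquisition times, as with a line-scan camera, simply produce irregular source positions; the per-pixel interpolation itself is unchanged, which is why such a scheme maps naturally to hardware.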