Browsing by Subject "Image processing"
Now showing 1 - 20 of 48
Item: A high-speed system for three-dimensional X-ray and neutron computed tomography (Texas Tech University, 1999-05). Davis, Anthony Wayne.
Computed tomography for nondestructive evaluation applications has been limited by system cost, resolution, and the time required to acquire three-dimensional data sets. FlashCT (Flat panel Amorphous Silicon High resolution Computed Tomography) is a system developed at Los Alamos National Laboratory to address these three problems. Developed around a flat panel amorphous silicon detector array, FlashCT is suitable for high-energy X-ray and neutron computed tomography at 127 micron resolution. For objects smaller than 8 inches in any dimension, the system can generate 360 views to create a high-resolution three-dimensional tomographic dataset in less than 40 minutes, many times faster than conventional linear detector array systems. Overall system size is small, allowing rapid transportation to a variety of radiographic sources. During system development, issues including integration time adjustment, exposure monitoring, and detector flaw correction were addressed to provide high-quality output images suitable for later reconstruction into two- or three-dimensional density maps. System control software was developed in LabVIEW for Windows NT to allow multithreading of data acquisition, data correction, and staging motor control. The system control software simplifies data collection and allows fully automated control of the data acquisition process, leading toward remote or unattended operation. The custom data processing software provides a simple graphical interface to control the calibration, filtering, and reconstruction of the acquired data.

Item: A study of noise effects in phase reconstruction from phase differences (Texas Tech University, 1996-12). Fox, James L.
Not available.

Item: A study of statistical image classification and enhancement (Texas Tech University, 1984-08). Tzeng, Mien-huei.
Not available.

Item: A study of very low bit rate video coding (Texas Tech University, 1997-08). Wang, Naxin.
This work focuses on the low bit rate video coding used in video conferencing systems. A simplified segmentation-based coding approach is applied to the H.263 and MPEG-1 standards. This approach uses motion information to separate the moving objects from the rest of the scene and segments the moving macroblocks into objects. Based on the features of video conferencing sequences, the encoder is optimized so that more bits are spent on these moving objects while the others are treated as background, which saves a significant number of bits. The encoder can select the objects by their sizes or let the viewer select them. The quality of the output is satisfactory based on our observations.

Item: Adaptive clustering for image segmentation (Texas Tech University, 1998-12). Neeruganti, Jagadeesh.
The purpose of image segmentation is to separate different objects embedded in an image. Many image segmentation techniques are available in the literature. Some of the simple techniques employ thresholding based on the gray-level histogram, while a number of more sophisticated techniques have been developed in recent years. Among the recent techniques, limited success has been achieved by employing fuzzy self-supervised neural networks for object extraction. This work reviews the basic segmentation techniques and demonstrates the application of adaptive clustering techniques, which make use of neural networks and fuzzy methods, to image segmentation.
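For reference, the gray-level histogram thresholding mentioned above as a simple baseline can be sketched in a few lines. This is a generic Otsu-style illustration assuming an 8-bit grayscale NumPy array; it is not the neuro-fuzzy method developed in the thesis.

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the gray level that maximizes between-class variance
    of the image histogram (a simple segmentation baseline)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Toy usage: a synthetic two-region image.
img = np.concatenate([np.full((64, 64), 60, np.uint8),
                      np.full((64, 64), 190, np.uint8)], axis=1)
t = otsu_threshold(img)
mask = img > t  # foreground/background split
```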
The adaptive clustering techniques used are two neuro-fuzzy techniques, namely the Integrated Adaptive Fuzzy Clustering (IAFC) and Adaptive Fuzzy Leader Clustering (AFLC). The performance of these techniques is compared with that of the fuzzy c-means (FCM) algorithm as applied to image segmentation.

Item: Adaptive clustering for segmentation and classification (Texas Tech University, 2000-05). Nagarajan, Kavitha.
The purpose of image segmentation is to extract different objects embedded in an image. Clustering algorithms have been used for image segmentation for many years. In recent years, many sophisticated image segmentation techniques have been developed, some of them employing fuzzy self-supervised neural network based clustering algorithms. This work reviews neuro-fuzzy clustering techniques and basic image segmentation techniques, and demonstrates the application of adaptive clustering to segmentation. The Integrated Adaptive Fuzzy Clustering (IAFC) algorithm, which combines neural networks with fuzzy optimization constraints, is discussed in detail. The application of IAFC in image segmentation is explored.

Item: Adaptive wavelet filter design for digital signal processing systems (Texas Tech University, 2000-12). Kustov, Vadim Michailovich.
The discrete wavelet transform has been used in many image/signal processing applications in recent years. However, the design of optimized and adaptive wavelet filter banks is still a significant research topic, specifically in image/signal compression. A number of advanced wavelet-based lossy compression algorithms provide high-fidelity reconstruction of input images at computationally intensive cost. The present work investigates the potential and the limitations of optimized adaptive design of two-channel perfect reconstruction filters when the signal in a channel is subjected to coarse quantization during the encoding process of such advanced compression algorithms. A real-time optimal two-channel perfect reconstruction filter bank design algorithm has been developed and implemented in a digital signal processor. The algorithm has been used in a newly developed execution time reduction method to reduce the computational costs and data storage requirements of image compression algorithms. A reduction of execution time by two to three times has been achieved without adding appreciable distortion to the reconstructed image.

Item: An adaptive vector quantization technique with a fuzzy distortion measure for efficient image coding (Texas Tech University, 1996-08). Pemmaraju, Suryalakshmi V.
Digital image compression techniques are currently experiencing significant growth due to diverse applications demanding efficient storage and transmission of increasing image data contents. These compression techniques involve representation of an image with a reduced number of bits per pixel by exploiting the redundancy present within an image. In lossy compression, information theory predicts that the performance of vector quantization (VQ) is superior to that obtained using scalar quantization (SQ) in optimizing the rate-distortion function. In practice, however, the existing VQ algorithms suffer from a number of serious problems, e.g., a long search process, codebook initialization, and getting trapped in local minima. This research develops an adaptive vector quantization technique for generating an optimal codebook by employing a neuro-fuzzy clustering approach to ensure minimum distortion.
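To make the vector quantization terminology above concrete, a minimal sketch of block-based VQ encoding against a fixed codebook follows. The codebook here is a hypothetical toy; the thesis's adaptive neuro-fuzzy codebook design is not reproduced.

```python
import numpy as np

def encode_vq(image, codebook, block=4):
    """Map each non-overlapping block x block patch to the index of
    its nearest codeword (Euclidean distance)."""
    h, w = image.shape
    idx = np.empty((h // block, w // block), dtype=np.int64)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i+block, j:j+block].astype(float).ravel()
            d = np.sum((codebook - patch) ** 2, axis=1)
            idx[i // block, j // block] = int(np.argmin(d))
    return idx

# Hypothetical 8-entry codebook of flat 4x4 patches.
codebook = np.linspace(0, 255, 8)[:, None] * np.ones((8, 16))
image = np.random.randint(0, 256, (32, 32))
indices = encode_vq(image, codebook)  # 3-bit index per 16-pixel block
```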
In addition, a multiresolution decomposition of an image is used as a preprocessing stage to transform the image into a form that is more suitable for quantization, coding, and progressive transmission. The multiresolution wavelet decomposition of an image is performed using Daubechies coefficients prior to vector quantization, and a multiresolution codebook scheme is used for quantizing the sub-images at different resolutions. This integrated approach of adaptive vector quantization with wavelet-based pyramid image decomposition significantly facilitates the compression and coding processes, thereby allowing higher compression ratios with acceptable visual quality. Experimental results of this new approach show significant improvement in performance as compared to a variable block size vector quantization (VBQ). The superior performance of this integrated algorithm has been validated by applying it to several classes of images and comparing the performance in terms of mean-squared error (MSE), peak signal-to-noise ratio (PSNR), bit rates, and visual fidelity.

Item: An analysis of radiometric correction effects on Landsat Thematic Mapper imagery (1991-05). Waits, David Allan; Fish, Ernest B.; Wanjura, Donald F.; Wester, David B.; Davidson, Claud M.; Templer, Otis W.
Land-use classifications and spectral indices are commonly created from raw radiance satellite data. These data are known to be distorted by sensor instrumentation errors and atmospheric contributions. The overall objective of this study was to evaluate the effects of different radiometric corrections of Thematic Mapper (TM) data on land-use classification results and on the derivation of spectral indices. A Landsat-4 TM digital image of a diverse agricultural area in the High Plains region of eastern New Mexico was the primary data source. Ancillary data incorporated into the study included extensive field verification data for a study area of approximately 1,820 square kilometers; ground-based radiometer-derived spectral response data for commonly grown agricultural crops; and meteorological data used as input parameters for atmospheric modeling with the Lowtran-7 atmospheric correction program. Four different geometrically corrected image data sets were analyzed. The first was raw radiance data in radiometrically uncorrected form. The other three images were radiometrically corrected transforms created using procedures that adjusted the raw data for radiometric calibration and atmospheric correction. All four images were classified in terms of land use using identical training fields. Supervised classifications were developed using ground truth data, and quantitative analyses were performed on all resulting classifications. Ground-based spectral response data for various land-use types were compared qualitatively to response data derived from the raw and radiometrically corrected image data for the same land-use types. Four spectral index models were applied to each of the four image data sets. The derived spectral indices were transforms that emphasized the quantitative differences among image data sets. The results showed no material differences in classification accuracy among the four image data sets. Thus, it does not appear necessary to perform radiometric corrections on raw radiance data to improve classification accuracy. Spectra derived from atmospherically corrected image data sets more closely approximated "true" spectral response patterns as obtained by a ground-based radiometer.
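As one concrete example of a spectral index of the kind evaluated in the Landsat study above, the widely used NDVI can be computed from the red and near-infrared bands. This is a generic sketch with synthetic band arrays, not the study's specific four index models.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy TM-like bands (band 4 = near-infrared, band 3 = red).
nir = np.random.randint(0, 256, (100, 100))
red = np.random.randint(0, 256, (100, 100))
index = ndvi(nir, red)  # values near +1 suggest dense vegetation
```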
Each of the various components of the radiometric correction process was found to contribute significantly to the derivation of spectral index values.

Item: Calibration and three-dimensional reconstruction using epipolar constraints on a structured light computer vision system (Texas Tech University, 1997-05). Lin, Changxing.
A new structured light computer vision system was developed to determine three-dimensional geometric information about objects. The system was composed of a dot-matrix-pattern laser projector and two cameras (labeled A and B). Camera A is called the main camera. Camera B functions as a checking device to determine the correct image matching between the main image and the projector, so it is called the checking camera. This dissertation makes three contributions. First, a new camera calibration technique is provided, in which the image center, uncertainty scale factor, camera focal length, rotation matrix, and translation vector can be determined using at least seven noncoplanar calibration points; the orthogonality of the rotation matrix is satisfied not only theoretically but also numerically in actual calibration; all intrinsic and extrinsic parameters can be determined using the same set of data; no assumption is needed for the world coordinate system setup; and no nonlinear techniques are required. Second, a new linear approach is developed for estimating the epipolar lines on the main camera (A) related to the projector. Existing methods cannot guarantee that all image points on the same epipolar line on the main camera, related to the projector, have the same corresponding epipolar line on the projector, which violates the epipolar geometric constraints. The approach developed here guarantees that all points on the same epipolar line on the main image, related to the projector, have the same corresponding epipolar line on the projector. Third, two checking-point equations are given to determine the correct image matching among the main image, the checking image, and the projector. The methods developed here require only the epipolar lines on the projector related to the main camera (A); calibration of the projector is not required. A review of the state of the art is given in the first three chapters. All methods developed here were verified experimentally.

Item: CCD imaging with a TMS34010 graphics system processor (Texas Tech University, 1990-05). Mueller, Curtis Wayne.
Not available.

Item: Codebook optimization in vector quantization (Texas Tech University, 1999-12). Zhang, Xiaoxi.
Digital image processing techniques were introduced early in the twentieth century. One of the first applications was improving digitized newspaper pictures sent by submarine cable between London and New York in the 1920s, which reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. In 1964, the Jet Propulsion Laboratory began using computers to improve image quality [5]. From 1964 until the present, the field of image processing has grown vigorously. It has become a prime area of research not only in electrical engineering but also in many other disciplines such as computer science, health science, and geography. However, representing a digitized image may require an enormous amount of data. Some images, such as medical images, have higher resolution and therefore require even larger amounts of memory.
Due to the vast amount of data associated with images and video, compression is a key technology for reducing the amount of data required to represent a digital image. The reason a digital image can be compressed is that it contains data redundancies; when the redundancies are reduced or eliminated, the data is compressed. There are many compression methods, and they are normally classified into two main categories: lossless and lossy compression. This thesis focuses on vector quantization, which is a lossy compression method. Based on Shannon's theory, coding systems can perform better if they operate on vectors or groups of symbols rather than on individual symbols or samples [9]. The objective of this research was to compress images using LBG-VQ [21] (the algorithm introduced by Linde, Buzo, and Gray) both in the spatial domain and in the wavelet transform domain [6], and to compare it with other recently developed vector quantization techniques.

Item: Composition-guided image acquisition (2004). Banerjee, Serene; Evans, Brian L.
To make a picture more appealing, professional photographers apply a wealth of photographic composition rules, of which amateur photographers are often unaware. This dissertation aims at providing in-camera feedback to the amateur photographer while taking pictures. The proposed algorithms do not depend on prior knowledge of the indoor/outdoor setting or scene, and are amenable to software implementation on fixed-point programmable digital signal processors available in digital still cameras. The key enabling step in automating photographic composition rules is to locate the main subject. Digital still image acquisition maps the 3-D world onto a 2-D picture. Using the 2-D picture alone, segmenting the main subject without prior knowledge of the scene is ill-posed. Even with prior knowledge, segmentation is often computationally intensive and error prone. This dissertation defends the idea that reliable main subject segmentation without prior knowledge of scene and setting may be achieved by acquiring a single picture in which the optical system blurs objects not in the plane of focus. After segmentation, photographic composition rules may be automated. In this context, segmentation only needs to approximately, not precisely, locate the main subject. In this dissertation, I combine optical and digital image processing to perform the segmentation of the main subject without prior knowledge of the scene. In particular, I propose to acquire a picture in which the main subject is in focus and the shutter aperture is fully open. The lens optics will blur any object not in the plane of focus. For the acquired picture, I develop a computationally simple one-pass algorithm to segment the main subject. The post-segmentation objective is to automate selected photographic composition rules. The algorithms can be applied either to the picture taken with the out-of-focus objects blurred, or to a user-intended picture with the same focal length settings. This way, in-camera feedback can be provided to the amateur photographer in the form of alternate compositions of the same scene.
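A rough illustration of the defocus idea described above: the in-focus main subject responds strongly to a Laplacian, while optically blurred background does not. The sketch below uses local Laplacian energy as the sharpness cue, with an assumed window size and quantile; the dissertation's actual one-pass segmentation algorithm is not reproduced.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def sharpness_mask(gray, window=9, quantile=0.7):
    """Mark pixels whose local Laplacian energy is high, i.e. likely
    in focus; optically blurred regions respond weakly to the Laplacian."""
    g = gray.astype(float)
    energy = laplace(g) ** 2
    local_energy = uniform_filter(energy, size=window)
    threshold = np.quantile(local_energy, quantile)
    return local_energy > threshold  # True where the main subject is likely in focus

# Toy usage: a textured (sharp) square on a smooth background.
img = np.zeros((128, 128))
img[40:90, 40:90] = np.random.rand(50, 50) * 255
mask = sharpness_mask(img)
```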
I automate three photographic composition rules: (1) placement of the main subject obeying the rule of thirds, (2) background blurring to simulate the main subject being in motion or to decrease the depth of field of the picture, and (3) merger detection and mitigation when equally focused main subject and background objects merge as one object. The primary contributions of the dissertation are in digital still image processing. The first is the automation of segmentation of the main subject in a single still picture assisted by optical pre-processing. The second is the automation of main subject placement, artistic background blur, and merger detection and mitigation to try to improve photographic composition.

Item: Computer processing of plasmon tomography images (2012-08). Houk, Adam; Grave de Peralta, Luis; Bernussi, Ayrton A.
A method known as SPP tomography has been commonly used in recent years to analyze surface plasmon polaritons (SPPs). This method allows the creation of back focal plane (BFP) images when thin metals are used for SPP structures. The purpose of this paper is to discuss how BFP images can be analyzed and how programming can be used to build a better method for performing this analysis. A user-friendly Mathematica program gives those who need to analyze these structures a tool that is useful, potentially more accurate, and quick. A few BFP examples of hexagonal and square lattices are used to test the accuracy of the program that was developed. These structures were fabricated with a known crystal period and numerical aperture, which allows calculated comparisons for accuracy. This comparison establishes the success of the program.

Item: Digital image processing and spatial frequency analysis of Texas roadway environment (Texas Tech University, 1999-12). Tang, Zhen.
A report is presented on the acquisition, storage, processing, and analysis of digital images of both fire ant activity and small target visibility, beginning with a general introduction to background knowledge in digital image representation, covering acquisition, storage, and enhancement, and finally developing methods to extract information of interest from the digital images. Fast Fourier transform and digital image processing techniques are reviewed and utilized.

Item: Digital image processing for the study of concrete beam cracks (Texas Tech University, 1999-12). Sury, Anis Salaheddin.
This thesis presents the research that was carried out to attain these objectives. The second chapter provides an introduction and background information about digital images: their nature, the way in which they are stored in the computer, and some background on color types and color manipulation. Chapter 3 discusses the image processing algorithms that were developed for the purpose of conducting concrete crack studies, and the way in which these algorithms were tested on the digital image of a concrete sidewalk crack. To apply the techniques to actual concrete beam cracks, small reinforced concrete beams were built and tested to crack in shear, digital pictures were taken of the cracks, and these images were processed using the developed computer algorithms. Chapter 4 presents the design of the small reinforced concrete beams and their test setup, and Chapter 5 discusses the results obtained from testing these beams.
Chapter 6 is dedicated to conclusions and recommendations.

Item: Edge detection in noisy images using directional diffusion with log filters (Texas Tech University, 1996-08). Liu, Chang.
Image noise reduction and edge detection are important image processing techniques. Traditional isotropic image smoothing reduces noise at the cost of image blurring. Anisotropic smoothing tries to preserve image features while reducing noise. This thesis presents an anisotropic smoothing implementation that uses only a 3-by-3 window and is therefore easy to implement in hardware. Edge detection is also studied in this thesis. A pre-processing technique is proposed for position-dependent brightness correction, which makes edge thresholding easier. We also present an algorithm that implements Gaussian filtering more accurately than Gauss-Hermite integration at the cost of an insignificant increase in computing complexity.

Item: Equipment control and computer interfacing using LabVIEW (Texas Tech University, 2000-05). Mahmud, Muhiuzzaman.
This thesis describes two control projects undertaken at the Maddox Laboratory of Texas Tech University. The first project concerns digital image data acquisition from a CCD chip into a PC. The second project deals with the control of a magnetron sputtering system and its accessories. Both projects culminate in working LabVIEW programs that automate the controls for the two systems. CCD sensors can be classified broadly into two categories: linear sensors and area sensors. The sensor used in the data acquisition project was an area sensor of the "full-frame" type. A matrix of photo-sensitive CCD pixels generates electron packets whose charge content is proportional to the number of incident photons. After a finite integration period, during which photo-generated charges accumulate within the potential well of each CCD photo-site, the whole field of pixels is shifted toward the output node. This shift occurs in several steps. First, a vertical-shift clock pulse causes all the rows to be shifted downward by one position, so that the bottom row is shifted into a serial shift register. A series of horizontal-shift clock pulses then shifts the pixels out one by one through the shift register into the output diffusion node. The output diffusion is voltage-modulated by the charge content of a pixel, and this voltage is sensed by an output buffer amplifier. After all the pixels in a row have been shifted out to the floating diffusion at the output and sensed by the output amplifier, a row shift occurs again and the same sequence is repeated.

Item: Estimation of angular velocity using a single fixed camera (2016-05). Tuggle, Kirsten Elizabeth; Akella, Maruthi Ram; Bakolas, Efstathios.
Visual systems provide a rich representation of the observed environment and are frequently utilized in navigation and tracking across a broad class of engineering endeavors. This report explores the application of a single camera, fixed in position and orientation, to the estimation of the angular velocity of an observed rigid object undergoing general motion in three dimensions. A summary is provided of historically significant methods in motion estimation using vision, and the underlying dynamics of the problem are discussed. A particular solution is examined in which the angular velocity is estimated asymptotically via a robust integral of the sign of the error (RISE)-based observer.
The observer exploits homography techniques in image processing along with nonlinear systems theory to achieve its goal. Convergence proofs are summarized, and the observer is numerically simulated for a few simple example cases as a demonstration. A key advantage of this solution is that it does not require supplementary information in the form of knowledge of a subset of the linear velocity components or a true Euclidean length between two identifiable features on the object. This is not the case in estimation of linear velocity, where some additional knowledge is necessary to resolve scale factor ambiguities.

Item: Evaluating cotton maturity using fiber cross-sectional images (2016-08). Ouyang, Wenbin; Xu, Bugao; Chen, Jonathan Y.
The cross-section of a cotton fiber provides a direct geometric description of the fiber. Analysis of a cross-section image offers a true measure of fiber wall thickness, from which an accurate evaluation of cotton fiber maturity can be derived. The fiber image analysis system (FIAS) has been under development for several years. The previous two versions of FIAS were equipped with a traditional microscope with a limited field of view, and the old algorithms lacked the ability to detect immature fibers correctly, which produced a systematic bias in the maturity distribution. In this study, images are captured with a new hardware setup featuring a wide field of view and a high-resolution camera. A novel descriptor, the coupled-contour model (CCM), is introduced to describe the relationship between the inner and outer contours of a cotton fiber cross-section. After detection of the inner and outer contours, a triangle-area representation (TAR) is used to describe the shape of the cross-section and to determine whether it needs further processing. Cross-sections exhibiting adhering, self-rolling, scratched, or contaminated characteristics require a case-by-case study. By analyzing the algorithm efficiency for 7 randomly selected cottons out of the 104, it was found that the case-by-case study occupied about 30% of the whole processing period. This study investigated all 104 cottons with 15,473 fiber cross-sectional images. By introducing additional statistical parameters, including the mean (Mq), standard deviation (SDq), skewness (Sq), and kurtosis (Kq), a more comprehensive understanding of cotton fiber maturity was achieved. According to the maturity distribution, the 104 cottons are distinguishable and are divided into five classes, i.e., very low, low, moderate, high, and very high, from class I to class V, respectively. A comparison between AFIS and the current FIAS shows that the maturity distributions of AFIS and FIAS are noticeably different: AFIS tends to generate more normal, less skewed, and more concentrated maturity distributions, whereas FIAS provides more diversified maturity distributions.
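The distribution statistics named in the last abstract (Mq, SDq, Sq, Kq) can be computed directly once per-fiber maturity values are available. A minimal sketch with synthetic values follows; whether Kq is plain or excess kurtosis in the thesis is not stated (excess kurtosis is assumed here), and the contour-based maturity measurement itself is not reproduced.

```python
import numpy as np

def maturity_statistics(theta):
    """Summarize a sample of per-fiber maturity values with the four
    moments used to characterize a cotton's maturity distribution."""
    theta = np.asarray(theta, dtype=float)
    mean = theta.mean()
    sd = theta.std(ddof=1)
    z = (theta - mean) / sd
    skew = np.mean(z ** 3)
    kurt = np.mean(z ** 4) - 3.0  # excess kurtosis (assumption)
    return {"Mq": mean, "SDq": sd, "Sq": skew, "Kq": kurt}

# Synthetic example: 500 fibers drawn around a moderate maturity value.
rng = np.random.default_rng(0)
stats = maturity_statistics(rng.normal(0.85, 0.2, size=500))
```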