Browsing by Subject "Fourier transformations"
Now showing 1 - 14 of 14
Item: A digital physiological spectrum analyzer (Texas Tech University, 1975-05). Author: Wood, Gary Brent.
Not available.

Item: Advanced techniques for digital image processing (Texas Tech University, 1986-05). Author: Tarng, Jaw-horng.
A new algorithm for enhancing a degraded grey-scale image is proposed here. The enhancement algorithm is a locally adaptive Fourier filter which locates and analyzes the Fourier spectral information and then enhances the identifying features. Thus, it can achieve a better enhancement result than conventional homomorphic FFT techniques. By using a short-space basis implementation, a large amount of memory space can be saved, and consequently the computation speed is greatly improved. The primary objective of this algorithm is to extract linear features from a noisy image. However, the algorithm can also be modified to enhance other kinds of features. The main advantages of this algorithm are: (1) it requires a small amount of computer memory, which makes it easy to implement on small computers; (2) it has fast processing speed; and (3) it is powerful in extracting local linear features.

Item: Computational techniques for a generalized Fourier transform (Texas Tech University, 1972-05). Author: Sartain, Robert Lee.
Not available.

Item: Digital Enhancement of Degraded Fingerprints (Texas Tech University, 1985-08). Author: Barsallo, Adonis Emmanuel.
Once latent fingerprints have been obtained, there are two major operations to be performed: enhancement and classification. In this thesis we focus strictly on digital image enhancement, and for this purpose several techniques for fingerprint enhancement were studied and developed. These techniques may be implemented in either the spatial or the spatial-frequency domain. Processing techniques in the spatial-frequency domain are based on modifying the two-dimensional Fourier transform of the image. The approaches in the spatial domain are based on direct manipulation of the pixels.
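The spatial-frequency-domain approach described above works by modifying the two-dimensional Fourier transform of the image. A minimal sketch of that idea follows; the Gaussian high-emphasis filter here is an illustrative stand-in, not the actual filter designed in either thesis:

```python
import numpy as np

def frequency_domain_enhance(image, sigma=10.0, boost=1.5):
    """Enhance an image by modifying its 2D Fourier transform.

    A high-emphasis Gaussian filter (illustrative choice only)
    amplifies high spatial frequencies relative to low ones,
    sharpening ridge-like detail.
    """
    rows, cols = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))        # centered spectrum
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2         # squared distance from DC
    H = 1.0 + (boost - 1.0) * (1.0 - np.exp(-D2 / (2 * sigma ** 2)))
    G = H * F                                      # modify the transform
    return np.real(np.fft.ifft2(np.fft.ifftshift(G)))

# Tiny demo on a synthetic "ridge" pattern.
x = np.sin(np.linspace(0, 8 * np.pi, 64))
img = np.tile(x, (64, 1))
out = frequency_domain_enhance(img)
```

Spatial-domain methods, by contrast, would operate on the pixel array `img` directly, with no transform step.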
Since the contrast of latent fingerprints is space-variant, a spatially adaptive technique in the spatial domain was studied and developed further. Such a technique was adaptive binarization, which makes use of moving windows with spatially varying parameters. A double pass of this algorithm improved the fingerprint appearance even further. Other spatial-domain methods studied were contrast stretching/sliding and the image complement, which provided a better-quality print for the purpose of ridge/valley discrimination. Spatial-frequency-domain methods developed included linear ideal/Butterworth filters, which were individually tested, and a homomorphic filter process, which made use of generalized linear filters. This latter method proved effective in removing multiplicative degradations that had been introduced into the latent fingerprint. A one-dimensional fingerprint diffusion model approach led to the development and application of a Laplacian operator, which more accurately describes such a diffusion process, resulting in a fingerprint image in which the edges were sharpened.

Item: Fused floating-point arithmetic for DSP (2009-05). Authors: Saleh, Hani Hasan Mustafa, 1970-; Swartzlander, Earl E.
Floating-point arithmetic is attractive for the implementation of a variety of Digital Signal Processing (DSP) applications because it allows the designer and user to concentrate on the algorithms and architecture without worrying about numerical issues. In the past, many DSP applications used fixed-point arithmetic due to the high cost (in delay, silicon area, and power consumption) of floating-point arithmetic units. In the realization of modern general-purpose processors, fused floating-point multiply-add units have become attractive since their delay and silicon area are often less than those of a discrete floating-point multiplier followed by a floating-point adder.
Further, the accuracy is improved by the fused implementation since rounding is performed only once (after the multiplication and addition). This work extends the consideration of fused floating-point arithmetic to operations that are frequently encountered in DSP. The Fast Fourier Transform is a case in point since it uses a complex butterfly operation. For a radix-2 implementation, the butterfly consists of a complex multiply and the complex addition and subtraction of the same pair of data. For a radix-4 implementation, the butterfly consists of three complex multiplications and eight complex additions and subtractions. Both of these butterfly operations can be implemented with two fused primitives: a fused two-term dot-product unit and a fused add-subtract unit. The fused two-term dot-product multiplies two sets of operands and adds the products as a single operation. The two products do not need to be rounded (only the sum is normalized and rounded), which reduces the delay by about 15% while reducing the silicon area by about 33%. For the add-subtract unit, much of the complexity of a discrete implementation comes from the need to compare the operand exponents and align the significands prior to the add and subtract operations. For the fused implementation, sharing the comparison and alignment greatly reduces the complexity. The delay and the arithmetic results are the same as if the operations were performed in the conventional manner with a floating-point adder and a separate floating-point subtracter. In this case, the fused implementation is about 20% smaller than the discrete equivalent.

Item: Image registration using power spectrum and cepstrum techniques (Texas Tech University, 1987-05). Author: Lee, Dah-jye.
The use of power cepstrum analysis in image registration is explored. Rotational shifts and translational shifts are corrected separately. The technique involves two main ideas.
First, after pre-processing to remove redundant information and information that could result in false registration parameters, a rotational shift is converted into a translational shift. Second, power cepstrum analysis is used to correct the translational shift. Because of these two ideas, the new algorithm works very fast and accurately compared to conventional correlation techniques. The primary objective of this algorithm is to compare fundus photographs taken at different times, thus making possible the early detection of some retinal diseases. However, the algorithm can also be used for other applications, such as analyzing aerial photographs. Numerous pictorial examples illustrating the technique as applied to retinal photographs are included throughout this thesis.

Item: Implementation and comparison of cosine modulated filter banks on a fixed point digital signal processor (Texas Tech University, 2004-05). Author: Bhate, Kedar Ravindra.
Cosine-modulated filter banks have an efficient structure with respect to the number of multiplication and delay elements required. They also provide another desirable feature: perfect reconstruction (PR). However, their ability to provide PR can be affected by various parameters, such as fixed-point constraints and an imperfect modulation matrix. In this thesis, the effects of these parameters on the filter bank's ability to provide PR are studied. To demonstrate the use of the filter bank in a real-time application, it is implemented using a TMS320C6211 fixed-point Digital Signal Processor (DSP).
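The efficiency of cosine-modulated filter banks comes from deriving all M analysis filters from a single prototype lowpass filter. A minimal sketch using the standard textbook modulation formula follows; the prototype design and parameters here are assumptions, not those used in the thesis:

```python
import numpy as np

def cmfb_analysis_filters(prototype, M):
    """Derive M cosine-modulated analysis filters from one prototype
    lowpass filter (standard textbook modulation; illustrative only)."""
    N = len(prototype) - 1
    n = np.arange(N + 1)
    filters = []
    for k in range(M):
        phase = (-1) ** k * np.pi / 4
        c = np.cos((2 * k + 1) * np.pi / (2 * M) * (n - N / 2) + phase)
        filters.append(2 * prototype * c)
    return np.array(filters)

# Example: a 4-band bank from a simple windowed-sinc prototype
# (a stand-in for a properly optimized PR prototype).
M = 4
L = 8 * M
n = np.arange(L)
proto = np.sinc((n - (L - 1) / 2) / (2 * M)) * np.hamming(L) / (2 * M)
bank = cmfb_analysis_filters(proto, M)
```

Only one prototype is stored; the modulation is where fixed-point quantization of the cosine terms (the "imperfect modulation matrix") can erode perfect reconstruction.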
The implementation uses the TLC320AD535 audio Encoder/Decoder (CODEC), the Multichannel Buffered Serial Port (McBSP), and the Enhanced Direct Memory Access (EDMA) controller on the DSK6211 board to continuously process and reconstruct digitized audio data.

Item: Investigations of biological interactions by hydrogen deuterium exchange Fourier transform ion cyclotron mass spectrometry: novel methods, automated analysis and data reduction (2003). Authors: Blakney, Gregory Terrell; Laude, David A.
Hydrogen/deuterium (H/D) exchange is used to investigate biological interactions by Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS). For the first time, a series of oligonucleotides of varying length is interrogated by negative-mode gas-phase H/D exchange. The data presented describe the reactivity of these oligonucleotides and correlate that reactivity to functional moieties of the model compound. The results of the study are subjected to center-of-mass analysis, a technique that uses the high mass resolving power of FT-ICR MS to facilitate an isotope-counting, abundance-weighted algorithm for determining deuterium incorporation. A maximum entropy method produces the relative reaction rates for each of the model compounds. Curve fitting of the rates reveals >90% correlation between the areas of individual rate curves and the available hydrogens. The observed data are consistent with reaction mechanisms reported in the literature. This work describes the successful implementation of the thorough high resolution analysis of spectra by Horn (THRASH) algorithm for the analysis of electron capture dissociation (ECD) spectra. Speed improvements in THRASH arise from optimized libraries and the use of modern processors. The nonergodic nature of ECD alleviates deuterium scrambling and affords improved localization of exchange data. An ECD spectrum of ubiquitin is subjected to automated THRASH analysis, resulting in 85% sequence coverage.
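The abundance-weighted center-of-mass calculation described above can be illustrated with a minimal sketch; the peak lists below are invented for illustration and the charge-state correction is omitted:

```python
def centroid_mass(peaks):
    """Abundance-weighted center of mass of an isotope distribution.

    peaks: list of (m/z, abundance) pairs from one charge state.
    """
    total = sum(a for _, a in peaks)
    return sum(mz * a for mz, a in peaks) / total

# Hypothetical isotope envelopes before and after H/D exchange
# (values made up for illustration).
undeuterated = [(1000.0, 0.5), (1001.0, 0.3), (1002.0, 0.2)]
exchanged    = [(1002.0, 0.2), (1003.0, 0.5), (1004.0, 0.3)]

# The centroid shift approximates the average number of deuteriums
# incorporated (assuming a singly charged ion for simplicity).
uptake = centroid_mass(exchanged) - centroid_mass(undeuterated)
```

Weighting by abundance across the whole isotope envelope is what lets the high resolving power of FT-ICR MS be exploited, rather than tracking any single isotopic peak.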
Automated data analysis is extended to the batch processing of entire high-performance liquid chromatography (HPLC) FT-ICR MS experiments. Center-of-mass calculations are determined by combining the data from multiple scans to maximize signal. The data are output as a single spreadsheet. The results of automated and manual data processing are compared. A complete analysis of an H/D exchange study finishes in two hours, rather than the two months required for manual analysis. Results based on this method are demonstrated for biological interactions of the HIV capsid and the Nop5-fibrillarin complex. New FT-ICR hardware increases data-station throughput and reduces the amount of back exchange in liquid-phase studies. The improved data station shows a three-fold improvement in scan speed and collects scans every 1.25 s. No significant loss of performance results from the increased scan rates. Faster scan rates result in better chromatographic resolution and decreased spectral complexity.

Item: Linear estimation with a continuum of data (Texas Tech University, 1971-05). Author: Ahlers, Carl Wilkerson.
Not available.

Item: Modular pipeline fast Fourier transform algorithm (2003-05). Authors: El-Khashab, Ayman Moustafa; Swartzlander, Earl E.
A modular pipeline architecture for computing discrete Fourier transforms (DFT) is demonstrated. For an N-point DFT, two conventional pipeline √N-point fast Fourier transform (FFT) modules are joined by a specialized center element. The center element contains memories, multipliers, and control logic. Compared with a standard N-point pipeline FFT, the modular pipeline FFT reduces the number of delay lines required. Further, the coefficient memory is concentrated within the center element, reducing the storage requirements in each of the conventional FFT modules. The centralized memory and address generator provide the data storage and reordering. The data throughput of a conventional pipeline architecture is maintained with a slightly higher end-to-end latency.
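The decomposition underlying this architecture, two √N-point FFT stages joined by a twiddle-multiplying center element, follows the classic "four-step" factorization of the DFT. A minimal software sketch of that factorization (not the hardware itself) is:

```python
import numpy as np

def modular_fft(x):
    """Compute an N-point DFT as two sqrt(N)-point FFT stages joined
    by a twiddle-factor multiplication, mirroring in software the
    role of the center element between the two pipeline modules."""
    N = len(x)
    r = int(round(np.sqrt(N)))
    assert r * r == N, "N must be a perfect square"
    A = np.asarray(x, dtype=complex).reshape(r, r)
    # Stage 1: r-point FFTs down the columns (first pipeline module).
    A = np.fft.fft(A, axis=0)
    # Center element: twiddle-factor multiplication.
    n = np.arange(r)
    A *= np.exp(-2j * np.pi * np.outer(n, n) / N)
    # Stage 2: r-point FFTs along the rows (second pipeline module).
    A = np.fft.fft(A, axis=1)
    # The transpose models the data reordering handled by the center
    # element's address generator.
    return A.T.reshape(N)

x = np.random.default_rng(0).standard_normal(16)
assert np.allclose(modular_fft(x), np.fft.fft(x))
```

Only the r×r twiddle array lives in the "center", which is why concentrating the coefficient memory there shrinks the storage needed inside each conventional FFT module.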
The architecture and control logic for both radix-2 and radix-4 modular pipeline FFTs are explained and compared to the traditional pipeline FFT. Further, this methodology facilitates the hardware computation of long FFTs when compared to previous techniques. The new logic developed to control the FFT unit is similar in complexity to current systems and does not rely on any exotic components or hardware features. In fact, the control logic can be reduced to a single counter and a small amount of combinational logic. Specifically, using the modular FFT algorithm reduces the overall complexity of the hardware pipeline, permits the use of reusable modules, and does not impact the throughput. The reduction in delay lines lowers the dynamic power consumption. The hardware architecture is particularly suited to reprogrammable and custom devices. Simulations are conducted to analyze the architecture. Experimental results for both radix-2 and radix-4 FFTs are presented and compared with the conventional pipeline FFT. A numerical analysis of the modular pipeline FFT is performed and compared to that of a conventional pipeline FFT.

Item: Optical inverse filtering for real-time echo removal (Texas Tech University, 1983-08). Author: Mushtaq, Shehla.
Not available.

Item: Raman study of liquid methyl iodide and methyl fluoride (Texas Tech University, 1986-08). Author: Zyung, Taehyoung.
The liquids CH3I and CH3F were studied using a Raman spectrometer. The polarized and depolarized components of the ν2 and ν3 bands of CH3I were measured over the entire temperature range of the liquid. The recorded spectra were Fourier transformed to provide vibrational autocorrelation functions and memory functions. The vibrational autocorrelation functions and the experimental memory functions were analyzed by a memory-function procedure and an autoregressive analysis, respectively.
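The step of Fourier transforming a band profile into a time-correlation function can be sketched as follows; the Lorentzian test band and all parameters are illustrative assumptions, not data from the thesis:

```python
import numpy as np

def correlation_function(freq, intensity):
    """Fourier-transform a baseline-corrected, band-centered Raman
    profile into a normalized time-correlation function.

    freq: frequency offsets from the band center (band center at the
          middle of the array); intensity: band intensity there.
    """
    # Move the band center to index 0, inverse-FFT to the time
    # domain, then re-center the time axis.
    time_signal = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(intensity)))
    C = np.abs(time_signal)
    return C / C.max()

# Synthetic Lorentzian band: its correlation function should decay
# roughly exponentially, the classic result for a Lorentzian profile.
w = np.linspace(-50, 50, 1001)     # arbitrary frequency units
band = 1.0 / (1.0 + (w / 5.0) ** 2)
C = correlation_function(w, band)
```

The decay of `C` at short times carries the vibrational dephasing information; memory functions are then obtained from such correlation functions by a further deconvolution step.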
The Raman spectra of neat liquid CH3F were obtained as a function of temperature, and spectra in the dissolved state were also obtained. The spectra were explained qualitatively using theoretical models, and densities of states were calculated classically with a simple theoretical model for comparison with the observed spectra.

Item: Space-variant processing using phase codes and Fourier-plane sampling techniques (Texas Tech University, 1980-05). Author: Kasturi, Rangachar.
Any slowly varying linear space-variant system can, in principle, be represented holographically by spatially sampling the input plane and multiplexing the respective system transfer functions. A scheme reported earlier for implementing this technique makes use of phase diffusers in the reference-beam paths to encode sequentially recorded holograms. However, to minimize the crosstalk between the holograms upon playback, the diffusers should have good correlation properties. In this thesis, extensive computer simulations to evaluate the correlation properties of a family of binary phase codes are conducted. An alternative multiplexing technique, in which the transfer functions are sampled in the Fourier plane to generate a composite hologram, is also described. In this technique the samples of the transfer functions are placed in non-overlapping regions, and hence there is no crosstalk upon playback. However, multiple copies of the input function are required during the playback step. The results of preliminary experiments conducted to evaluate this approach for space-variant system representation are presented, including the verification of coherent addition using computer-multiplexed holograms.
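The kind of correlation-property evaluation described in the last abstract can be sketched numerically. The random ±1 codes below are illustrative stand-ins; the actual family of binary phase codes studied in the thesis is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(42)

def circular_correlation(a, b):
    """Circular cross-correlation computed via the Fourier transform."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

# Two random binary (+1/-1) phase codes of length 256.
N = 256
c1 = rng.choice([-1.0, 1.0], size=N)
c2 = rng.choice([-1.0, 1.0], size=N)

auto  = circular_correlation(c1, c1)
cross = circular_correlation(c1, c2)

# A good code pair for multiplexed holography has a sharp
# autocorrelation peak and low cross-correlation (low crosstalk
# between holograms on playback).
peak      = auto[0]                 # equals N for a +/-1 code
crosstalk = np.max(np.abs(cross))
```

Screening many candidate pairs this way, and keeping those with the smallest `crosstalk`-to-`peak` ratio, is one plausible form such simulations could take.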