Browsing by Subject "Signal processing"
Now showing 1 - 20 of 43
Item
A compensation technique for integrator leakage error in sigma-delta modulators (Texas Tech University, 1999-12) Yedevelly, Yeshoda Devi
The common problem faced in many high-resolution sigma-delta topologies is their sensitivity to the imperfections of the analog components, especially the integrator. This thesis deals in depth with the physical causes of the deviations in the integrator transfer function and their effects on Sigma-Delta modulator performance, and then proposes a solution for eliminating the integrator pole error, which has been shown to be the dominant error in the integrators. Feedback is used to eliminate the integrator leakage (pole) error, and the approach is analyzed and verified by comparing the power spectral densities of two modulators, one of which uses the technique while the other does not.

Item
A multibit cascaded sigma-delta modulator with DAC error cancellation techniques (Texas Tech University, 2004-05) Su, Chun-hsien
Noise reduction techniques are developed for a multibit cascaded sigma-delta (ΣΔ) modulator used in the analog interface of a digital signal processing system, improving its performance by reducing the errors introduced by digital-to-analog converters (DACs). The idea of the proposed architecture is to create extra feedback paths around the modulator that further reduce the DAC errors through properly designed error cancellation logic. Transfer functions show that the DAC error at the final stage of the proposed architecture is totally cancelled, while DAC errors from the other internal stages are shaped by an order higher than in a conventional cascaded modulator. The difficulty of implementing modulators with high resolution and bandwidth in circuits increases due to the imperfection of analog components in VLSI processes, so structural and circuit-level compensation techniques are generally used in developing such modulators.
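The integrator leakage (pole) error described above is easy to reproduce in a behavioral simulation. The sketch below (an illustration, not the thesis's actual model) implements a first-order sigma-delta modulator whose discrete-time integrator has a leak factor slightly below 1; feedback still forces the mean of the 1-bit output to track a DC input, while the leak's main effect (a raised in-band quantization noise floor) would show up in a PSD comparison like the one the thesis uses.

```python
import numpy as np

def sigma_delta(x, leak=1.0):
    """First-order sigma-delta modulator with a leaky integrator.
    leak = 1.0 is the ideal integrator; leak < 1.0 models the pole
    (leakage) error caused by finite amplifier gain."""
    u = 0.0                  # integrator state
    v = np.zeros(len(x))     # 1-bit output stream
    for n in range(len(x)):
        u = leak * u + x[n] - (v[n - 1] if n else 0.0)
        v[n] = 1.0 if u >= 0 else -1.0
    return v

# Decimated (averaged) output tracks a DC input in both cases.
ideal = sigma_delta(np.full(20000, 0.3), leak=1.0)
leaky = sigma_delta(np.full(20000, 0.3), leak=0.995)
```

Comparing `np.fft.fft` power spectra of `ideal` and `leaky` reproduces the kind of PSD comparison used in the thesis to verify the compensation.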
Major analog nonideal effects in a multibit cascaded ΣΔ modulator include coefficient mismatches, DAC nonlinearity errors, and integrator leakage. While providing solutions for each of these nonidealities, this dissertation focuses on minimizing the DAC error, since it causes the most performance deterioration. A configurable fourth-order (2-1-1) ΣΔ modulator is implemented for architecture verification. This modulator can be configured as the proposed architecture as well as a conventional cascaded structure with various modulator orders. The design of the system's parameters and analog blocks is fully described in this dissertation. The system is fabricated in the AMI Semiconductor (AMIS) 0.5 µm double-poly triple-metal mixed-signal process through the MOSIS service. Measurement results show that, with an on-chip error of ±0.15 LSB for each DAC and an oversampling ratio (OSR) of 32, the proposed architecture achieves an 8 dB improvement over the conventional structure.

Item
A real-time microcomputer-based data acquisition and signal processing system for non-invasive cardiac output determination (Texas Tech University, 1989-05) Ling, Hoi-kwong
Not available

Item
A VLSI optical detector array employing heterodyne detection (Texas Tech University, 1997-05) Soni, Tejvansh Singh
The integration of image sensors with circuitry for driving the sensor and performing on-chip signal processing is becoming increasingly popular for a multitude of signal processing applications. A high degree of on-chip signal processing helps miniaturize instrument systems and simplify system interfaces. In this work, the design of a powerful and versatile VLSI optical sensor array with on-chip circuitry to perform temporal electronic heterodyne detection on a pixel-by-pixel basis is presented. Heterodyne detection techniques significantly enhance the dynamic range and signal-to-noise ratio compared to baseband detectors.
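As a software illustration of the temporal heterodyne principle that the detector array above implements per pixel in hardware, the following sketch (a hypothetical, simplified model) mixes a signal centered at a beat frequency with quadrature references and low-pass averages the products to recover the envelope:

```python
import numpy as np

def heterodyne_demod(x, f_if, fs):
    """Recover the envelope of a signal centered at beat frequency f_if
    by mixing with quadrature references and low-pass (block) averaging,
    a temporal analogue of per-pixel electronic heterodyne detection."""
    t = np.arange(len(x)) / fs
    i = x * np.cos(2 * np.pi * f_if * t)   # in-phase mix
    q = x * np.sin(2 * np.pi * f_if * t)   # quadrature mix
    n = int(round(fs / f_if))              # average over one beat period
    m = len(x) // n * n
    i_lp = i[:m].reshape(-1, n).mean(axis=1)
    q_lp = q[:m].reshape(-1, n).mean(axis=1)
    return 2 * np.hypot(i_lp, q_lp)
```

For a tone 0.7·cos(2π·50t + 0.3) sampled at 1 kHz, the recovered envelope is a constant 0.7, independent of the unknown phase.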
The unavailability of heterodyne detector arrays has been a bottleneck in many image-processing systems, restricting the use of heterodyne detection techniques to scanning-based systems or systems having a single temporal output, such as acousto-optic space-integrating correlators and convolvers. The need for heterodyne detector arrays in acousto-optics has been emphasized by prominent researchers in the field.

Item
Adaptive wavelet filter design for digital signal processing systems (Texas Tech University, 2000-12) Kustov, Vadim Michailovich
The discrete wavelet transform has been used in many image/signal processing applications in recent years. However, the design of optimized and adaptive wavelet filter banks is still a significant research topic, specifically in image/signal compression. A number of wavelet-based advanced lossy compression algorithms provide high-fidelity reconstruction of input images at computationally intensive costs. The present work investigates the potential and the limitations of optimized adaptive design of two-channel perfect reconstruction filters when the signal in a channel is subjected to coarse quantization during the encoding process of such advanced compression algorithms. A real-time optimal two-channel perfect reconstruction filter bank design algorithm has been developed and implemented in a digital signal processor. The algorithm has been used in a newly developed execution-time reduction method to reduce the computational costs and data storage requirements of image compression algorithms. A reduction of execution time by two to three times has been achieved without adding appreciable distortion to the reconstructed image.

Item
An ASIC implementation of the two-dimensional Discrete Cosine Transform (Texas Tech University, 1996-08) Chen, Feng
Not available

Item
Computational process networks: a model and framework for high-throughput signal processing (2011-05) Allen, Gregory Eugene; Evans, Brian L.
(Brian Lawrence), 1965-; Browne, James C.; Chase, Craig M.; John, Lizy K.; Loeffler, Charles M.
Many signal and image processing systems for high-throughput, high-performance applications require concurrent implementations in order to realize the desired performance. Developing software for concurrent systems is widely acknowledged to be difficult, with common industry practice leaving the burden of preventing concurrency problems on the programmer. The Kahn Process Network model provides the mathematically provable property that a program's result is deterministic regardless of the execution order of its processes, including concurrent execution. This model is also natural for describing streams of data samples in a signal processing system, where processes transform streams from one data type to another. However, a Kahn Process Network may require infinite memory to execute. I present the dynamic distributed deadlock detection and resolution (D4R) algorithm, which permits execution of Process Networks in bounded memory if it is possible. It detects local deadlocks in a Process Network, determines whether the deadlock can be resolved and, if so, identifies the process that must take action to resolve the deadlock. I propose the Computational Process Network (CPN) model, which is based on the formalisms of Kahn's PN model but with enhancements designed to make it efficiently implementable. These enhancements include multi-token transactions to reduce execution overhead, multi-channel queues for multi-dimensional synchronous data, zero-copy semantics, and consumer and producer firing thresholds for queues. Firing thresholds enable memoryless computation of sliding window algorithms, which are common in signal processing systems. I show that the Computational Process Network model preserves the formal properties of Process Networks, while reducing the operations required to implement sliding window algorithms on continuous streams of data.
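The consumer firing threshold described above can be sketched as a queue that blocks until a full window of tokens is available, hands the consumer an overlapping window, but dequeues only a hop's worth, so a sliding-window operator keeps no private history buffer. This is a minimal Python illustration, not the CPN framework's actual C++ API:

```python
import threading
import collections

class ThresholdQueue:
    """FIFO queue with a consumer firing threshold: a reader blocks until
    `window` tokens are available, copies that window, but dequeues only
    `hop` tokens, so overlapping (sliding) windows are read directly
    from the queue with no separate history buffer."""

    def __init__(self):
        self._buf = collections.deque()
        self._cv = threading.Condition()

    def put(self, token):
        with self._cv:
            self._buf.append(token)
            self._cv.notify_all()

    def read(self, window, hop):
        with self._cv:
            self._cv.wait_for(lambda: len(self._buf) >= window)
            out = [self._buf[i] for i in range(window)]
            for _ in range(hop):
                self._buf.popleft()
            return out
```

A producer thread calls `put` while a consumer repeatedly calls `read(window, hop)`; with `hop < window` the consumer sees overlapping windows, the essence of the memoryless sliding-window computation described above.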
I also present a high-throughput software framework that implements the Computational Process Network model using C++ and maps naturally onto distributed targets. This framework uses POSIX threads and can exploit parallelism in both multi-core and distributed systems. Finally, I present case studies to exercise this framework and demonstrate its performance and utility. The final case study is a three-dimensional circular convolution sonar beamformer and replica correlator, which demonstrates the high throughput and scalability of a real-time signal processing algorithm using the CPN model and framework.

Item
Deep downhole testing: procedures and analysis for high-resolution vertical seismic profiling (2008-05) Li, Songcheng, 1968-; Stokoe, Kenneth H.
A study was undertaken to improve the signal quality and the resolution of the velocity profile for deep downhole seismic testing. Deep downhole testing is defined in this research as measurements below 225 m (750 ft). The study demonstrated that current testing procedures can be improved to yield higher signal quality by customizing the excitation frequency of the vibrator to the local site conditions of the vibrator-earth system. The earth condition beneath the base plate can be an important factor in the signal quality and is subject to variation with time when tests are repeated. This work proposes a convenient method to measure the site-localized natural frequency and damping ratio, and recommends using different excitation frequencies for P- and S-wave generation. Properly increasing the excitation duration of the source signal also contributes to the quality of the receiver signal. The source signature of the sinusoidal vibratory source is identified. Conventional travel-time analysis using a vibratory source generally focuses on chirp sweeps. After testing with impulsive sources and chirp sweeps and comparing the results with the durational sinusoidal source, the sinusoidal source was chosen.
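Under a simple single-degree-of-freedom assumption, the site-localized natural frequency and damping ratio mentioned above can be estimated from the free ring-down of the vibrator-earth system via the logarithmic decrement. The sketch below is an illustrative stand-in, not the measurement method actually proposed in the dissertation:

```python
import numpy as np

def ringdown_params(x, fs):
    """Estimate the damped natural frequency (Hz) and damping ratio of a
    free ring-down x sampled at fs: the spacing of successive positive
    peaks gives the damped period, and the logarithmic decrement of the
    peak amplitudes gives the damping ratio."""
    peaks = np.array([i for i in range(1, len(x) - 1)
                      if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0])
    f_d = fs / np.diff(peaks).mean()                        # damped frequency
    delta = np.mean(np.log(x[peaks[:-1]] / x[peaks[1:]]))   # log decrement
    zeta = delta / np.sqrt(4 * np.pi ** 2 + delta ** 2)
    return f_d, zeta
```

Applied to a synthetic ring-down e^(-ζωₙt)·sin(ω_d t), the estimator recovers the frequency and damping ratio used to generate it.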
This work develops an approach to identifying the source signature of the sinusoidal source and concludes that the normalized source signature depends on only four parameters: the fixed-sine excitation frequency, the duration of excitation, the damping ratio of the vibrator-earth system, and the damped natural frequency of the vibrator-earth system. Two of the parameters are designated inputs to the vibrator, and the other two are measured in the field test using the method proposed in this work. A new wavelet-response technique based on deconvolution and consideration of velocity dispersion is explored in travel-time analyses. The wavelet-response technique is also used to develop a new approach to correcting disorientation of the receiver tool. The improved downhole procedures and analyses are then used in the analysis of deep downhole test data obtained at Hanford, WA. Downhole testing was performed to a depth of about 420 m (1400 ft) at the Hanford site. Improvements in resolving the wave velocity profiles to depths below 300 m (1000 ft) are clearly shown.

Item
Design and implementation of an underwater acoustic transponder (2011-05) Perrine, Kenneth Avery; Evans, Brian L. (Brian Lawrence), 1965-; Hall, Neal A.
A transponder for underwater acoustic data communications is prototyped. The mobile transponder emits a data sequence whenever it detects a ping from a base station. The data sequence includes GPS coordinates and UTC time sent over a conservative and brief 12 kbps turbo-coded BPSK link, and a 6 kB JPEG image sent over an ambitious 67 kbps turbo-coded 16-QAM link. The range of the transponder from the base station can also be accurately derived. Several challenges exist in decoding the underwater signals at the base station receiver, including Doppler distortion and multipath.
While experimental results show that the ranges for decoding the 16-QAM signals with a single hydrophone are limited to less than 25 m, the BPSK signals prove to be much more robust, decoding at ranges of up to 625 m. Experiments with delays and transducer tether length indicate methods for improving reliability in the presence of reverberation and a thermocline. This transponder uses mostly off-the-shelf parts and is expected to improve when paired with advanced sonar array devices.

Item
Design study of a spatial signal feedback amplifier (Texas Tech University, 1976-05) Bell, Steven Vance
Not available

Item
Efficient channel estimation for block transmission systems (2006) Shin, Changyong; Powers, Edward J.

Item
Estimation in signal-dependent noise (Texas Tech University, 1980-12) Froehlich, Gary Karl
Not available

Item
Extraction of blade-vortex interactions from helicopter transient maneuvering noise (2014-05) Stephenson, James Harold; Tinney, Charles Edmund, 1975-
Time-frequency analysis techniques are proposed as a necessary tool for the analysis of acoustics generated by helicopter transient maneuvering flight. Such techniques are necessary because the acoustic signals related to transient maneuvers are inherently unsteady. The wavelet transform is proposed as an appropriate tool, and it is compared to the more standard short-time Fourier transform through an investigation using several appropriately sized interrogation windows. It is shown that the wavelet transform provides a consistent spectral representation regardless of the employed window size. The short-time Fourier transform, however, produces spectral amplitudes that are highly dependent on the size of the interrogation window, and so is not an appropriate tool for this situation. An extraction method is also proposed to investigate blade-vortex interaction noise emitted during helicopter transient maneuvering flight.
The extraction method allows blade-vortex interactions to be investigated independently of other sound sources. The method is based on filtering the spectral data calculated through the wavelet transform technique. The filter identifies blade-vortex interactions through their high-amplitude, high-frequency impulsive content. The filtered wavelet coefficients are then inverse transformed to create a pressure signature solely related to blade-vortex interactions. This extraction technique, along with a prescribed wake model, is applied to experimental data from three separate flight maneuvers performed by a Bell 430 helicopter. The maneuvers investigated include steady level flight and fast- and medium-speed advancing-side roll maneuvers. A sensitivity analysis is performed to determine the optimal tuning parameters employed by the filtering technique. For the cases studied, the optimized tuning parameters were shown to be frequencies above 7 main rotor harmonics and amplitudes stronger than 25% (−6 dB) of the energy in the main rotor harmonic. Further, it is shown that blade-vortex interactions can be accurately extracted so long as the blade-vortex interaction peak energy is greater than or equal to the energy in the main rotor harmonic. An in-depth investigation of the changes in the blade-vortex interaction signal during transient advancing-side roll maneuvers is then conducted. It is shown that the sound pressure level related to blade-vortex interactions shifts from the advancing side to the retreating side of the vehicle during roll entry. This shift is predicted adequately by the prescribed wake model. However, the prescribed wake model is shown to be inadequate for predicting blade-vortex interaction miss distance, as it does not respond to the roll rate of the vehicle. It is further shown that the sound pressure levels are positively linked to the roll rate of the vehicle.
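The filter-then-invert idea above (keep only high-amplitude impulsive wavelet coefficients, then inverse transform) can be illustrated with a one-level Haar transform, a deliberately minimal stand-in for the continuous wavelet transform used in the work:

```python
import numpy as np

def extract_impulses(x, frac=0.5):
    """One-level Haar analysis of an even-length signal: threshold the
    detail (high-pass) coefficients at frac * max |detail|, discard the
    approximation, and inverse transform, leaving a signature containing
    only the high-amplitude impulsive content."""
    s = np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / s                       # detail coefficients
    d = np.where(np.abs(d) >= frac * np.abs(d).max(), d, 0.0)
    y = np.zeros_like(x)
    y[0::2] = d / s                                   # inverse with a = 0
    y[1::2] = -d / s
    return y
```

For a smooth sine with a single sharp impulse added, the extracted signature is nonzero only at the impulse location while the smooth component is rejected entirely.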
Similar sound pressure level directivities and amplitudes are seen when vehicle roll rates are comparable. The extraction method is shown to perform admirably throughout each maneuver. One limitation of the technique is identified, and a proposal to mitigate its effects is made. The limitation occurs when the main rotor harmonic energy drops below an arbitrary threshold. When this happens, a decreased spectral amplitude is required for filtering, which leads to the extraction of high-frequency noise unrelated to blade-vortex interactions. It is shown, however, that this occurs only when there are no blade-vortex interactions present. Further, the resulting sound pressure level is identifiable, as it is significantly less than the peak blade-vortex interaction sound pressure level. Thus the effects of this limitation are shown to be negligible.

Item
Extrapolation of band-limited signals (Texas Tech University, 1995-05) Vuyyuru, Sameer
This thesis presents an investigation into a classical band-limited function extrapolation technique, the Papoulis-Gerchberg algorithm. This algorithm can be used in RADAR signal processing to estimate the Time-Varying Target Density function of a moving target at different points in space, in accordance with methods developed by Dr. Emre. The estimation of the Target Density function yields a scaled estimate of the effective cross-section of the target at that point. With all spatial points combined, we would obtain a picture of the target space at each time point, forming, in effect, a real-time image of the target space. The study explores the capabilities of the Papoulis-Gerchberg algorithm, thereby evaluating its possible use in the algorithm for estimating the Time-Varying Target Density function. The study also explores the effectiveness of the Papoulis-Gerchberg algorithm when applied to a band-limited Target Density function.
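The Papoulis-Gerchberg iteration itself is brief: alternately enforce the known samples in the time domain and the band limit in the frequency domain. A minimal sketch follows (the observation set and band edge used in the test are arbitrary illustrations, not the thesis's experiment):

```python
import numpy as np

def papoulis_gerchberg(x_known, known, band, iters=300):
    """Extrapolate a band-limited signal from partial samples by
    alternating projections: restore the known samples, then zero all
    DFT bins outside |k| <= band (the band-limiting projection)."""
    n = len(known)
    y = np.where(known, x_known, 0.0)
    for _ in range(iters):
        Y = np.fft.fft(y)
        k = np.fft.fftfreq(n, d=1.0 / n)   # integer bin indices
        Y[np.abs(k) > band] = 0.0          # band-limit projection
        y = np.fft.ifft(Y).real
        y[known] = x_known[known]          # data-consistency projection
    return y
```

Because both projections are onto convex sets containing the true signal, the error to the true band-limited signal is non-increasing, which is easy to check on a synthetic example.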
The two algorithms were simulated using MATLAB, and the results were generated as comparisons between MATLAB charts.

Item
Generalized optical linear filtering of one-dimensional signals (Texas Tech University, 1985-08) Blanchard, Lorena Ann
Not available

Item
Heterogeneous Reservoir Characterization Utilizing Efficient Geology Preserving Reservoir Parameterization through Higher Order Singular Value Decomposition (HOSVD) (2015-01-21) Afra, Sardar
Petroleum reservoir parameter inference is a challenging problem in many reservoir simulation workflows, especially for real reservoirs with a high degree of complexity, non-linearity, and dimensionality. In fact, the process of estimating a large number of unknowns in an inverse problem leads to a very costly computational effort. Moreover, it is very important to perform geologically consistent reservoir parameter adjustments as data are assimilated in the history matching process, i.e., the process of adjusting the parameters of the reservoir system so that the output of the reservoir model matches previous production data. It is therefore of great interest to approximate reservoir petrophysical properties such as permeability and porosity while reparameterizing them through reduced-order models. As we will show, petroleum reservoir models are in general complex, nonlinear, and large-scale, i.e., they have a large number of states and unknown parameters. Thus, a practical approach to reduce the number of reservoir parameters in order to reconstruct the reservoir model with a lower dimensionality is of high interest. Furthermore, de-correlating system parameters in history matching and reservoir characterization problems while keeping the geological description intact is paramount to controlling the ill-posedness of the system.
In the first part of the present work, we will introduce the advantages of a novel parameterization method based on higher-order singular value decomposition (HOSVD). We will show that HOSVD outperforms classical parameterization techniques with respect to computational and implementation cost. It also provides more reliable and accurate predictions in the petroleum reservoir history matching problem, owing to its ability to preserve geological features of reservoir parameters such as permeability. The power of HOSVD is investigated through several synthetic and real petroleum reservoir benchmarks, and all results are compared to those of classic SVD. In addition to the parameterization problem, we also address the ability of HOSVD to produce production data that accurately match those of the original reservoir system. To generate the results of the present work, we employ a commercial reservoir simulator, ECLIPSE. In the second part of the work, we address inverse modeling, i.e., the reservoir history matching problem. We employ the ensemble Kalman filter (EnKF), an ensemble-based characterization approach, to solve the inverse problem. We also integrate our new parameterization technique into the EnKF algorithm to study the suitability of HOSVD-based parameterization for reducing the dimensionality of the parameter space and for estimating geologically consistent permeability distributions. The results illustrate the characteristics of the proposed parameterization method through several numerical examples in the second part, including synthetic and real reservoir benchmarks. Moreover, the advantages of HOSVD are discussed by comparing its performance to the classic SVD (PCA) parameterization approach.
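For readers unfamiliar with the decomposition itself, HOSVD is compact to state: take the leading left singular vectors of each mode-n unfolding of the tensor as factor matrices, then project the tensor onto those bases to obtain a small core. The sketch below is a generic HOSVD/Tucker illustration, not the dissertation's reservoir-specific code:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode's axis becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    """Inverse of unfold, given the target tensor shape."""
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: per-mode factor bases plus projected core."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
         for n, r in enumerate(ranks)]
    S = T
    for n, Un in enumerate(U):
        new_shape = S.shape[:n] + (Un.shape[1],) + S.shape[n + 1:]
        S = fold(Un.T @ unfold(S, n), n, new_shape)
    return S, U

def reconstruct(S, U):
    """Multiply the core back out through every factor matrix."""
    T = S
    for n, Un in enumerate(U):
        new_shape = T.shape[:n] + (Un.shape[0],) + T.shape[n + 1:]
        T = fold(Un @ unfold(T, n), n, new_shape)
    return T
```

A tensor of exact multilinear rank (2, 3, 2) is reconstructed exactly by a rank-(2, 3, 2) HOSVD, which is the sense in which a low-dimensional core can parameterize a much larger permeability field.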
Item
High-performance [delta sigma] analog-to-digital conversion (2008-05) Tsang, Robin Matthew, 1979-; Valvano, Jonathan W., 1953-
This dissertation describes a new delta-sigma analog-to-digital converter that offers enhanced quantization noise suppression at low oversampling ratios.
This feature makes the converter attractive in applications where speed and resolution are demanded simultaneously. The converter exploits double-sampling for speed and uses a new loop filter to suppress passband quantization noise. A prototype is fabricated in 0.18-µm CMOS and tested. Results show that at 200 MS/s, the converter achieves an effective number of bits (ENOB) of 12.2 in a 12.5-MHz signal band while consuming 89 mW from a 1.8-V supply. Using a common performance metric that takes ENOB and signal bandwidth into account, the prototype outperforms all previously reported switched-capacitor delta-sigma modulators in the IEEE literature.

Item
Image compression in signal-dependent noise (Texas Tech University, 1995-08) Shahnaz, Rubeena
The performance of an image compression scheme is affected by the presence of noise in an image. This work investigates the effects of signal-dependent noise on image compression using the JPEG algorithm. Simulation results show that the achievable compression is significantly reduced in the presence of noise. The types of noise considered are signal-independent additive noise, signal-dependent film-grain noise, and speckle noise. To improve compression ratios, noisy images are pre-processed for noise suppression before compression. Two approaches are used to reduce signal-dependent noise prior to compression. In the first approach, an estimator designed specifically for a particular signal-dependent noise model is applied to the degraded image. In the second approach, the signal-dependent noise is transformed into signal-independent noise using a homomorphic transformation; an estimator designed for signal-independent noise is then applied to the transformed image, followed by an inverse homomorphic transformation. The performances of these two pre-compression noise suppression schemes are compared using different performance criteria.
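The homomorphic route in the second approach can be sketched in a few lines: a log transform turns multiplicative (speckle-like) noise into approximately additive noise, a signal-independent smoother suppresses it, and exponentiation inverts the transform. Here a toy local-mean filter stands in for the thesis's actual estimator:

```python
import numpy as np

def homomorphic_denoise(img, kernel=5):
    """Homomorphic suppression of multiplicative noise: log transform,
    local-mean (box) filtering in the log domain, then exponentiation.
    The box filter is an illustrative signal-independent smoother."""
    logim = np.log(np.maximum(img, 1e-6))      # guard against non-positive pixels
    pad = kernel // 2
    padded = np.pad(logim, pad, mode="edge")
    out = np.zeros_like(logim)
    for dy in range(kernel):                   # sum of shifted windows = box filter
        for dx in range(kernel):
            out += padded[dy:dy + logim.shape[0], dx:dx + logim.shape[1]]
    return np.exp(out / kernel ** 2)
```

On a constant image corrupted by multiplicative Gaussian speckle, the pre-processed image lies much closer to the clean one, which is what makes the subsequent JPEG stage compress better.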
Simulation results show that pre-compression noise suppression significantly increases the amount of compression obtained subsequently. The compression results for the noiseless, noisy, and restored images are compared.

Item
The importance of sediment roughness on the reflection coefficient for normal incidence reflections (2011-05) Hron, Joel Maurice; Isakson, Marcia J.; Ezekoye, Ofodike A.
This research experimentally shows the effect of sediment roughness characteristics on the acoustic reflection coefficient. This information is useful when trying to classify various types of sediment over an area. The research was conducted in an indoor laboratory tank at Applied Research Laboratories (ARL) at the University of Texas at Austin. A single-beam echo-sounder (SBES) system was developed to project and receive a wideband (3 kHz to 30 kHz) acoustic pulse. A method was developed using the system transfer function to create a custom pulse that minimizes the dynamic range over the wide frequency band. A matched filtering and data processing algorithm was developed to analyze data over the full frequency bandwidth and over smaller frequency bands. Analysis over the smaller bands showed the effect of the roughness on the reflection coefficient with respect to frequency. It was found that the reflection coefficient is significantly lower at the higher frequencies (above 20 kHz) than at the lower frequencies due to off-specular scattering. It was also found that the variability of the reflection coefficient was significantly higher for the rough sediment than for the smooth sediment.

Item
Incorporation of the Global Positioning System modernization signals into existing smoother-based ephemeris generation processes (2008-05) Harris, Robert B., Ph.D.; Lightsey, E. Glenn
The introduction of M-Code to the GPS signal structure can redefine the accuracy of the broadcast ephemeris.
Existing ephemeris generation systems use dual-frequency observations, obtained by tracking the existing precise codes on the L1 and L2 frequencies. These codes are modulated using Binary Phase Shift Keying (BPSK). The modernization signal M-Code is modulated using Binary Offset Carrier (BOC) modulation. In this study, pseudorange observables derived from tracking M-Code are shown to have greater accuracy than those from the existing precise codes, given equivalent receiver designs and operating conditions. In addition, the error due to specular multipath is derived. These general models of noise and multipath can be applied to any BOC-modulated signals, including Galileo and QZSS. When applied to M-Code, the models predict that the maximum multipath error in the pseudorange is reduced in magnitude by 50% compared to the existing precise codes. However, the range of multipath delays for which M-Code observables exhibit multipath is approximately twice that associated with the existing precise BPSK codes. Existing ephemeris generation processes use the ionosphere-free combination and carrier-phase smoothing of the pseudorange to form smoothed pseudoranges. The smoothed pseudoranges are then input as measurements to an ephemeris filter. The analytic models of multipath error in the pseudorange and carrier-phase observables are applied to predict errors in the smoothed pseudorange. Multipath error, amplified by the ionosphere-free combination, causes a bias in the smoothed pseudorange when parameterized as a function of multipath delay. There are conditions under which the bias is zero-mean, and in those conditions multipath is suppressed. The mechanism behind those conditions is solved and discussed for both BOC and BPSK signal tracking. The solution of carrier-phase multipath for BOC-modulated signals also admits solutions with a special quality not seen in the BPSK solution.
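The smoothing step referenced above is commonly a Hatch filter: blend each noisy code pseudorange with the previous smoothed value propagated forward by the much less noisy carrier-phase change. The ionosphere-free combination then mixes the two frequencies to cancel the first-order ionospheric delay. Below is a minimal sketch; cycle slips, dual-frequency smoothing, and the actual ephemeris filter are omitted:

```python
import numpy as np

def hatch_smooth(code, carrier, window=100):
    """Carrier-smoothed pseudorange (Hatch filter): each epoch averages
    the raw code measurement with the previous smoothed value advanced
    by the carrier-phase range change over the epoch."""
    sm = np.empty_like(code)
    sm[0] = code[0]
    for k in range(1, len(code)):
        w = min(k + 1, window)
        predicted = sm[k - 1] + (carrier[k] - carrier[k - 1])
        sm[k] = code[k] / w + predicted * (w - 1) / w
    return sm

def iono_free(p1, p2, f1=1575.42e6, f2=1227.60e6):
    """First-order ionosphere-free combination of L1/L2 pseudoranges."""
    g = (f1 / f2) ** 2
    return (g * p1 - p2) / (g - 1)
```

On a synthetic range with meter-level code noise and millimeter-level phase noise, the smoothed residual is roughly a tenth of the raw code residual, which is why multipath that survives smoothing (the bias analyzed above) matters so much.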
There are multipath delays for which the carrier-phase multipath is identically zero regardless of the multipath phase. This zero carrier-phase multipath condition may be the most promising feature associated with observables derived from BOC-modulated codes.