Browsing by Subject "Deconvolution"
Now showing 1 - 5 of 5
Item
Algorithms for Fluorescence Lifetime Microscopy and Optical Coherence Tomography Data Analysis: Applications for Diagnosis of Atherosclerosis and Oral Cancer (2014-05-16)
Pande, Paritosh
With significant progress made in the design and instrumentation of optical imaging systems, it is now possible to perform high-resolution tissue imaging in near real-time. The prohibitively large amount of data obtained from such high-speed imaging systems precludes the possibility of manual data analysis by an expert. The paucity of algorithms for automated data analysis has been a major roadblock in both evaluating and harnessing the full potential of optical imaging modalities for diagnostic applications. This consideration forms the central theme of the research presented in this dissertation. Specifically, we investigate the potential of automated analysis of data acquired from a multimodal imaging system that combines fluorescence lifetime imaging (FLIM) with optical coherence tomography (OCT) for the diagnosis of atherosclerosis and oral cancer. FLIM is a fluorescence imaging technique that is capable of providing information about autofluorescent tissue biomolecules. OCT, on the other hand, is a structural imaging modality that exploits the intrinsic reflectivity of tissue samples to provide high-resolution 3-D tomographic images. Since FLIM and OCT provide complementary information about tissue biochemistry and structure, respectively, we hypothesize that the combined information from the multimodal system would increase the sensitivity and specificity of diagnosis of atherosclerosis and oral cancer. The research presented in this dissertation can be divided into two main parts. The first part concerns the development and application of algorithms for providing a quantitative description of FLIM and OCT images. The quantitative FLIM and OCT features obtained in the first part of the research are subsequently used to perform automated tissue diagnosis based on statistical classification models. The results of the research presented in this dissertation show the feasibility of using automated FLIM and OCT data analysis algorithms to perform tissue diagnosis.

Item
Blind equalization of an HF channel for a passive listening system (Texas Tech University, 2006-05)
Casey, Ryan B.; Karp, Tanja; Nutter, Brian; Saed, Mohammed; Iyer, Ram V.
Wireless channels often distort a signal beyond the point of reliable demodulation. To account for this problem, receivers incorporate an equalizer whose objective is to remove the distortion introduced by the medium of transmission. Typically, the equalizer requires knowledge of the transmitted signal to adapt adequately and remove this distortion. This introduces overhead into the signal, effectively reducing the data throughput rate. Blind equalizers do not require this prior knowledge of the transmitted signal, allowing for the removal of information that is already known by the receiver a priori. Most research on blind equalizers uses simulated signals and test channels, and little to no attention is paid to using real-world transmissions in the analysis of the algorithms. While modeling is a crucial step in the development of an efficient algorithm, it is not the only step. Accurate models accounting for all the disturbances and noise in the real world are too complex to be efficient. One particular case of this is the HF channel. The high frequency (HF) channel is particularly difficult to blindly equalize because of its noisy environment and its long path delays. This work looks into the use of blind equalizers for HF signals and focuses much of the analysis on real-world HF transmissions.
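The abstract does not spell out which blind algorithms are analyzed; the constant modulus algorithm (CMA) is one canonical example of the kind of blind equalizer described here, adapting filter taps from the statistics of the received signal alone, with no training sequence. Below is a minimal illustrative sketch in Python/NumPy, using a made-up QPSK source and a hypothetical three-tap channel; it is not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated QPSK transmission through a hypothetical dispersive channel.
n_sym = 5000
symbols = ((rng.integers(0, 2, n_sym) * 2 - 1)
           + 1j * (rng.integers(0, 2, n_sym) * 2 - 1)) / np.sqrt(2)
channel = np.array([1.0, 0.4 + 0.2j, -0.15])      # made-up multipath taps
received = np.convolve(symbols, channel)[:n_sym]
received += 0.01 * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

# Constant modulus algorithm: adapt an FIR equalizer so the output magnitude
# approaches the constant modulus R, using no knowledge of the sent symbols.
n_taps = 11
w = np.zeros(n_taps, dtype=complex)
w[n_taps // 2] = 1.0                               # center-spike initialization
R = np.mean(np.abs(symbols) ** 4) / np.mean(np.abs(symbols) ** 2)  # = 1 for QPSK
mu = 1e-3                                          # small step size for stability

for n in range(n_taps, n_sym):
    x = received[n - n_taps:n][::-1]               # regressor, most recent sample first
    y = np.dot(w, x)                               # equalizer output
    e = y * (np.abs(y) ** 2 - R)                   # CMA error term
    w -= mu * e * np.conj(x)                       # stochastic-gradient tap update

# Residual dispersion of the equalized output (smaller is better).
eq_out = np.array([np.dot(w, received[n - n_taps:n][::-1])
                   for n in range(n_taps, n_sym)])
print("residual dispersion:", np.mean((np.abs(eq_out[-1000:]) ** 2 - R) ** 2))
```

The center-spike initialization and small step size are conventional choices for keeping the blind update from diverging, which matters all the more on a noisy, long-delay channel like HF.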
Item
Deconvolution of variable rate reservoir performance data using B-splines (Texas A&M University, 2007-04-25)
Ilk, Dilhan
This work presents the development, validation, and application of a novel deconvolution method based on B-splines for analyzing variable-rate reservoir performance data. Variable-rate deconvolution is a mathematically unstable problem which has been under investigation by many researchers over the last 35 years. While many deconvolution methods have been developed, few of these methods perform well in practice, and the importance of variable-rate deconvolution is increasing due to applications of permanent downhole gauges and large-scale processing/analysis of production data. Under these circumstances, our objective is to create a robust and practical tool which can tolerate reasonable variability and relatively large errors in rate and pressure data without generating instability in the deconvolution process. We propose representing the derivative of the unknown unit-rate drawdown pressure as a weighted sum of B-splines (with logarithmically distributed knots). We then apply the convolution theorem in the Laplace domain with the input rate and obtain the sensitivities of the pressure response with respect to individual B-splines after numerical inversion of the Laplace transform. The sensitivity matrix is then used in a regularized least-squares procedure to obtain the unknown coefficients of the B-spline representation of the unit-rate response or the well testing pressure derivative function. We have also implemented a physically sound regularization scheme in our deconvolution procedure for handling higher levels of noise and systematic errors. We validate our method with synthetic examples generated with and without errors. The new method can recover the unit-rate drawdown pressure response and its derivative to a considerable extent, even when high levels of noise are present in both the rate and pressure observations. We also demonstrate the use of regularization, provide examples of under- and over-regularization, and discuss procedures for ensuring proper regularization. Upon validation, we then demonstrate our deconvolution method using a variety of field cases. Ultimately, the results of our new variable-rate deconvolution technique suggest that it has broad applicability in pressure transient/production data analysis. The goal of this thesis is to demonstrate that the combined approach of B-splines, Laplace domain convolution, least-squares error reduction, and regularization is innovative and robust; therefore, the proposed technique has potential utility in the analysis and interpretation of reservoir performance data.
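A minimal Python sketch of the core idea described above, under simplifying assumptions: the convolution is performed by time-domain quadrature rather than the thesis's Laplace-domain treatment and numerical inversion, the rate and pressure series are synthetic placeholders, and plain Tikhonov regularization stands in for the thesis's physically motivated scheme.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical sampled data: a variable production rate q(t); the pressure
# drop dp(t) is synthesized below just to exercise the machinery.
t = np.linspace(0.01, 100.0, 400)                  # time, arbitrary units
q = 1.0 + 0.3 * np.sin(0.05 * t)                   # made-up variable rate

# Second-order B-spline basis on logarithmically distributed knots,
# mirroring the log-spaced knots used for the unknown unit-rate
# response derivative g(t).
knots = np.logspace(-2, 2, 12)
degree = 2
basis = [BSpline.basis_element(knots[i:i + degree + 2], extrapolate=False)
         for i in range(len(knots) - degree - 1)]

def eval_basis(b, x):
    """Evaluate one B-spline basis element, zero outside its support."""
    return np.nan_to_num(b(x))

# Sensitivity matrix: column j is the convolution of the rate history with
# basis element j (time-domain quadrature standing in for the Laplace-domain
# convolution of the thesis).
dt = t[1] - t[0]
X = np.column_stack([np.convolve(q, eval_basis(b, t))[:len(t)] * dt
                     for b in basis])

# Synthetic "observed" pressure drop with known coefficients (all ones).
dp = X @ np.ones(len(basis))

# Tikhonov-regularized least squares: minimize ||X c - dp||^2 + lam ||c||^2
# by stacking the regularization block beneath the sensitivity matrix.
lam = 1e-3
A = np.vstack([X, np.sqrt(lam) * np.eye(len(basis))])
rhs = np.concatenate([dp, np.zeros(len(basis))])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Recovered unit-rate response derivative as a weighted sum of B-splines.
g = sum(c * eval_basis(b, t) for c, b in zip(coef, basis))
print("recovered coefficients:", np.round(coef, 3))
```

With clean synthetic data the recovered coefficients come back close to the true values; the regularization weight lam is the knob the abstract's under- and over-regularization examples would explore.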
Item
Explicit deconvolution of wellbore storage distorted well test data (Texas A&M University, 2007-04-25)
Bahabanian, Olivier
The analysis/interpretation of wellbore storage distorted pressure transient test data remains one of the most significant challenges in well test analysis. Deconvolution (i.e., the "conversion" of a variable-rate distorted pressure profile into the pressure profile for an equivalent constant-rate production sequence) has been in limited use as a "conversion" mechanism for the last 25 years. Unfortunately, standard deconvolution techniques require accurate measurements of flow rate and pressure at downhole (or sandface) conditions. While accurate pressure measurements are commonplace, the measurement of sandface flow rates is rare, essentially non-existent in practice. As such, the "deconvolution" of wellbore storage distorted pressure test data is problematic. In theory, this process is possible, but in practice, without accurate measurements of flow rates, this process cannot be employed. In this work we provide explicit (direct) deconvolution of wellbore storage distorted pressure test data using only those pressure data. The underlying equations associated with each deconvolution scheme are derived in the Appendices and implemented via a computational module. The value of this work is that we provide explicit tools for the analysis of wellbore storage distorted pressure data; specifically, we utilize the following techniques:
* Russell method (1966) (very approximate approach),
* "Beta" deconvolution (1950s and 1980s),
* "Material Balance" deconvolution (1990s).
Each method has been validated using both synthetic data and literature field cases, and each method should be considered valid for practical applications. Our primary technical contribution in this work is the adaptation of various deconvolution methods for the explicit analysis of an arbitrary set of pressure transient test data which are distorted by wellbore storage, without the requirement of having measured sandface flow rates.
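Of the three techniques listed, material balance deconvolution is the easiest to illustrate: the rate-normalized pressure drop is plotted against material balance time (cumulative production divided by current rate) to approximate the equivalent constant-rate drawdown response. The Python sketch below uses made-up data and shows only this basic idea, not the thesis's computational module.

```python
import numpy as np

# Hypothetical variable-rate test data (illustrative values only).
t = np.linspace(0.1, 50.0, 200)              # elapsed time, days
q = 100.0 * np.exp(-0.01 * t)                # declining flow rate, STB/D
dp = 5.0 * np.log(t + 1.0) * q / 100.0       # made-up pressure drop, psi

# Material balance deconvolution, basic form: plot the rate-normalized
# pressure dp/q against material balance time t_mb = Np / q, where Np is
# cumulative production, to approximate the constant-rate response.
Np = np.concatenate([[0.0],
                     np.cumsum(0.5 * (q[1:] + q[:-1]) * np.diff(t))])  # trapezoidal Np, STB
t_mb = Np / q                                # material balance time, days
dp_norm = dp / q                             # rate-normalized pressure, psi/(STB/D)

for tm, dpn in list(zip(t_mb, dp_norm))[::40]:
    print(f"t_mb = {tm:8.2f} d   dp/q = {dpn:8.4f} psi/(STB/D)")
```

The appeal of this family of methods, as the abstract stresses, is that only pressure and surface-rate bookkeeping are needed; no sandface rate measurement is required.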
Item
Investigation of the upper mantle beneath the Hawaiian Island chain using PP-precursors (2013-08)
Rogers, Kenneth D.; Gurrola, Harold; Nagihara, Seiichi; Karlsson, Haraldur R.
The Hawaiian hotspot is of great geological significance, but data collection in the area can be challenging due to the water depth around the islands. By using PP bounce point data, with receivers in the mainland United States, we analyze the area with a greater wealth of data than is possible using data collected locally. The increased amount of data, in addition to new beamforming and iterative deconvolution techniques, has increased the frequency content in PP precursor data from around the traditional 0.01 Hz to above 5 Hz, enabling us to image shallower depths and thinner layers than previously possible. Profiles of stacked PP precursors across the island chain were produced along perpendicular lines. Data were stacked in bins 1˚ along the profiles and 4˚ perpendicular to the profile (parallel to the island chain). An additional profile was produced some 10˚ away from the island chain as a control group. The control group shows pairs of high- and low-velocity horizons in the mantle. These may be the base and top of shear zones. These horizons are strongly disrupted near the Hawaiian Island chain. In the lithosphere, low-velocity zones are more abundant to the south of the island chain but are less common on the north side. If these indicate melt, the low-velocity zones may be blocked by the islands, which are sinking into the lithosphere. As this study and other recent work imply that the hotspot is more active to the southwest of the island chain than to the north, the island chain itself may be causing the crust to warp downward into the mantle and could act as a dam to melt migrating to the north. Furthermore, we believe that the islands' weight downwarping the lithosphere causes a crack to propagate out past the youngest island, which also acts as a dam that keeps most of the melt to the southwest of the island chain.
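The iterative deconvolution mentioned above is commonly implemented in seismology as time-domain spike-train estimation (in the style of Ligorría and Ammon's receiver-function method): repeatedly pick the lag where the source wavelet best matches the residual, add a spike there, and subtract the prediction. The Python sketch below runs on synthetic data and is a generic illustration of that technique, not the authors' processing code.

```python
import numpy as np

def iterative_deconvolution(d, s, n_iters=20):
    """Time-domain iterative deconvolution: build a spike train g such that
    convolving g with the source wavelet s reproduces the data d."""
    g = np.zeros(len(d))                      # estimated spike train
    residual = d.copy()
    s_energy = np.sum(s ** 2)
    for _ in range(n_iters):
        # Cross-correlate residual with wavelet; keep non-negative lags.
        xc = np.correlate(residual, s, mode="full")[len(s) - 1:]
        lag = int(np.argmax(np.abs(xc)))      # best-matching lag
        g[lag] += xc[lag] / s_energy          # least-squares spike amplitude
        pred = np.convolve(g, s)[:len(d)]     # data predicted by current spikes
        residual = d - pred                   # remove what is explained so far
    return g, residual

# Synthetic demo: a two-arrival "Green's function" convolved with a wavelet.
rng = np.random.default_rng(1)
wavelet = np.exp(-0.5 * ((np.arange(40) - 20) / 4.0) ** 2)   # Gaussian wavelet
true_g = np.zeros(300)
true_g[60], true_g[150] = 1.0, -0.4                          # two arrivals
data = np.convolve(true_g, wavelet)[:300] + 0.01 * rng.standard_normal(300)

est_g, res = iterative_deconvolution(data, wavelet)
print("largest recovered spikes at samples:", np.argsort(np.abs(est_g))[-2:])
```

Because each iteration commits to the single best spike, the method stays stable in noise, which is one reason deconvolution of this kind can push the usable frequency content of precursor stacks upward.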