Browsing by Subject "Calibration"
Now showing 1 - 17 of 17
Item: An Energy Analysis Of A Large, Multipurpose Educational Building In A Hot Climate (2012-02-14). Kamranzadeh, Vahideh.

In this project, steady-state building loads for Constant Volume Terminal Reheat (CVTR), Dual Duct Constant Volume (DDCV), and Dual Duct Variable Air Volume (DDVAV) systems in the Zachry Engineering Building were modeled. First, the thermal resistance values of the building structure were calculated. After applying some assumptions, the building characteristics were determined and the building loads were calculated using the diversified loads calculation method. Using six months of daily data for the Zachry building, the inputs to the CVTR, DDCV, and DDVAV Microsoft Excel code were prepared for the simulation. The air handling units for the Zachry building are Dual Duct Variable Air Volume (DDVAV) systems. The calibration procedure compares calibration signatures with characteristic signatures to determine which input variables need to be changed to achieve proper calibration. Calibration signatures are the difference between measured and simulated energy consumption as a function of temperature. Characteristic signatures are the changes in simulated energy consumption, as a function of temperature, obtained by changing the value of an input variable of the system. The base simulated model of the DDVAV system was adjusted according to the characteristic signatures of the building to bring it as close as possible to the measured data. This simulation method for calibration could be used for energy audits, improving energy efficiency, and fault detection. In the base model of the DDVAV system, without any changes to the inputs, the chilled water consumption had a Root Mean Square Error (RMSE) of 56.705577 MMBtu/day and a Mean Bias Error (MBE) of 45.763256 MMBtu/day, while the hot water consumption had an RMSE of 1.9072574 MMBtu/day and an MBE of 45.763256 MMBtu/day. In the calibration process, system parameters such as zone temperature, cooling coil temperature, minimum supply air, and minimum outdoor air were changed. The decisions for varying the parameters were based on the characteristic signatures provided in the project. After the changes to the system parameters were applied, the RMSE and MBE for both hot water and chilled water consumption were significantly reduced: chilled water consumption had an RMSE of 12.749868 MMBtu/day and an MBE of 3.423188 MMBtu/day, and hot water consumption had an RMSE of 1.6790 MMBtu/day and an MBE of 0.12513 MMBtu/day.
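The error metrics and calibration signatures described in this abstract reduce to a few lines of array arithmetic. The following is a minimal sketch, assuming hypothetical NumPy arrays of measured and simulated daily consumption and daily average outdoor temperature; the names are illustrative, not taken from the project's Excel code:

```python
import numpy as np

def calibration_metrics(measured, simulated):
    """RMSE and MBE of simulated vs. measured daily consumption (MMBtu/day)."""
    residual = simulated - measured
    rmse = np.sqrt(np.mean(residual ** 2))
    mbe = np.mean(residual)
    return rmse, mbe

def calibration_signature(temperature, measured, simulated, bins):
    """Mean (measured - simulated) consumption in each outdoor-temperature bin."""
    diff = measured - simulated
    idx = np.digitize(temperature, bins)
    return np.array([diff[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])
```

Plotting the signature against the bin-center temperatures, and comparing its shape with the characteristic signatures generated by perturbing one input at a time, is what guides the choice of which parameter to adjust next.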
Item: Application of digital calibration technique on global bidirectional interconnects in integrated circuit (2014-12). Saetow, Anuwat; Pan, David Z.

The trend toward integrating more and more processing cores and memory cores into a single module has increased the overall size of chips to the point where global interconnects between sub-units are becoming harder and harder to route while meeting timing rules and requirements. The traditional use of uniform, unidirectional, point-to-point busses for routing interconnects may no longer be optimal for designs in which the metal layers and chip area available for interconnects are limited. A more flexible routing methodology is needed, and it can be achieved using routing and calibration techniques currently implemented in board-level design. This report proposes the use of non-uniform, bidirectional, and possibly multi-point-load global interconnects within a single chip module, using on-chip calibration techniques to compensate for less restrictive wiring rules in certain chip designs. This report also applies a widely used digital calibration technique and simulates its implementation on a field-programmable gate array.

Item: Calibrated Continuous-Time Sigma-Delta Modulators (2010-07-14). Lu, Cho-Ying.

To provide greater information mobility, many wireless communication systems have recently been developed, such as WCDMA and EDGE in cellular systems and Bluetooth and WiMAX in communication networks. Recent efforts have been made to build the all-in-one next-generation device, which integrates a large number of wireless services into a single receiving path in order to raise the competitiveness of the device. Among the receiver architectures, the high-IF receiver presents several unique properties for the next-generation receiver by digitizing the signal at an intermediate frequency of around a few hundred MHz. In this architecture, the modulation/demodulation schemes, protocols, equalization, etc., are all determined in a software platform that runs on a digital signal processor (DSP) or FPGA. The specifications for most of the front-end building blocks are relaxed, except for the analog-to-digital converter (ADC): the requirements of large bandwidth, high operating frequency, and high resolution make the design of the ADC very challenging. Solving this bottleneck of the high-IF receiver architecture is a major focus of many ongoing research efforts. In this work, a 6th-order bandpass continuous-time sigma-delta ADC with a measured 68.4 dB SNDR over a 10 MHz bandwidth, suitable for video applications, is proposed. Tuned at 200 MHz, the fs/4 architecture employs an 800 MHz clock frequency. By making use of a unique software-based calibration scheme, together with the tuning properties of the bandpass filters developed under the umbrella of this project, the ADC performance is optimized automatically to fulfill all requirements of the high-IF architecture. In a separate project, other critical design issues for continuous-time sigma-delta ADCs are addressed, especially issues related to unit current source mismatches in multi-level DACs as well as excess loop delays that may cause loop instability; the reported solutions are revisited to find more efficient architectures. The aforementioned techniques are used in the design of a 25 MHz bandwidth lowpass continuous-time sigma-delta modulator with a time-domain two-step 3-bit quantizer and DAC for WiMAX applications. The prototype employs a level-to-pulse-width-modulation (PWM) converter followed by a single-level DAC in the feedback path to translate the usual digital codes into PWM signals with the proposed pulse arrangement, thereby preventing the non-linearity caused by current source mismatch in multi-level DACs. The jitter behavior and timing mismatch of the proposed time-based methods are fully analyzed. The measurement results of a chip prototype achieving 67.7 dB peak SNDR and 78 dB SFDR in a 25 MHz bandwidth demonstrate the design concepts and the effectiveness of time-based quantization and feedback. Both continuous-time sigma-delta ADCs were fabricated in mainstream 0.18 µm CMOS technologies, which are the most popular in today's consumer electronics industry.
Item: Calibration study on a prototype of the Muon Telescope Detector at STAR of RHIC (2011-12). Li, Liang, master of arts in physics; Hoffmann, Gerald W.; Keto, John.

A prototype of the Muon Telescope Detector (MTD) was installed at STAR (Solenoidal Tracker at RHIC) during run year 2007. While cosmic and beam tests showed a ~60-70 ps timing resolution for the MTD, the actual performance in Au + Au collisions at STAR was found to be ~300 ps. In run year 10, STAR implemented a new electronics system for the MTD and a cosmic ray trigger to study the performance of its several subsystems. Using the cosmic ray data, this study shows that the timing resolution of the MTD can reach 99 ps after a full calibration.

Item: Designs and calibration of delay-line based ADCs (2015-12). Lee, Hsun-Cheng; Abraham, Jacob A.; Gharpurey, Ranjit; Orshansky, Michael; Sun, Nan; Zhang, Chaoming.

Delay-line ADCs become more and more attractive as technology scales to smaller dimensions and lower voltages. Time-domain resolution can be increased with high-speed delay cells, and a GHz sampling rate can easily be achieved at low power. However, linearity, which has always been an issue, becomes a problem with longer delay lines: the resolutions of reported delay-line ADCs are hardly more than 4 bits at sampling rates of hundreds of MHz. This dissertation therefore addresses the linearity issue of delay-line ADCs. First, a novel 11-bit hybrid ADC using flash and delay-line architectures, in which a 4-bit flash ADC is followed by a 7-bit delay-line ADC, is proposed. In this structure, the noise and error of the second-stage delay-line ADC are attenuated at the hybrid ADC output, so that the overall performance is not limited by the poor linearity of the delay-line ADC. The achieved figure of merit (FOM) of 33.8 fJ/conversion-step is competitive with state-of-the-art ADCs. Furthermore, the proposed ADC inherits accuracy from the flash ADC and high speed from the delay-line ADC; these inherited advantages strongly support its scalability toward better performance at low power in further-scaled fabrication processes. Second, in order to remove the harmonic distortion of the delay-line ADC, we present a technique that extends harmonic distortion correction (HDC) to digitally calibrate a delay-line ADC. In our simulation results, digital calibration improves the SNDR from 25.6 dB to 42.5 dB by averaging 2^27 sample points, which corresponds to a 0.86 second calibration time. Last, a multiple-pass delay-line ADC is proposed to improve overall ADC performance in terms of speed and resolution. In this structure, a multiple-pass delay cell can be triggered early by the previous cell to increase speed, and phase interpolation is used to improve the effective number of bits. The design is implemented and simulated in a commercial 40 nm process technology. At a 500 MHz sampling rate, the multiple-pass delay-line ADC achieves an SNDR of 37 dB and consumes 4.2 mW, which is competitive with other reported ADCs.
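Harmonic distortion correction of the kind sketched in this abstract starts by estimating the converter's static nonlinearity from a sine-wave test, with long averaging (the 2^27 points mentioned above) suppressing noise. The sketch below is a generic illustration of that estimation step, not the dissertation's actual HDC algorithm: it fits the averaged output codes to a polynomial in the best-fit input sine, and the digital correction would then apply the inverse of the fitted polynomial.

```python
import numpy as np

def fit_distortion_polynomial(codes, freq, fs, order=3):
    """Fit averaged ADC output codes to a polynomial in the ideal input sine.

    codes: time-averaged output record from a pure sine test at known
    frequency `freq` (Hz), sampled at `fs` (Hz).  Returns np.poly1d `p`
    such that codes ~ p(s), where s is the reconstructed undistorted input.
    """
    t = np.arange(len(codes)) / fs
    # Three-parameter sine fit at the known test frequency (IEEE 1057 style)
    A = np.column_stack([np.cos(2 * np.pi * freq * t),
                         np.sin(2 * np.pi * freq * t),
                         np.ones_like(t)])
    a, b, c = np.linalg.lstsq(A, codes, rcond=None)[0]
    s = a * A[:, 0] + b * A[:, 1]          # undistorted input estimate
    return np.poly1d(np.polyfit(s, codes - c, order))
```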
Item: Development of a method for model calibration with non-normal data (2002-05). Wang, Dongyuan; Gilbert, Robert B. (Robert Bruce), 1965-.

Item: Estimating the effects of lens distortion on serial section electron microscopy images (2012-08). Lindsey, Laurence Francis; Harris, Kristen M.; Bovik, Alan C. (Alan Conrad), 1958-.

Section-to-section alignment is a preliminary step in the creation of three-dimensional reconstructions from serial section electron micrographs. Typically, the micrograph of one section is aligned to its neighbors by analyzing a set of fiducial points to calculate an appropriate polynomial transform. This transform is then used to map all of the pixels of the micrograph into alignment. Such transforms are usually linear or piecewise linear in order to limit the accumulation of small errors, which may occur with the use of higher-order approximations. Linear alignment, however, is unable to correct common higher-order geometric distortions, such as lens distortion in the case of TEM and scan distortion in the case of transmission-mode SEM. Here, we attempt to show that standard calibration replicas may be used to calculate a high-order distortion model despite the irregularities that are often present in them. We show that SEM scan distortion has much less of an effect than TEM lens distortion; however, the effect of TEM distortion on prior geometric measurements made over three-dimensional reconstructions of dendrites, axons, and synapses and their subcellular compartments is negligible.
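The polynomial transforms this abstract refers to are straightforward to fit by least squares from matched fiducial points. The following is a minimal sketch of that kind of fit, not the dissertation's actual alignment pipeline; with order=1 it reproduces the usual affine alignment, while order>=2 can absorb smooth lens or scan distortion:

```python
import numpy as np

def fit_polynomial_transform(src, dst, order=2):
    """Least-squares 2-D polynomial transform mapping src fiducials onto dst.

    src, dst: (N, 2) arrays of matched fiducial coordinates.
    Returns a function mapping an (M, 2) array of points into alignment.
    """
    def design(pts):
        px, py = pts[:, 0], pts[:, 1]
        return np.column_stack([px**i * py**j
                                for i in range(order + 1)
                                for j in range(order + 1 - i)])
    A = design(src)                                  # monomial design matrix
    cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return lambda pts: np.column_stack([design(pts) @ cx, design(pts) @ cy])
```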
Item: Experimental investigation of film cooling and thermal barrier coatings on a gas turbine vane with conjugate heat transfer effects (2013-05). Kistenmacher, David Alan; Bogard, David G.

In the United States, natural gas turbine generators account for approximately 7% of the total primary energy consumed. A one percent increase in gas turbine efficiency could result in savings of approximately 30 million dollars for operators and, subsequently, electricity end-users. The efficiency of a gas turbine engine is tied directly to the temperature at which the products of combustion enter the first-stage, high-pressure turbine. The maximum operating temperature of the turbine components' materials is the major factor limiting increases in turbine inlet temperature; in fact, thanks to advanced vane cooling techniques, current turbine inlet temperatures regularly exceed the melting temperature of the turbine vanes. These cooling techniques include vane surface film cooling, internal vane cooling, and the addition of a thermal barrier coating (TBC) to the exterior of the turbine vane. Typically, the performance of vane cooling techniques is evaluated using the adiabatic film effectiveness. However, the adiabatic film effectiveness, by definition, does not consider conjugate heat transfer effects, and evaluating the performance of internal vane cooling and a TBC requires considering them. The goal of this study was to provide insight into the conjugate heat transfer behavior of actual turbine vanes and various vane cooling techniques through experimental and analytical modeling, in the pursuit of higher turbine inlet temperatures and, with them, higher overall turbine efficiencies. The primary focus of this study was to experimentally characterize the combined effects of a TBC and film cooling. Vane model experiments were performed using a 10x-scaled first-stage inlet guide vane model that was designed using the Matched Biot Method to properly scale both the geometric and thermal properties of an actual turbine vane. Two different TBC thicknesses were evaluated in this study. Along with the TBCs, six different film cooling configurations were evaluated: pressure-side round holes with a showerhead, round holes only, craters, a novel trench design called the modified trench, an ideal trench, and a realistic trench that takes manufacturing capabilities into account. These film cooling geometries were created within the TBC layer. Each vane configuration was evaluated by monitoring a variety of temperatures, including the temperature of the exterior vane wall and of the exterior surface of the TBC. This study found that the presence of a TBC decreased the sensitivity of the TBC-vane wall interface temperature to changes in film coolant flow rates and film cooling geometry; research into improved film cooling geometries may therefore not be valuable when a TBC is incorporated. This study also developed an analytical model to predict the performance of TBCs as a design tool. The analytical prediction model agreed reasonably with experimental data when using baseline data from an experiment with another TBC, but it performed poorly when predicting a TBC's performance from baseline data collected in an experiment without a TBC.
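For reference, the adiabatic film effectiveness mentioned in this abstract is conventionally defined as (this is the standard textbook definition, not a result of this study):

```latex
\eta_{aw} = \frac{T_\infty - T_{aw}}{T_\infty - T_c}
```

where T_infinity is the mainstream gas temperature, T_aw the adiabatic wall temperature, and T_c the coolant temperature at the hole exit; the effectiveness is 1 where the film holds the surface at coolant temperature and 0 where the film has no effect. Because the definition assumes an adiabatic wall, it cannot by itself capture the conjugate (conducting-wall) effects this study set out to measure.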
Item: History matching by simultaneous calibration of flow functions (2007). Barrera, Alvaro Enrique, 1974-; Srinivasan, Sanjay.

Reliable predictions of reservoir flow response corresponding to various recovery schemes require a realistic geological model of heterogeneity and an understanding of its relationship with the flow properties. This dissertation presents results from the implementation of a novel approach for integrating dynamic data into reservoir models, one that combines stochastic techniques for the simultaneous calibration of geological models and of the multiphase flow functions associated with pore-level spatial representations of porous media. In this probabilistic approach, a stochastic simulator is used to model the spatial distribution of a discrete number of rock types identified by rock/connectivity indexes (CIs). Each CI corresponds to a particular pore network structure with a characteristic connectivity. Primary drainage and imbibition displacement processes are modeled on the 3-D pore networks to generate multiphase flow functions corresponding to networks with different CIs. During history matching, the stochastic simulator perturbs the spatial distribution of the CIs to match the simulated pressures and flow rates to historic data, while preserving the geological model of heterogeneity. This goal is accomplished by applying a probabilistic approach for the gradual deformation of the spatial distribution of rock types characterized by different CIs. Perturbation of the CIs in turn updates all the flow functions, including the effective permeability, the porosity of the rock, the relative permeabilities, and the capillary pressure. The convergence rate of the proposed method is comparable to that of other current techniques, with the distinction of enabling consistent updates to all the flow functions. The resultant models are geologically consistent in terms of all the flow functions, and consequently, predictions obtained using these models are likely to be more accurate. To compare and contrast this comprehensive approach to reservoir modeling against approaches that rely on modeling and perturbing only the permeability field, a realistic case study implementing both approaches is presented. Comparison is made with the history-matched model obtained by perturbing permeability alone. It is argued that reliable predictions of future production can only be made when the entire suite of flow functions is consistent with the real reservoir.
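The gradual deformation step mentioned in this abstract is easiest to see in its classical continuous form, in which two independent Gaussian realizations are blended through a single parameter while the geostatistical model is preserved; the dissertation applies an analogous probabilistic perturbation to the categorical CI field. A minimal sketch of the continuous version, with illustrative names and a placeholder objective standing in for the flow simulator:

```python
import numpy as np

def gradual_deformation(z1, z2, theta):
    """Blend two independent standard-Gaussian fields into a new realization.

    For any theta, z1*cos(theta) + z2*sin(theta) is again standard Gaussian
    with the same covariance, so an optimizer can search over the single
    scalar theta without violating the prior geostatistical model.
    """
    return z1 * np.cos(theta) + z2 * np.sin(theta)

rng = np.random.default_rng(0)
z1, z2 = rng.standard_normal((2, 64, 64))
misfit = lambda z: np.sum(z ** 2)            # placeholder history-match misfit
thetas = np.linspace(-np.pi, np.pi, 73)
best = min(thetas, key=lambda t: misfit(gradual_deformation(z1, z2, t)))
```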
Item: Improved inhalation therapies of brittle powders (2013-12). Carvalho, Simone Raffa; Williams, Robert O., 1956-.

Advancements in pulmonary drug delivery technologies have improved the use of dry powder inhalation therapy to treat respiratory and systemic diseases. Despite remarkable improvements in the development of dry powder inhaler devices (DPIs) and formulations in the last few years, an optimized DPI system has yet to be developed. In this work, we hypothesize that Thin Film Freezing (TFF) is a suitable technology for improving inhalation therapies to treat lung and systemic malignancies, owing to its ability to produce brittle powders with optimal aerodynamic properties. We also developed a performance verification test (PVT) for the Next Generation Cascade Impactor (NGI), one of the most important in vitro characterization methods for inhalation testing. In the first study, we used TFF technology to produce amorphous, brittle particles of rapamycin and compared their in vivo behavior, via pharmacokinetic profiles, to that of the crystalline counterpart when delivered to the lungs of rats via inhalation. TFF rapamycin showed higher in vivo systemic bioavailability than the crystalline formulation. Subsequently, we investigated the use of TFF technology to produce a triple fixed-dose therapy using formoterol fumarate, tiotropium bromide, and budesonide as the therapeutic drugs, examining the resulting powder properties and in vitro aerosol performance for single and combination therapy. The brittle TFF powders presented properties superior to those of a physical mixture of micronized crystalline powders, such as excellent particle distribution homogeneity after in vitro aerosolization. Lastly, we developed a PVT for the NGI, potentially applicable to other cascade impactors, by investigating the use of a standardized pressurized metered dose inhaler (pMDI) with the NGI. Two standardized formulations were developed; when analyzed for repeatability and robustness, they demonstrated no significant differences in plate deposition using a single NGI apparatus. Variable conditions were then introduced to the NGI to mimic operator and equipment failure, and these were found to significantly alter the deposition patterns of the standardized formulations, suggesting that their use as a PVT could be valuable and that further investigation is warranted.

Item: Improving college students' self-knowledge through engagement in a learning frameworks course (2016-05). Stano, Nancy Kathleen; Schallert, Diane L.; Weinstein, Claire E.; Acee, Taylor W.; Cawthon, Stephanie; Whittaker, Tiffany.

This study tested hypotheses about the accuracy of students' strategic learning self-assessments using a sample of students enrolled in an undergraduate learning frameworks course at a highly competitive research institution. Previous studies demonstrated that learning frameworks courses significantly improve grade point averages, semester-to-semester retention rates, and graduation rates (Weinstein et al., 1997; Weinstein, 1994). Less is known, however, about the changes that happen during the semester. Researchers have found that students tend to overestimate their academic abilities (Miller & Geraci, 2011), but that improving participants' skill levels increases their ability to recognize the limitations of their abilities (Kruger & Dunning, 2009). This study built on the existing learning frameworks and calibration literatures and addressed the following research questions: Does students' calibration accuracy improve from the beginning to the end of a semester-long strategic learning course (a type of learning frameworks course)? Does generation status influence calibration? What is the relationship between an individual's theory of intelligence and their strategic learning calibration? And is there a relationship between accurate self-assessment and demographic factors such as family income and ethnicity? The methods used in this study included self-assessments and objective assessments of strategic learning for 10 learning factors known to impact student success. Based on the Model of Strategic Learning (Weinstein, Acee, Jung, & Dearman, 2009), these 10 factors were assessed with the Learning and Study Strategies Inventory, 2nd Edition (LASSI) (Weinstein & Palmer, 2002). I used mixed ANOVA and regression analyses to identify how accurate students were at the beginning and at the end of the semester, whether the difference was significant, and whether other factors (a student's theory of intelligence, parental education level, family income, and ethnicity) were related to the accuracy of these self-assessments. I was particularly interested in the extent to which the least strategic students became more accurate in their self-assessments. Overall, three key findings emerged from the current study: 1) students' initial self-assessments were inaccurate and, for the most part, students overestimated their actual strategic learning capabilities; 2) self-assessments are amenable to change, and accuracy can improve within a learning frameworks course, even among the least strategic learners in this sample; and 3) parental education level was associated with the actual level of strategic learning for some factors at the beginning of the semester, but by the end of the semester it was no longer a significant predictor. The relationships between the accuracy of students' self-assessments, selected personal demographic factors (income and ethnicity), and their theory of intelligence were mixed.
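Calibration accuracy of the kind studied here is typically scored by differencing a self-assessment against the matching objective measure, with the signed bias capturing over- or underestimation. A minimal sketch, assuming hypothetical self-rating and LASSI-style score arrays on a common 0-100 scale (not the study's actual data):

```python
import numpy as np

def calibration_bias(self_assessed, objective):
    """Signed calibration bias per student: > 0 means overestimation."""
    return np.asarray(self_assessed, float) - np.asarray(objective, float)

# Pre/post comparison: calibration improves if mean |bias| shrinks.
pre = calibration_bias([85, 70, 90], [60, 72, 80])
post = calibration_bias([72, 71, 84], [68, 73, 82])
print(np.abs(pre).mean(), np.abs(post).mean())  # ~12.3 pre, ~2.7 post
```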
Item: Intercomparison of instrumentation systems for verification of ¹²⁵I brachytherapy source strength for use in radioactive seed localization procedures (2015-05). Metyko, John Patrick; Landsberger, Sheldon; Erwin, William D.

Two different radiation detection instruments, both commonly found in nuclear medicine clinics, were investigated for potential use in ¹²⁵I brachytherapy seed source strength verification. The goal of this investigation was to determine whether either or both of these instruments could replace the air-communicating well-type ionization chamber (the standard source strength verification instrument) when the ¹²⁵I seed is used for radioactive seed localization procedures rather than brachytherapy. In radioactive seed localization, the ¹²⁵I seed merely localizes the tissue of interest and does not deliver a therapeutic dose to the patient. The seeds are inserted into nonpalpable lesions, which are then removed for biopsy within 5 days. Dose calculations and patient modeling are not performed, so stringent source strength accuracy tolerances are not necessary. The accuracy, precision, and reproducibility of an activity calibrator and an ionization chamber survey meter were assessed and compared to regulatory requirements.

Item: Radiometric calibration of high resolution UAVSAR data over hilly, forested terrain (2010-12). Riel, Bryan Valmote; Buckley, Sean M.; Simard, Marc.

SAR backscatter data contain both geometric and radiometric distortions due to the underlying topography and the radar viewing geometry. Applications that use SAR backscatter data to derive scientific products (e.g., above-ground biomass) therefore require accurate absolute radiometric calibration. The calibration process involves estimating the local radar scattering area from knowledge of the imaged terrain, which is often obtained from DEMs. High-resolution UAVSAR data over a New Hampshire boreal forest test site were radiometrically calibrated using a low-resolution SRTM DEM, and different calibration methods were tested and compared. Heteromorphic methods utilizing DEM integration model the scattering area better than homomorphic methods based on the local incidence or projection angle, with a resultant backscatter calibration difference of less than 0.5 dB. Additionally, the impact of low DEM resolution on the calibration was investigated through a Fourier analysis of different topographic classes. Power spectra of high-resolution airborne lidar DEMs were used to characterize the topography of steep, moderate, and flat terrain; the errors a given low-resolution DEM incurs for a particular topographic class could then be quantified by comparing its power spectrum with that of the lidar. These errors were validated by comparing DEM slopes derived from the SRTM and lidar DEMs. The impact of radiometric calibration on the biomass retrieval capabilities of UAVSAR data was investigated by fitting second-order polynomials to backscatter-vs-biomass plots for the HH, HV, and VV polarizations. LVIS RH50 values were used to calculate biomass, and the process was repeated for both uncalibrated and area-calibrated UAVSAR images. The calibration improved the R^2 values of the polynomial fits by 0.7-0.8 for all three polarizations but had little effect on the polynomial coefficients. The Fourier method for predicting DEM errors was used to predict the biomass errors due to the calibration; the greatest errors occurred in the near range of the SAR image and on slopes facing toward the radar.
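The backscatter-vs-biomass analysis described above amounts to a second-order polynomial regression per polarization plus a goodness-of-fit statistic. A minimal sketch, assuming hypothetical arrays of biomass (from LVIS RH50) and backscatter in dB; the names are illustrative:

```python
import numpy as np

def fit_backscatter_biomass(biomass, backscatter_db):
    """Second-order polynomial fit and its coefficient of determination R^2."""
    coeffs = np.polyfit(biomass, backscatter_db, deg=2)
    predicted = np.polyval(coeffs, biomass)
    ss_res = np.sum((backscatter_db - predicted) ** 2)
    ss_tot = np.sum((backscatter_db - backscatter_db.mean()) ** 2)
    return coeffs, 1.0 - ss_res / ss_tot

# Comparing R^2 before and after radiometric calibration, per polarization:
# coeffs_hv, r2_hv = fit_backscatter_biomass(biomass, hv_calibrated_db)
```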
Item: Simultaneous calibration of a microscopic traffic simulation model and OD matrix (Texas A&M University, 2006-10-30). Kim, Seung-Jun.

With the recent widespread deployment of intelligent transportation systems (ITS) in North America, there is an abundance of data on traffic systems and thus an opportunity to use these data in the calibration of microscopic traffic simulation models. Even though ITS data have been utilized to some extent in such calibration, efforts have focused on improving the quality of the calibration using aggregate forms of ITS data rather than disaggregate data. In addition, researchers have focused on identifying the parameters associated with car-following and lane-changing behavior models and their impacts on overall calibration performance; the estimation of the origin-destination (OD) matrix has therefore been treated as a preliminary step rather than as a stage that can be included in the calibration process. This research develops a methodology to calibrate the OD matrix jointly with the model behavior parameters using a bi-level calibration framework. The upper level seeks to identify the best model parameters using a genetic algorithm (GA). At this level, a statistically based calibration objective function is introduced to account for the disaggregate form of ITS data and thus accurately replicate the dynamics of observed traffic conditions. Specifically, the Kolmogorov-Smirnov test is used to measure the "consistency" between the observed and simulated travel time distributions. The calibration of the OD matrix is performed at the lower level, where observed and simulated travel times are fed into the OD estimator. The interdependent relationship between travel time information and the OD matrix is formulated using an Extended Kalman filter (EKF) algorithm, selected to capture the nonlinear dependence of the simulation results (travel times) on the OD matrix. The two test sites are an urban arterial and a freeway in Houston, Texas, and the VISSIM model was used to evaluate the proposed methodologies. It was found that the accuracy of the calibration can be improved by using disaggregated data and by considering both driver behavior parameters and demand.
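The consistency measure described above, comparing observed and simulated travel time distributions, maps directly onto a two-sample Kolmogorov-Smirnov test. A minimal sketch, with hypothetical travel-time samples in seconds standing in for field data and VISSIM output:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
observed_tt = rng.lognormal(mean=5.0, sigma=0.30, size=500)   # field travel times
simulated_tt = rng.lognormal(mean=5.1, sigma=0.35, size=500)  # simulation output

# The KS statistic (the maximum gap between the two empirical CDFs) can serve
# as the upper-level GA's fitness term: smaller D means a better match.
result = ks_2samp(observed_tt, simulated_tt)
print(result.statistic, result.pvalue)
```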
Item: A study of capacitor array calibration for a successive approximation analog-to-digital converter (2013-05). Ma, Ji, active 2013; Sun, Nan.

Analog-to-digital converters (ADCs) are driven by the rapid development of mobile communication systems toward higher speed, higher resolution, and lower power consumption. Among the many ADC architectures, successive approximation (SAR) ADCs have recently attracted great attention in the mixed-signal design community because they contain no amplification components and their digital logic is scaling-friendly. It is therefore easier to design a SAR ADC with small component sizes in an advanced technology than it is for other ADC architectures, which decreases the power consumption and increases the speed of the circuit. However, capacitor mismatch limits the minimum size of the unit capacitors that can be used in a SAR ADC with more than 10-bit resolution, and large capacitors both limit conversion speed and increase switching power. In this design project, a novel switching scheme and a novel calibration method are adopted to overcome the capacitor mismatch constraint. The switching scheme uses monotonic switching in the SAR ADC to gain one extra bit, and switches a dummy capacitor between the common-mode voltage level (Vcm) and ground (gnd) to obtain another extra bit; to keep the resolution constant, the number of capacitors is reduced by two. The calibration method extracts missing-code widths to estimate the actual values of the capacitors. The missing-code extraction is accomplished by detecting the metastable state of a comparator, forcing the current bit value, and using the less significant bits to measure the actual capacitor value. A dither method is adopted to improve calibration accuracy. Behavioral model simulations are provided to verify the effectiveness of the calibration method. A circuit design of a 12-bit ADC and simulations of the schematic design are presented in this report.

Item: The Method of Manufactured Universes for Testing Uncertainty Quantification Methods (2011-02-22). Stripling, Hayes Franklin.

The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of the statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which "experimental" data are created (possibly with experimental error); an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error); the application of a system for quantifying uncertainties in model predictions; and an assessment of how accurately those uncertainties are quantified. The application presented for this research manufactures a particle-transport "universe," models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new "experiments" within the manufactured reality. To test the responses of these UQ methods further, we conduct exercises with "experimental" replicates, "measurement" error, and choices of physical inputs that reduce the accuracy of the diffusion model's approximation of our manufactured laws. Our first application of MMU was rich in areas for exploration and highly informative. In the case of the Gaussian process code, we found that the fundamental statistical formulation was not appropriate for our functional data, but that the code allows a knowledgeable user to vary parameters within this formulation to tailor its behavior to a specific problem. The Bayesian MARS formulation was a more natural emulator given our manufactured laws, and we used the MMU framework to further develop a calibration method and to characterize the diffusion model discrepancy. Overall, we conclude that an MMU exercise with a properly designed universe (that is, one that adequately represents some real-world problem) will give the modeler an added understanding of the interaction between a given UQ method and his or her more complex problem of interest. The modeler can then apply this understanding to make more informed predictive statements.
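The MMU workflow above is essentially a loop one can script: draw "experiments" from a manufactured truth, predict them with an imperfect model plus a statistical UQ layer, and score the resulting uncertainty claims. A schematic sketch with made-up one-dimensional physics and a Gaussian process modeling the discrepancy; every name here is illustrative, and this is not the dissertation's transport/diffusion setup:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
truth = lambda x: np.sin(3 * x) + 0.3 * x        # manufactured "universe"
model = lambda x: np.sin(3 * x)                  # imperfect simulation model

# "Experimental" data with measurement error
x_obs = rng.uniform(0, 2, 20)
y_obs = truth(x_obs) + rng.normal(0, 0.05, 20)

# Emulate the model discrepancy (data minus simulation) with a GP
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=0.05 ** 2)
gp.fit(x_obs[:, None], y_obs - model(x_obs))

# Predict new "experiments" with uncertainty, then assess the UQ claim
x_new = np.linspace(0, 2, 50)
delta, sd = gp.predict(x_new[:, None], return_std=True)
pred = model(x_new) + delta
coverage = np.mean(np.abs(truth(x_new) - pred) <= 2 * sd)
print(coverage)   # close to 0.95 if the stated uncertainty is honest
```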
Item: Vision based navigation system for autonomous proximity operations: an experimental and analytical study (Texas A&M University, 2005-02-17). Du, Ju-Young.

This dissertation presents an experimental and analytical study of the Vision Based Navigation system (VisNav). VisNav is a novel intelligent optical sensor system recently invented at Texas A&M University for autonomous proximity operations. The dissertation focuses on system calibration techniques and navigation algorithms, and it is composed of four parts. First, the fundamental hardware and software design configuration of the VisNav system is introduced. Second, system calibration techniques are discussed that enable accurate VisNav system applications, along with a characterization of errors. Third, a new six-degree-of-freedom navigation algorithm based on the Gaussian Least Squares Differential Correction is presented that provides geometric best estimates of position and attitude through batch iterations. Finally, a dynamic state estimation algorithm utilizing the Extended Kalman Filter (EKF) is developed that recursively estimates position, attitude, linear velocities, and angular rates. Moreover, an approach for integrating VisNav measurements with those made by an Inertial Measurement Unit (IMU) is derived. This novel VisNav/IMU integration technique is shown to significantly improve navigation accuracy and to guarantee the robustness of the navigation system in the event of an occasional dropout of VisNav data.
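Gaussian Least Squares Differential Correction is, in essence, iterated Gauss-Newton: linearize the measurement model about the current state estimate and solve the normal equations for a correction until it converges. A generic sketch follows; the measurement function and Jacobian are placeholders (a toy 2-D ranging problem), not VisNav's actual optical beacon model:

```python
import numpy as np

def glsdc(x0, z, h, jacobian, n_iter=10, tol=1e-9):
    """Batch least-squares differential correction (Gauss-Newton iteration).

    x0: initial state guess; z: stacked measurements; h(x): predicted
    measurements; jacobian(x): dh/dx evaluated at x.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = z - h(x)                              # measurement residual
        H = jacobian(x)
        dx = np.linalg.solve(H.T @ H, H.T @ r)    # normal equations
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy usage: estimate a 2-D position from ranges to three known stations
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
h = lambda x: np.linalg.norm(stations - x, axis=1)
jac = lambda x: (x - stations) / h(x)[:, None]
z = h(np.array([3.0, 4.0]))                       # noiseless measurements
print(glsdc(np.array([1.0, 1.0]), z, h, jac))     # converges to ~[3, 4]
```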