Browsing by Subject "calibration"
Now showing 1 - 4 of 4
Item: A Study of Predicted Energy Savings and Sensitivity Analysis (2013-07-22) Yang, Ying

This research studies the sensitivity of the most important inputs to the WinAM 4.3 software and the reliability of its savings-prediction function. WinAM was developed by the Continuous Commissioning (CC) group in the Energy Systems Laboratory at Texas A&M University. For the sensitivity analysis task, fourteen inputs are studied by adjusting one input at a time within ±30% of its baseline value. A Single Duct Variable Air Volume (SDVAV) system, with and without an economizer, is applied to a square-zone model, and Mean Bias Error (MBE) and Influence Coefficient (IC) are used as the statistical measures for analyzing the outputs obtained from WinAM 4.3. For the savings-prediction reliability task, eleven Continuous Commissioning projects were considered, of which seven were retained after review. The measured energy consumption data for these seven projects is compared with simulated energy consumption data obtained from WinAM 4.3, using Normalized Mean Bias Error (NMBE) and the Coefficient of Variation of the Root Mean Squared Error (CV(RMSE)). The sensitivity analysis identifies the most sensitive parameters for each energy resource, both with and without the economizer. The main result of the reliability analysis is that calibration improves the model's quality and yields better predicted energy savings than the uncalibrated model.
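As a point of reference for the two reliability statistics named above, here is a minimal sketch of NMBE and CV(RMSE) following their common definitions (e.g., as in ASHRAE Guideline 14, using n - 1 in the denominator); the monthly data are purely illustrative, not from the thesis:

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalized Mean Bias Error, in percent.

    NMBE = sum(measured - simulated) / ((n - 1) * mean(measured)) * 100
    """
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    n = measured.size
    return np.sum(measured - simulated) / ((n - 1) * measured.mean()) * 100.0

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE, in percent.

    CV(RMSE) = sqrt(sum((measured - simulated)^2) / (n - 1)) / mean(measured) * 100
    """
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    n = measured.size
    rmse = np.sqrt(np.sum((measured - simulated) ** 2) / (n - 1))
    return rmse / measured.mean() * 100.0

# Illustrative monthly energy use (e.g., MMBtu): measured vs. simulated
measured = [120, 115, 98, 80, 65, 60, 62, 64, 75, 90, 105, 118]
simulated = [112, 118, 101, 84, 61, 57, 65, 60, 78, 86, 100, 122]
print(f"NMBE: {nmbe(measured, simulated):+.1f}%   CV(RMSE): {cv_rmse(measured, simulated):.1f}%")
```

NMBE captures systematic over- or under-prediction across the whole period, while CV(RMSE) captures month-to-month scatter; a calibrated model should reduce both.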
Item: Applying Calibration to Improve Uncertainty Assessment (2013-08-02) Fondren, Mark Edward

Uncertainty has a large effect on projects in the oil and gas industry, because most aspects of project evaluation rely on estimates. Industry routinely underestimates uncertainty, often significantly, and the tendency is nearly universal. The cost of underestimating uncertainty, i.e., overconfidence, can be substantial: studies have shown that moderate overconfidence and optimism can result in expected portfolio disappointment of more than 30%. It has been shown that uncertainty can be assessed more reliably through look-backs and calibration, i.e., comparing actual results to probabilistic predictions over time. While many recognize the importance of look-backs, calibration is seldom practiced in industry, and I believe a primary reason is the lack of systematic processes and software for calibration. The primary development of my research is a database application that tracks probabilistic estimates and their reliability over time. The Brier score and its components, chiefly calibration, are used to evaluate reliability. The system is general in the types of estimates and forecasts it can monitor, including production, reserves, time, costs, and even quarterly earnings. Forecasts may be assessed visually, using calibration charts, and quantitatively, using the Brier score. The calibration information can be used to modify probabilistic estimation and forecasting processes as needed to make them more reliable, and historical data may be used to externally adjust future forecasts so they are better calibrated. Three experiments with historical data sets of predicted vs. actual quantities, e.g., drilling costs and reserves, are presented and demonstrate that external adjustment of probabilistic forecasts improves future estimates. Consistent application of this approach and database application over time should improve probabilistic forecasts, resulting in improved company and industry performance.
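The abstract scores forecast reliability with the Brier score and its calibration component. The sketch below uses the standard Murphy decomposition (Brier = reliability - resolution + uncertainty) to isolate that component for binary-event forecasts; the binning scheme, sample size, and simulated overconfident forecaster are illustrative assumptions, not the database application's actual method:

```python
import numpy as np

def brier_decomposition(forecasts, outcomes, n_bins=10):
    """Brier score with Murphy decomposition:
    BS = reliability - resolution + uncertainty.

    forecasts: stated probabilities in [0, 1]
    outcomes:  observed 0/1 events
    """
    f = np.asarray(forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    n = f.size
    base_rate = o.mean()

    brier = np.mean((f - o) ** 2)
    uncertainty = base_rate * (1.0 - base_rate)

    # Group forecasts into probability bins and compare each bin's
    # mean stated probability with its observed event frequency.
    bins = np.minimum((f * n_bins).astype(int), n_bins - 1)
    reliability = resolution = 0.0
    for b in range(n_bins):
        mask = bins == b
        n_b = mask.sum()
        if n_b == 0:
            continue
        f_b = f[mask].mean()   # mean forecast in bin
        o_b = o[mask].mean()   # observed frequency in bin
        reliability += n_b / n * (f_b - o_b) ** 2
        resolution += n_b / n * (o_b - base_rate) ** 2
    return brier, reliability, resolution, uncertainty

# Illustrative look-back: an overconfident forecaster whose events
# occur less often than the stated probabilities imply.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 500)
y = (rng.uniform(0, 1, 500) < p * 0.8).astype(int)
bs, rel, res, unc = brier_decomposition(p, y)
print(f"Brier={bs:.3f}  reliability={rel:.3f}  resolution={res:.3f}  uncertainty={unc:.3f}")
```

A lower reliability term means the stated probabilities track observed frequencies more closely, which is exactly what the look-back and external-adjustment process is meant to drive toward.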
Item: Developing a methodology to account for commercial motor vehicles using microscopic traffic simulation models (Texas A&M University, 2004-09-30) Schultz, Grant George

The collection and interpretation of data is a critical component of traffic and transportation engineering, used to establish baseline performance measures and to forecast future conditions. One important source of traffic data is commercial motor vehicle (CMV) weight and classification data, which serves as input to critical tasks in transportation design, operations, and planning. The evolution of Intelligent Transportation System (ITS) technologies has given transportation engineers and planners increased access to CMV data, primarily from automatic vehicle classification (AVC) and weigh-in-motion (WIM) stations. Microscopic traffic simulation models have been used extensively to model the dynamic and stochastic nature of transportation systems, including vehicle composition. One aspect of these models that has received increased attention in recent years is calibration, which has traditionally been concerned with identifying the "best" parameter set from a range of acceptable values; recent research has begun to automate the calibration process so that models accurately reflect the components of the transportation system being analyzed. The objective of this research is to develop a methodology in which the effects of CMVs can be included in the calibration of microscopic traffic simulation models. The research examines the ITS data available on the weight and operating characteristics of CMVs and incorporates these data into the calibration of microscopic traffic simulation models. It develops a methodology to model CMVs with microscopic traffic simulation and uses the model output to quantify the impacts of CMVs on infrastructure, travel time, and emissions. Advanced statistical tools, including principal component analysis (PCA) and recursive partitioning, are used to identify relationships between data collection sites (i.e., WIM, AVC) so that data collected at WIM sites can be used to estimate weight and length distributions at AVC sites. The research also examines methodologies to include distributions, or measures of central tendency and dispersion (i.e., mean, variance), in the calibration process. The approach is applied using the CORSIM model and calibrated with an automated genetic algorithm methodology.

Item: Digitally-Assisted Mixed-Signal Wideband Compressive Sensing (2012-07-16) Yu, Zhuizhuan

Digitizing wideband signals requires very demanding analog-to-digital conversion (ADC) speed and resolution specifications. In this dissertation, a mixed-signal parallel compressive sensing system is proposed to sense wideband sparse signals at sub-Nyquist rates by exploiting signal sparsity.

The mixed-signal compressive sensing is realized with a parallel segmented compressive sensing (PSCS) front-end, which not only filters out the harmonic spurs that leak from the local random generator but also provides a tradeoff between sampling rate and system complexity that makes a practical hardware implementation possible. Moreover, the signal randomization in the system spreads the spurious energy due to ADC nonlinearity across the signal bandwidth, rather than concentrating it at a few frequencies as in a conventional ADC. This important new property relaxes the ADC SFDR requirement when sensing frequency-domain sparse signals.

The performance of the mixed-signal compressive sensing system is strongly affected by the accuracy of its analog circuit components, especially with the scaling of CMOS technology. This dissertation investigates in detail the effects of circuit imperfections in the PSCS front-end, such as finite settling time and timing uncertainty, and proposes an iterative background calibration algorithm based on least mean squares (LMS), which is shown to effectively correct the errors due to these nonideal factors.

A low-speed prototype built with off-the-shelf components is presented. The prototype senses sparse analog signals with up to 4 percent sparsity at 32 percent of the Nyquist rate. Many practical constraints that arose while building the prototype, such as circuit nonidealities, are addressed in detail, providing good insight for a future high-frequency integrated circuit implementation. Building on these results, a high-frequency sub-Nyquist-rate receiver exploiting parallel compressive sensing was designed and fabricated in IBM 90 nm CMOS technology, and measurement results demonstrate wideband compressive sensing at sub-Nyquist rates. To the best of our knowledge, this prototype is the first reported integrated chip for wideband mixed-signal compressive sensing. In simulation, assuming a state-of-the-art 0.5 ps jitter, the prototype achieves 7 bits ENOB at a 3 GS/s equivalent sampling rate, a figure of merit (FOM) 2-3 times better than that of state-of-the-art high-speed Nyquist ADCs. The proposed mixed-signal compressive sensing system can be applied in various fields; in particular, its applications to wideband spectrum sensing for cognitive radios and to spectrum analysis in RF testing are discussed in this work.
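The LMS-based background calibration is only named, not specified, in the abstract above, so the following is a generic sketch of the underlying idea: an LMS loop adapts a digital correction coefficient until a signal path with an unknown analog error converges to a reference path. The single-gain error model, step size, and signal names are illustrative assumptions, not the dissertation's actual algorithm:

```python
import numpy as np

# Illustrative gain-error model: the analog front-end scales each
# sample by an unknown factor g; LMS adapts a digital correction
# coefficient w so that w * y approaches the ideal reference path.
rng = np.random.default_rng(1)
g_true = 0.93   # unknown analog gain error (assumption)
mu = 0.01       # LMS step size (assumption)
w = 1.0         # digital correction coefficient, initialized to unity

for k in range(2000):
    x = rng.standard_normal()   # reference (ideal) sample
    y = g_true * x              # sample through the imperfect analog path
    e = x - w * y               # error between reference and corrected output
    w += mu * e * y             # LMS update: step w in the direction that reduces e

print(f"converged correction w = {w:.4f}, ideal 1/g = {1 / g_true:.4f}")
```

In a real PSCS front-end the correction would act on per-branch coefficients or the measurement matrix rather than a single scalar gain, but the update retains the same error-times-input form, which is what lets it run in the background while the system operates.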