Browsing by Subject "Analytical modeling"
Now showing 1 - 2 of 2
Item
Efficient modeling of soft error vulnerability in microprocessors (2012-05)
Nair, Arun Arvind; John, Lizy Kurian; Eeckhout, Lieven; Erez, Mattan; Touba, Nur; Swartzlander, Earl E.; Bryant, Michael D.

Reliability has emerged as a first-class design concern as a result of the exponential increase in the number of transistors on the chip and the lowering of operating and threshold voltages with each new process generation. Radiation-induced transient faults are a significant source of soft errors in current and future process generations, and techniques to mitigate their effect come at a significant cost in area, power, performance, and design effort. Architectural Vulnerability Factor (AVF) modeling has been proposed to estimate the processor's soft error rate easily and to enable designers to make appropriate cost/reliability trade-offs early in the design cycle. Using cycle-accurate microarchitectural or logic gate-level simulations, AVF modeling captures the masking effect of program execution on the visibility of soft errors at the output. AVF modeling is used to identify the structures in the processor that contribute most to the overall Soft Error Rate (SER) while running typical workloads, and to guide the design of SER mitigation mechanisms.

The precise mechanisms of interaction between the workload and the microarchitecture that together determine the overall AVF are not well studied in the literature beyond qualitative analyses. Consequently, there is no known methodology for ensuring that the workload suite used for AVF modeling offers sufficient SER coverage. Additionally, owing to the lack of an intuitive model, AVF modeling relies on detailed microarchitectural simulation for understanding the impact of scaling processor structures and for design space exploration studies. Microarchitectural simulations are time-consuming and, beyond aggregate statistics, do not easily provide insight into how the workload and the microarchitecture interact to determine AVF.

This dissertation addresses these challenges with two methodologies. First, beginning with a systematic analysis of the factors affecting the occupancy of corruptible state in a processor, a methodology is developed that generates a synthetic workload for a given microarchitecture such that the SER is maximized. Because it is impossible for every bit in the processor to simultaneously hold corruptible state, the worst-case realizable SER while running a workload is less than the sum of the bits' circuit-level fault rates. Knowing the worst-case SER enables efficient design trade-offs: the architect can validate the coverage of the workload suite, select an appropriate design point, and identify structures that may contribute heavily to SER. The methodology induces 1.4X higher SER in the core than the highest SER induced by the SPEC CPU2006 and MiBench programs. Second, a first-order analytical model, developed from the first principles of out-of-order superscalar execution, is proposed to estimate the AVF a workload induces in microarchitectural structures using inexpensive profiling. The central component of this model is a methodology for estimating the occupancy of correct-path state in the various structures of the core. By construction, the model provides fundamental insight into the precise mechanism by which the workload and the microarchitecture interact to determine AVF. The model is used to cheaply perform sizing studies for core structures, design space exploration, and workload characterization for AVF, and to quantitatively explain results that may appear counter-intuitive from aggregate performance metrics. The Mean Absolute Error in determining the AVF of a 4-wide out-of-order superscalar processor using the model is less than 7% for each structure, and the Normalized Mean Square Error for determining the overall SER is 9.0%, compared to cycle-accurate microarchitectural simulation.
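The abstract does not reproduce the model's equations, but the style of first-order reasoning it describes, estimating correct-path occupancy from profiled rates and residencies, can be illustrated with Little's law (N = λ·W): the average number of live entries in a structure is the arrival rate into that structure times the mean residency. The sketch below is a hypothetical simplification under that assumption; the function name, parameters, and the ACE fraction are illustrative, not the dissertation's actual formulation.

```python
# A minimal sketch of a first-order, Little's-law-style occupancy model for AVF.
# Illustrative only; not the dissertation's model.

def structure_avf(arrival_rate, mean_residency_cycles, entries, ace_fraction):
    """Estimate the AVF of one microarchitectural structure.

    arrival_rate          -- instructions entering the structure per cycle (profiled)
    mean_residency_cycles -- average cycles an instruction occupies an entry (profiled)
    entries               -- number of entries in the structure
    ace_fraction          -- assumed fraction of resident state that is ACE
                             (Architecturally Correct Execution) bits
    """
    occupancy = arrival_rate * mean_residency_cycles  # Little's law: N = lambda * W
    occupancy = min(occupancy, entries)               # occupancy cannot exceed capacity
    return (occupancy / entries) * ace_fraction

# Example: a 64-entry issue queue fed at ~2 instructions/cycle with a ~10-cycle
# mean residency, and 60% of resident state assumed ACE.
print(f"Issue-queue AVF ~ {structure_avf(2.0, 10.0, 64, 0.6):.2f}")
```

In a model of this shape, the overall SER follows by weighting each structure's AVF by its raw circuit-level fault rate and summing across structures, which is why per-structure occupancy is the central quantity to estimate.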
Item
Using analytical and numerical modeling to assess deep groundwater monitoring parameters at carbon capture, utilization, and storage sites (2013-12)
Porse, Sean Laurids; Young, Michael H.

Carbon Dioxide (CO₂) Enhanced Oil Recovery (EOR) is becoming an important bridge to commercializing geologic sequestration (GS) in order to help reduce anthropogenic CO₂ emissions. Current U.S. environmental regulations require operators to monitor operational and groundwater-aquifer changes within permitted bounds, depending on the type of injection activity. We view one goal of monitoring as maximizing the chances of detecting the signals of adverse fluid migration into overlying aquifers. To maximize these chances, it is important to: (1) understand the limitations of monitoring pressure versus geochemistry in deep aquifers (i.e., >450 m) using analytical and numerical models, (2) conduct sensitivity analyses of specific model parameters to support monitoring design conclusions, and (3) compare the breakthrough times (in years) of pressure and geochemistry signals.

Pressure response was assessed using an analytical model, derived from Darcy's law, that solves for diffusivity in radial coordinates and for the fluid migration rate. Aqueous geochemistry response was assessed using PHAST, a numerical, single-phase, reactive solute transport program that solves the advection-reaction-dispersion equation for 2-D transport. The conceptual modeling domain for both approaches included a fault that allows vertical fluid migration and one monitoring well, completed through a series of alternating confining units and distinct (brine) aquifers overlying a depleted oil reservoir, as observed on the Texas Gulf Coast, USA. Physical and operational data, including lithology, formation hydraulic parameters, and water chemistry obtained from field samples, were used as input. Uncertainty was evaluated with a Monte Carlo approach by sampling the fault width (normal distribution) via Latin Hypercube and the hydraulic conductivity of each formation from a beta distribution derived from field data. Each model ran for 100 realizations over a 100-year modeling period. The monitoring well location was varied horizontally and vertically with respect to the fault to assess the arrival times of pressure signals and of changes in geochemical parameters.

Results indicate that the pressure-based subsurface monitoring system provided higher probabilities of detecting fluid migration in all candidate monitoring formations, especially those closest (i.e., 1300 m depth) to the possible fluid migration source. For aqueous geochemistry monitoring, formations with higher permeabilities (i.e., greater than 4 x 10⁻¹³ m²) provided better spatial distributions of chemical changes, but these changes never preceded pressure signal breakthrough and in some cases lagged it by decades. The differences in signal breakthrough indicate that pressure monitoring is the better choice for early detection of migration signals.
However, both pressure and geochemical parameters should be considered as part of an integrated monitoring program on a site-specific basis, depending on regulatory requirements for longer-term (i.e., >50 years) monitoring. By assessing the probability of detecting fluid migration with these monitoring techniques at this field site, it may be possible to extrapolate the results to other CCUS fields with different geological environments.
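The uncertainty workflow described above (Latin Hypercube sampling of a normally distributed fault width and a beta-distributed hydraulic conductivity, 100 realizations over a 100-year period) can be sketched as follows. A Theis well-function solution stands in here for the thesis's Darcy-derived radial diffusivity model, and all parameter values, the leakage-rate scaling with fault width, and the detection threshold are illustrative assumptions rather than field data.

```python
# A minimal sketch of the Monte Carlo workflow described above, assuming a
# Theis-type radial solution in place of the thesis's exact analytical model.
# All distributions, parameter values, and thresholds are illustrative.
import numpy as np
from scipy.stats import qmc, norm, beta
from scipy.special import exp1  # E1(u), i.e., the Theis well function W(u)

SECONDS_PER_YEAR = 3.15e7
n_real = 100                                   # realizations, as in the study

# Latin Hypercube samples in [0,1)^2, inverted through the assumed CDFs.
u01 = qmc.LatinHypercube(d=2, seed=42).random(n_real)
fault_width_m = norm.ppf(u01[:, 0], loc=5.0, scale=1.0)   # assumed N(5 m, 1 m)
K_m_per_s = beta.ppf(u01[:, 1], a=2.0, b=5.0) * 1e-5      # assumed beta-scaled K

def pressure_arrival_years(K, width, r=1300.0, S=1e-4, b=50.0, dp_detect=0.1):
    """Years until Theis drawdown at radius r (m) exceeds dp_detect (m of head).

    K: hydraulic conductivity (m/s); width: fault width (m), assumed here to
    scale the leakage rate; S: storativity; b: aquifer thickness (m).
    """
    Q = 2e-5 * width                           # assumed leakage rate (m^3/s)
    T = K * b                                  # transmissivity (m^2/s)
    t = np.logspace(-2, 2, 400) * SECONDS_PER_YEAR   # 0.01 to 100 years
    u = r**2 * S / (4.0 * T * t)
    s = Q / (4.0 * np.pi * T) * exp1(u)        # Theis drawdown
    hit = np.nonzero(s >= dp_detect)[0]
    return t[hit[0]] / SECONDS_PER_YEAR if hit.size else np.inf

arrivals = np.array([pressure_arrival_years(K, w)
                     for K, w in zip(K_m_per_s, fault_width_m)])
detected = np.isfinite(arrivals)
print(f"detection probability over 100 years: {detected.mean():.2f}")
if detected.any():
    print(f"median pressure arrival: {np.median(arrivals[detected]):.2f} years")
```

A geochemical analogue of this sketch would replace the drawdown test with an advective-dispersive travel time through each formation (as the PHAST simulations do), which is consistent with the abstract's finding that chemical breakthrough lags the pressure signal, in some cases by decades.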