Browsing by Subject "Monte Carlo simulation"
Now showing 1 - 13 of 13
Item An Analysis Tool for Flight Dynamics Monte Carlo Simulations (2011-05-20) Restrepo, Carolina, 1982-
Spacecraft design is inherently difficult due to the nonlinearity of the systems involved as well as the expense of testing hardware in a realistic environment. The number and cost of flight tests can be reduced by performing extensive simulation and analysis work to understand vehicle operating limits and identify circumstances that lead to mission failure. A Monte Carlo simulation approach that varies a wide range of physical parameters is typically used to generate thousands of test cases. Currently, the data analysis process for a fully integrated spacecraft is mostly performed manually on a case-by-case basis, often requiring several analysts to write additional scripts in order to sort through the large data sets. There is no single method that can identify these complex variable interactions reliably and in a timely manner while also applying to a wide range of flight dynamics problems. This dissertation investigates the feasibility of a unified, general approach to analyzing flight dynamics Monte Carlo data. The main contribution of this work is the development of a systematic approach to finding and ranking the most influential variables and combinations of variables for a given system failure. Specifically, a practical and interactive analysis tool that uses tractable pattern recognition methods to automate the analysis process has been developed. The analysis tool has two main parts: the analysis of individual influential variables and the analysis of influential combinations of variables. This dissertation describes in detail the two main algorithms used: kernel density estimation and nearest neighbors. Both are non-parametric density estimation methods that are used to analyze hundreds of variables and combinations thereof to provide an analyst with insightful information about the potential cause of a specific system failure. Examples of dynamical systems analysis tasks using the tool are provided.

Item An assessment of the system costs and operational benefits of vehicle-to-grid schemes (2013-12) Harris, Chioke Bem; Webber, Michael E., 1971-
With the emerging nationwide availability of plug-in electric vehicles (PEVs) at prices attainable for many consumers, electric utilities, system operators, and researchers have been investigating the impact of this new source of electricity demand. The presence of PEVs on the electric grid might offer benefits equivalent to dedicated utility-scale energy storage systems by leveraging vehicles' grid-connected energy storage through vehicle-to-grid (V2G) enabled infrastructure. Existing research, however, has not effectively examined the interactions between PEVs and the electric grid in a V2G system. To address these shortcomings in the literature, longitudinal vehicle travel data are first used to identify patterns in vehicle use. This analysis showed that vehicle use patterns differ distinctly between weekends and weekdays, that seasonal interactions between vehicle charging, electric load, and wind generation might be important, and that vehicle charging might increase the already high peak summer electric load in Texas. Subsequent simulations of PEV charging revealed that unscheduled charging would increase summer peak load in Texas by approximately 1%, and that the uncertainty arising from unscheduled charging would require only limited increases in frequency regulation procurements.
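The kind of unscheduled-charging Monte Carlo just described can be sketched in a few lines. The sketch below is a toy model under assumed placeholder values (fleet size, arrival-time distribution, a 6.6 kW charge rate, a synthetic Texas-like base load); none of these figures come from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)

def peak_increase_samples(n_vehicles=50_000, trials=200, step_h=0.25):
    """Monte Carlo distribution of the summer-peak increase from unscheduled
    PEV charging. Every number here (fleet size, arrival times, energy needs,
    charge rate, base load) is an illustrative placeholder."""
    t = np.arange(0.0, 24.0, step_h)
    # Toy base load: ~60 GW with an afternoon peak near 17:00 (in MW).
    base = 60_000 + 8_000 * np.exp(-((t - 17.0) / 3.0) ** 2)
    power_kw = 6.6                                        # Level-2 charge rate
    out = np.empty(trials)
    for k in range(trials):
        arrive = rng.normal(18.0, 1.5, n_vehicles) % 24.0   # plug-in hour
        energy = rng.uniform(5.0, 15.0, n_vehicles)         # kWh needed
        dur = energy / power_kw                             # hours of charging
        # Vehicle v charges at time t[i] if (t[i] - arrive[v]) mod 24 < dur[v].
        charging = ((t[:, None] - arrive[None, :]) % 24.0) < dur[None, :]
        load = base + charging.sum(axis=1) * power_kw / 1000.0   # kW -> MW
        out[k] = load.max() / base.max() - 1.0
    return out

inc = peak_increase_samples()
print(f"median peak increase: {np.median(inc):.2%}, "
      f"95th pct: {np.percentile(inc, 95):.2%}")
```

Because each trial redraws arrival times and energy needs, the spread of the returned samples gives a range of peak-load outcomes rather than a single point estimate, which is the quantity a regulation-procurement analysis would act on.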
To assess the market potential of a V2G system that provides frequency regulation ancillary services, and that might be able to provide financial incentives to participating PEV owners, a two-stage stochastic programming formulation of a V2G system operator was created. In addition to assessing the market potential of a V2G system, the model was designed to determine the effect of the V2G system operator's market power on prices for frequency regulation, the effect of uncertainty in real-time vehicle availability and state of charge on the aggregator's ability to provide regulation services, and the effect of different vehicle characteristics on revenues. Results from this model showed that the V2G system operator could generate revenue from participation in the frequency regulation market in Texas, even when subject to the uncertainty of real-time vehicle use. The model also showed that the V2G system operator would have a significant impact on prices: as the number of PEVs participating in a V2G program in a given region increased, per-vehicle revenues, and with them the compensation provided to vehicle owners, would decline dramatically. From these estimated payments to PEV owners, the decision to participate in a V2G program was analyzed. The balance between the estimated payments for participating in a V2G program and the increased probability of being left with a depleted battery as a result of V2G operations indicates that an owner of a range-limited battery electric vehicle (BEV) would probably not be a viable candidate for a V2G program, while a plug-in hybrid electric vehicle (PHEV) owner might find a V2G program worthwhile. Even for a PHEV owner, however, the compensation for participating in a V2G program would provide limited incentive to join.

Item Cholesterol-induced domain formation in multi-component lipid membranes (Texas Tech University, 2006-12) Ali, Rejwan; Huang, Juyang; Quitevis, Edward L.; Cheng, Kelvin K.; Lichti, Roger L.
A cholesterol oxidase (COD) enzyme reaction assay has been developed to measure the chemical potential of cholesterol in various PC/cholesterol bilayers, and the results were compared with the predictions of four major lipid-cholesterol interaction models. In PC bilayers with chains containing single cis double bonds, the chemical potential of cholesterol displays vertical jumps, indicating large-scale superlattice formation, at cholesterol mole fractions of 0.15, 0.25, 0.40, 0.50, and 0.57, and peaks at the cholesterol maximum solubility limit in the bilayers. No such jump below a cholesterol mole fraction of 0.50 was observed in PC bilayers with all saturated chains or with chains containing 3 cis double bonds. PC with a trans double bond showed mixing behavior similar to PC with a cis double bond. The results provide solid supporting evidence for the cholesterol chemical potential predicted by Monte Carlo simulations based on the Umbrella model. Interestingly, the cholesterol maximum solubility limit shifted to a lower mole fraction when cholesterol was substituted by ceramide, a molecule with a small headgroup similar to cholesterol's. This result supported an earlier speculation of the Umbrella model. In another study, we used lattice-model Monte Carlo simulations to reproduce the experimental phase boundaries of DOPC/DSPC/cholesterol obtained by the Feigenson group at Cornell University through multiple experimental techniques.
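The lattice-model Monte Carlo approach mentioned above can be illustrated with a deliberately minimal two-component sketch: a single unlike-pair interaction energy and composition-conserving (Kawasaki) swaps. This is a generic toy, not the multi-species lipid/cholesterol model of the study; `w_ab`, the lattice size, the temperature, and the sweep count are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_mixture(L=32, w_ab=0.6, kT=1.0, sweeps=200, frac_a=0.5):
    """Toy 2D lattice Monte Carlo of a binary mixture with a single
    unlike-pair interaction energy w_ab (an illustrative stand-in for the
    multi-species lipid models described above)."""
    lat = (rng.random((L, L)) < frac_a).astype(int)      # 1 = A, 0 = B

    def local_E(i, j):
        # Energy of the four bonds touching site (i, j): w_ab per unlike pair.
        nn = [((i + 1) % L, j), ((i - 1) % L, j),
              (i, (j + 1) % L), (i, (j - 1) % L)]
        return w_ab * sum(lat[i, j] != lat[p] for p in nn)

    for _ in range(sweeps * L * L):
        # Kawasaki swap of two random sites conserves overall composition.
        (i1, j1), (i2, j2) = rng.integers(0, L, size=(2, 2))
        if lat[i1, j1] == lat[i2, j2]:
            continue
        e0 = local_E(i1, j1) + local_E(i2, j2)
        lat[i1, j1], lat[i2, j2] = lat[i2, j2], lat[i1, j1]     # trial swap
        dE = local_E(i1, j1) + local_E(i2, j2) - e0
        if dE > 0 and rng.random() >= np.exp(-dE / kT):
            lat[i1, j1], lat[i2, j2] = lat[i2, j2], lat[i1, j1]  # reject
    return lat

lat = metropolis_mixture()   # unlike-pair penalty drives domain formation
print("A-site fraction:", lat.mean())
```

With a repulsive unlike-pair energy the lattice coarsens into domains, which is the qualitative mechanism behind the phase coexistence that the dissertation maps quantitatively.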
A new computational technique, named the "Composition Evaluation Method," which is roughly 10 to 30 times more efficient at determining phase boundaries than the traditional free energy calculation, has been implemented to determine the compositions of the coexisting phases. We found that pairwise interactions can reproduce the experimental critical point as well as the slope of the tie lines, but not the compositions of the coexisting phases.

Item Depth resolved diffuse reflectance spectroscopy (2015-05) Hennessy, Richard J.; Markey, Mia Kathleen; Tunnell, James W.
This dissertation focuses on the development of computational models and algorithms related to diffuse reflectance spectroscopy. Specifically, this work aims to advance diffuse reflectance spectroscopy into a technique capable of measuring depth-dependent properties in tissue. First, we introduce the Monte Carlo lookup table (MCLUT) method for extracting optical properties from diffuse reflectance spectra. Next, we extend this method to a two-layer tissue geometry so that it can extract depth-dependent properties in tissue. We then develop a computational model that relates photon sampling depth to optical properties and probe geometry. This model can be used to aid in the design of application-specific diffuse reflectance probes. To justify the use of a two-layer model for extracting tissue properties, we show that a one-layer model can lead to significant errors in the extracted optical properties. Lastly, we use our two-layer MCLUT model and a probe designed with our sampling depth model to extract tissue properties from the skin of 80 subjects at 5 anatomical locations. The results agree with previously published values for skin properties and show that diffuse reflectance spectroscopy can be used to measure depth-dependent properties in tissue.

Item Economic valuation : where is the value of the Pirsaat Project from? (2007-12) Guo, Qintao; Jablonowski, Christopher J.
The value of an E&P project comes from the cash flows it produces. These cash flows are subject to the uncertainty of input parameters and are also affected by contingent decisions that change the course of the project. Three project valuation methods, the discounted cash flow (DCF) method, Monte Carlo simulation, and the real option valuation (ROV) method, are used to evaluate a specific E&P project in the Pirsaat oil field in Azerbaijan. The DCF method and Monte Carlo simulation both follow predetermined paths, thus ignoring the value of managerial flexibility, also called options. As an extension of DCF, ROV can highlight the option values inherent in the project and therefore provides more insight into the project's value. However, there is no widely accepted ROV approach today. The integrated approach is adopted in this thesis because it separates all sources of uncertainty into market uncertainty and technical uncertainty and treats each accordingly, making it more robust.

Item Highway case study investigation and sensitivity testing using the Project Evaluation Toolkit (2011-08) Fagnant, Daniel James; Kockelman, Kara; Xie, Chi
As transportation funding becomes increasingly constrained, it is imperative that decision makers invest precious resources wisely and effectively. Transportation planners need effective tools for anticipating outcomes (or ranges of outcomes) in order to select preferred project alternatives and evaluate funding options for competing projects.
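Both of the preceding abstracts lean on Monte Carlo simulation to turn uncertain inputs into a distribution of project outcomes. A minimal sketch of that pattern for a generic project NPV follows; every distribution and number here (price, decline rate, opex, capex, discount rate) is an illustrative placeholder, not a figure from either study.

```python
import numpy as np

rng = np.random.default_rng(42)

def npv_samples(trials=10_000, years=10, rate=0.10):
    """Monte Carlo NPV of a generic project: sample uncertain inputs and
    discount the resulting cash flows. All distributions and values are
    hypothetical placeholders."""
    t = np.arange(1, years + 1)
    price = rng.normal(60.0, 15.0, (trials, 1))            # $/unit
    q0 = rng.lognormal(np.log(100.0), 0.3, (trials, 1))    # initial output
    decline = rng.uniform(0.05, 0.20, (trials, 1))         # per-year decline
    opex = 20.0                                            # $/unit, fixed
    capex = 3000.0                                         # up-front cost
    q = q0 * (1.0 - decline) ** (t - 1)                    # production profile
    cash = (price - opex) * q                              # yearly cash flow
    disc = (1.0 + rate) ** -t                              # discount factors
    return (cash * disc).sum(axis=1) - capex

npv = npv_samples()
print(f"mean NPV {npv.mean():.0f}, P(NPV < 0) = {(npv < 0).mean():.1%}")
```

A plain DCF would use only the means of these inputs and report one number; the Monte Carlo version exposes the downside probability, which is the kind of range-of-outcomes information both abstracts emphasize.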
To this end, this thesis describes multiple applications of a new Project Evaluation Toolkit (PET) for highway project assessment. PET was developed over a two-year period by the thesis author, in conjunction with Dr. Kara Kockelman and Dr. Chi Xie and with support from others, as described in Kockelman et al. (2010) and the PET Users Guidebook (Fagnant et al. 2011). Using just link-level traffic counts (and other parameter values, if users wish to change defaults), PET quickly estimates how transportation network changes impact traveler welfare (consisting of travel times and operating costs), travel time reliability, crashes, and emissions. Summary measures (such as net present values and benefit-cost ratios) are developed over multi-year, long-term horizons to quantify the relative merit of project scenarios. This thesis emphasizes three key topics: a background and description of PET, case study evaluations using PET, and sensitivity analysis (under uncertain inputs) using PET. The first section discusses PET's purpose, operation, and theoretical behavior, much of which is taken from Fagnant et al. (2010). The second section offers case studies on capacity expansion, road pricing, demand management, shoulder lane use, speed harmonization, incident management, and work zone timing along key links in the Austin, Texas network. The final section conducts extensive sensitivity testing of results for two competing upgrade scenarios (one tolled, the other not); the work examines how input variations impact PET outputs over hundreds of model applications. Taken together, these investigations highlight PET's capabilities while identifying potential shortcomings. Such findings allow transportation planners to better appreciate the impacts that various projects can have on the traveling public, to consider how project evaluation may best be tackled, and to use PET to anticipate the impacts of projects under consideration before embarking on more detailed analyses and finalizing investment decisions.

Item Modeling Study of Proposed Field Calibration Source Using K-40 Source and High-Z Targets for Sodium Iodide Detector (2012-10-30) Rogers, Jeremy, 1987-
The Department of Energy (DOE) has ruled that all sealed radioactive sources, even those considered exempt under Nuclear Regulatory Commission regulations, are subject to radioactive material controls. However, sources based on the primordial isotope potassium-40 (⁴⁰K) are not subject to these restrictions. Potassium-40's beta spectrum and 1460.8 keV gamma ray can be used to induce K-shell fluorescence x rays in high-Z metals between 60 and 80 keV. A gamma-ray calibration source is thus proposed that uses potassium chloride salt and a high-Z metal to create a two-point calibration for a sodium iodide field gamma spectroscopy instrument. The calibration source was designed in collaboration with Sandia National Laboratory using the Monte Carlo N-Particle eXtended (MCNPX) transport code. The x-ray production was maximized while attempting to preserve the detector system's sensitivity to external sources by minimizing the count rate and shielding effect of the calibration source. Since the source is intended to be semi-permanently fixed to the detector, the weight of the calibration source was also a design factor. Two methods of x-ray production were explored. First, a thin high-Z layer (HZL) was interposed between the detector and the potassium chloride-urethane source matrix.
Second, bismuth metal powder was homogeneously mixed with a urethane binding agent to form a potassium chloride-bismuth matrix (KBM). The two methods were directly compared in a series of simulations covering their x-ray peak strengths, pulse-height spectral characteristics, and response to a simulated background environment. The bismuth-based source was selected as the development model because it is cheap, nontoxic, and outperforms the high-Z layer method in simulation. The overall performance of the bismuth-based source was significantly improved by splitting the calibration source longitudinally into two halves and placing them on either side of the detector. The performance was improved further by removing the binding agent and simulating a homogeneous mixture of potassium chloride and bismuth powder in a 0.1 cm plastic casing. The split, plastic-encased potassium chloride-bismuth matrix would serve as a light, cheap field calibration source that is not subject to DOE restrictions.

Item Monte Carlo simulation of the Jovian plasma torus interaction with Io's atmosphere and the resultant aurora during eclipse (2011-08) Moore, Christopher Hudson; Goldstein, David Benjamin; Varghese, Philip L.; Raman, Venkatramanan; Trafton, Laurence M.; Combi, Michael R.
Io, the innermost Galilean satellite of Jupiter, exhibits a wide variety of complex phenomena such as interaction with Jupiter's magnetosphere, volcanic activity, and a rarefied multi-species sublimating and condensing atmosphere with an ionosphere. Io's orbital resonance with Jupiter and the other Galilean satellites produces intense tidal heating, making Io the most volcanically active body in the solar system, with plumes that rise hundreds of kilometers above the surface. In the present work, the interaction of Io's atmosphere with the Jovian plasma torus is simulated via the Direct Simulation Monte Carlo (DSMC) method, and the aurora produced via electron-neutral excitation collisions is examined using an electron transport Monte Carlo simulation. The electron-transport Monte Carlo simulation models electron collisions with the neutral atmosphere and electron transport along field lines as they sweep past Io, using a pre-computed steady atmosphere and magnetic field. As input to the Monte Carlo simulation, the neutral atmosphere was first modeled using prior 2D sunlit continuum simulations of Io's atmosphere produced by others. In order to justify the use of a sunlit atmosphere for eclipse, 1D two-species (SO₂ and a non-condensable) DSMC simulations of Io's atmospheric dynamics during and immediately after eclipse were performed. It was found that the inclusion of a non-condensable species (SO or O₂) leads to the formation of a diffusion layer which prevents rapid collapse. The degree to which the diffusion layer slowed the atmospheric collapse was found to be extremely sensitive to both the initial non-condensable mole fraction and the reaction (or sticking) probability of the non-condensable on the surface. Furthermore, upon egress, vertical stratification of the atmosphere occurred, with the non-condensable species being lifted to higher altitudes by the rapid sublimation of SO₂ as the surface warms.
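A stripped-down sketch of the electron-transport Monte Carlo idea is given below: electrons step through an exponentially stratified neutral atmosphere, with exponentially distributed optical depth to the next collision and a fixed per-collision excitation probability. The surface density, scale height, cross section, and excitation probability are placeholders, and real electrons follow magnetic field lines with energy-dependent cross sections rather than plunging straight down; this only illustrates the sampling machinery.

```python
import numpy as np

rng = np.random.default_rng(7)

# Placeholder atmosphere and cross sections, illustrative only.
N0 = 1e17          # surface number density, m^-3
H = 100e3          # scale height, m
SIG_TOT = 1e-20    # total electron-neutral cross section, m^2
P_EXCITE = 0.1     # probability a collision excites the line of interest

def excitation_altitudes(n_electrons=5000, z_top=1000e3):
    """Follow electrons straight down from z_top, sampling optical depth to
    the next collision against n(z) = N0*exp(-z/H); record the altitude of
    the first excitation collision for each electron."""
    alts = []
    for _ in range(n_electrons):
        z = z_top
        while z > 0:
            tau = rng.exponential()     # optical depth to next collision
            # Invert tau = N0*SIG*H*(exp(-z'/H) - exp(-z/H)) for z'.
            arg = np.exp(-z / H) + tau / (N0 * SIG_TOT * H)
            if arg >= 1.0:
                break                   # would collide below the surface
            z = -H * np.log(arg)
            if rng.random() < P_EXCITE:
                alts.append(z)
                break                   # stop at first excitation (toy rule)
    return np.array(alts)

alts = excitation_altitudes()
print(f"{alts.size} excitations, median altitude {np.median(alts)/1e3:.0f} km")
```

Binning these altitudes (and, in the real model, the emitting positions along each field line) is what produces a simulated auroral brightness profile to compare against observations.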
Simulated aurorae (specifically the [OI] 6300 Å emission and the S₂, SO, and SO₂ molecular band emission in the middle ultraviolet) show good agreement with observations of Io in eclipse, and an attempt was made to use the simulations to constrain the upstream torus electron temperature and Io's atmospheric composition, structure, and volcanic activity. It is found that the position of the bright [OI] 6300 Å wake spot relative to Io's equator depends on the position of Io relative to the plasma torus' equator and the asymmetric electron number flux that results. Using HST/STIS UV-Vis spectra, the upstream electron temperature is weakly constrained to be between 3 eV and 8 eV depending on the flux of a low-energy (35 eV), non-thermal component of the plasma (more non-thermal flux requires lower thermal plasma temperatures to fit the spectrum). Furthermore, an upper limit of 5% of the thermal torus density (or 180 cm⁻³ based on the Galileo J0 plasma density at Io) is obtained for the low-energy non-thermal component of the plasma. These limits are consistent with Galileo observations of the upstream torus temperature and estimates of the non-thermal component. Finally, plume activity and S₂ content during eclipse observations with HST/STIS were constrained by examining the emission intensity along the spatial axis of the aperture. During the August 1999 UV-Vis observations, the auroral simulations indicate that the large volcanoes Pele and Surt were inactive, whereas Tvashtar was active and Dazhbog and possibly Loki were also actively venting gas. The S₂ content inferred for the large Pele-type plumes was between 5% (Tvashtar) and 30% (Loki, if active), consistent with prior observations (Spencer et al., 2000; Jessup et al., 2007). A 3D DSMC simulation of Io's sublimation and sputtered atmosphere including photo- and plasma-chemistry was developed. In future work these atmospheric simulations will replace the continuum target atmosphere in the auroral model and thus enable a better match to the observed high-altitude auroral emission. In the present work, the plasma interaction is modeled by a flux of ions and electrons which flow around and through Io's atmosphere along pre-computed fields and interact with the neutral gas. A 3D DSMC simulation of Io's atmosphere, assuming a simple thermal model for the surface just prior to ingress into eclipse and uniform frost coverage, has been performed in order to understand how Io's general atmospheric dynamics are affected by the new plasma model with chemistry and sputtering. Sputtering was found to supply most of the nightside atmosphere (producing an SO₂ column of ~5×10¹³ cm⁻²); however, the dense dayside sublimation atmosphere was found to block sputtering of the surface. The influence of the dynamic plasma pressure on the day-to-night circumplanetary flow was found to be quite substantial, causing the day-to-night wind across the dawn terminator to flow slightly towards the equator. This results in a region of high density near the equator that extends far (~2000 km for the condensable species) onto the nightside across the dawn terminator. Thus, even without thermal lag due to rotation or variable surface frost, highly asymmetric equatorial column densities relative to the subsolar point are obtained. The non-condensable O₂, which is a trace species on the dayside, is the dominant species on the nightside despite increased SO₂ sputtering because the loss rate of O₂ is slow.
Finally, a very intriguing O₂ flow feature was observed near the dusk terminator, where the flow from the leading hemisphere (pushed by the plasma) meets the flow from the dayside trailing hemisphere. Since the O₂ does not condense on the surface, it slowly convects towards the poles and then back onto the nightside, eventually to be dissociated or stripped away by the plasma.

Item Monte Carlo studies of polymer chain solubility in water (2005-12) Lu, Ying, 1972-; Sanchez, Isaac C., 1941-
Poly(ethylene oxide) (PEO, with the general formula (CH₂-CH₂-O)π) is completely soluble in water at room temperature over an extremely wide molecular weight range and has been widely studied by experiment and theory. The objective of our work is to study this solubility behavior by Monte Carlo simulation. The insertion factor lnB, which is equivalent to the infinite-dilution Henry's law constant, is used to represent the solubility of various molecules in water. Our research started with simple fluids and aqueous solutions of small molecules, including hard spheres, inert gases, hydrocarbons, and dimethyl ether (DME, a precursor for PEO). Solubility consists of a favorable energy term and an unfavorable entropy term. Contrary to the common belief that entropy dominates the hydrophobic effect, it is actually the ability of the solute to interact with the solvent (the energetic factor) that dominates solubility. The solubility minimum appearing for both hydrophobic and hydrophilic solutes along the water coexistence curve is the result of competition between the favorable energy contribution and the unfavorable entropy contribution. Normal alkanes with carbon numbers from 1 to 20 have been modeled as Lennard-Jones (LJ) chains to study the solubility of non-polar polymer chains in water. Various constraints have been placed on the LJ model to evaluate their effect on solubility. No significant difference was observed for LJ chains with or without fixed bond angles, but torsional interactions changed the chain solubility dramatically. The temperature and chain-length effects on chain solubility have been examined and can be explained by the balance between the intra-chain interaction and the entropy penalty. By choosing the right torsional interaction parameters we may be able to reproduce by simulation the solubility minimum of normal alkanes at C₁₁. PEO was modeled by united-atom chains with lengths up to 30. The most probable distance between two nearest ether oxygens, in both vacuum and aqueous solution, matches the hydrogen bond length in bulk water. Hydrogen bonding plays an important role in the unique water-solubility behavior of PEO, since the water-PEO interaction effectively increases the total number of hydrogen bonds and results in a favorable change in energy. A trans-gauche-trans conformation along the O-C-C-O bonds does enable hydrogen bond formation between one water molecule and two nearest or next-nearest ether oxygens. A helix structure is not required for PEO to have favorable interactions with water. Two polymers with structures similar to PEO's but insoluble in water, poly(methylene oxide) (PMO) and poly(propylene oxide) (PPO), have been studied for comparison with PEO.
Their structural differences from PEO, though slight, reduce the chance of hydrogen bonds forming between water and the chains, thereby decreasing solubility.

Item Net pay evaluation: a comparison of methods to estimate net pay and net-to-gross ratio using surrogate variables (2009-06-02) Bouffin, Nicolas
Net pay (NP) and net-to-gross ratio (NGR) are often crucial quantities for characterizing a reservoir and assessing the amount of hydrocarbons in place. Numerous methods have been developed in the industry to evaluate NP and NGR, depending on the intended purposes. These methods usually involve the use of cut-off values of one or more surrogate variables to discriminate non-reservoir from reservoir rocks. This study investigates statistical issues related to the selection of such cut-off values by considering the specific case of using porosity (φ) as the surrogate. Four methods are applied to permeability-porosity datasets to estimate porosity cut-off values. All the methods assume that a permeability cut-off value has been previously determined, and each method is based on minimizing the prediction error when particular assumptions are satisfied. The results show that delineating NP and evaluating NGR require different porosity cut-off values. In the case where porosity and the logarithm of permeability are jointly normally distributed, NP delineation requires the Y-on-X regression line to estimate the optimal porosity cut-off, while the reduced major axis (RMA) line provides the optimal porosity cut-off value for evaluating NGR. Alternatives to the RMA and regression lines are also investigated, such as discriminant analysis and a data-oriented method using a probabilistic analysis of the porosity-permeability crossplots. Jointly normal datasets are generated to test the ability of the methods to accurately predict the optimal porosity cut-off value for sampled sub-datasets. The different methods have been compared to one another on the basis of the bias, standard error, and robustness of the estimates. A set of field data from the Travis Peak formation has been used to test the performance of the methods. The conclusions of the study were confirmed when applied to the field data: as long as the initial assumptions concerning the distribution of the data are verified, it is recommended to use the Y-on-X regression line to delineate NP, while either the RMA line or discriminant analysis should be used for evaluating NGR. In cases where the assumptions on the data distribution are not verified, the quadrant method should be used.

Item Reliability Evaluation of Composite Power Systems Including the Effects of Hurricanes (2011-02-22) Liu, Yong
Adverse weather such as hurricanes can significantly affect the reliability of composite power systems. Predicting the impact of hurricanes can help utilities prepare better and make appropriate restoration arrangements. In this dissertation, the impact of hurricanes on the reliability of composite power systems is investigated. Firstly, the impact of adverse weather on the long-term reliability of composite power systems is investigated using the Markov cut-set method, and algorithms for its implementation are developed. Here, a two-state weather model is used. An algorithm for sequential simulation is also developed to achieve the same goal. The results obtained using the two methods are compared; the comparison shows that the analytical method obtains comparable results while being faster than the simulation method.
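The sequential-simulation side of the comparison just described can be sketched for a single component: exponential times to failure whose rate depends on a two-state (normal/adverse) weather process, plus exponential repair. The rates below are placeholders, not values from the dissertation, and a composite-system study would simulate many components and evaluate system and load-point indices rather than one component's unavailability.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder rates (per hour), illustrative only.
WEATHER = {"normal":  {"fail": 1e-4, "to_other": 1 / 200.0},
           "adverse": {"fail": 5e-3, "to_other": 1 / 4.0}}
REPAIR = 1 / 10.0   # repair rate, per hour

def sequential_unavailability(hours=2_000_000):
    """Sequential Monte Carlo for one component under a two-state weather
    model: competing exponential clocks for failure/repair and for weather
    transitions; returns the fraction of time spent failed."""
    t, down_time, weather, up = 0.0, 0.0, "normal", True
    while t < hours:
        rates = WEATHER[weather]
        t_weather = rng.exponential(1 / rates["to_other"])
        t_event = (rng.exponential(1 / rates["fail"]) if up
                   else rng.exponential(1 / REPAIR))
        dt = min(t_weather, t_event, hours - t)
        if not up:
            down_time += dt        # accumulate outage time
        t += dt
        if dt == t_weather:
            weather = "adverse" if weather == "normal" else "normal"
        elif dt == t_event:
            up = not up            # fail or finish repair
    return down_time / hours

print(f"simulated unavailability: {sequential_unavailability():.2e}")
```

The analytical Markov cut-set calculation reaches the same steady-state unavailability directly from the rates, which is why the dissertation finds it comparable in accuracy but faster than simulating event sequences.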
Secondly, the impact of hurricanes on the short-term reliability of composite power systems is investigated. A fuzzy inference system is used to assess the failure-rate increase of system components; different methods are used to build two types of fuzzy inference systems. Considering that hurricanes usually last only a few days, a short-term minimal cut-set method is proposed to compute time-specific system and nodal reliability indices of composite power systems. The implementation demonstrates that the proposed methodology is effective, efficient, and flexible in its applications. Thirdly, the impact of hurricanes on the short-term reliability of composite power systems including common-cause failures is investigated. Two methods are proposed to achieve this goal: one uses a Bayesian network to alleviate the dimensionality problem of the conditional probability method, and the other extends the minimal cut-set method to accommodate common-cause failures. The results obtained using the two methods are compared and their discrepancy is analyzed. Finally, the methods proposed in this dissertation are also applicable to other applications in power systems.

Item Reliability modeling for capital project decisions (2010-08) Poulassichidis, Antonios; Ambler, Tony; McCann, R. Bruce
Exploration and Production (E&P) project costs within the oil industry are continuously increasing, reflecting a reality of harsher environments, remote locations with minimal existing infrastructure, and rising costs for materials and skilled resources. The significant capital expenditures translate into a number of projects for either new or revamped production facilities. Successful project completion requires a series of correct decisions throughout the project life-cycle: design, construction, operations, maintenance, and decommissioning. Using a Reliability, Availability and Maintainability (RAM) model as part of the project decision process is an E&P industry best practice that recently gained acceptance at Hess Corporation. This paper presents the RAM methodology and the gains from its application in a capital project.

Item Uncertainty propagation and conjunction assessment for resident space objects (2015-12) Vittaldev, Vivek; Russell, Ryan Paul, 1976-; Erwin, Richard S.; Akella, Maruthi R.; Bettadpur, Srinivas V.; Humphreys, Todd E.
Presently, the catalog of Resident Space Objects (RSOs) in Earth orbit tracked by the U.S. Space Surveillance Network (SSN) contains more than 21,000 objects. The size of the catalog continues to grow due to an increasing number of launches, improved tracking capabilities, and, in some cases, collisions. Simply propagating the states of these RSOs is a computational burden, while additionally propagating the uncertainty distributions of the RSOs and computing collision probabilities increases the computational burden by at least an order of magnitude. Tools are developed that propagate the uncertainty of RSOs with Gaussian initial uncertainty from epoch until a close approach. The number of possible elements in a Gaussian Mixture Model (GMM), provided in the form of a precomputed library, has been increased, and the strategy for multivariate problems has been formalized. The accuracy of a GMM is increased by propagating each element with a Polynomial Chaos Expansion (PCE).
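The splitting idea behind such GMM libraries can be shown in one dimension: replace a standard normal with a three-component mixture that matches its mean and variance, so that each narrower element behaves more nearly linearly under a nonlinear map. The weights and spacing below are hand-picked for the sketch (the actual libraries use optimized splits), so treat every number as a placeholder.

```python
import numpy as np

# Replace N(0,1) with a 3-component mixture matching its mean and variance.
# Equal-spaced means and a hand-picked component sigma: a toy stand-in for
# the optimized precomputed splitting libraries described above.
W = np.array([0.25, 0.50, 0.25])             # component weights
SIG = 0.5                                    # component standard deviation
D = np.sqrt((1.0 - SIG**2) / (W[0] + W[2]))  # spacing restoring unit variance
MU = np.array([-D, 0.0, D])                  # component means

def phi(x, m=0.0, s=1.0):
    """Normal probability density."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

x = np.linspace(-4, 4, 801)
mix = sum(w * phi(x, m, SIG) for w, m in zip(W, MU))
print("mixture mean:", float((W * MU).sum()))                  # 0.0
print("mixture var :", float((W * (MU**2 + SIG**2)).sum()))    # 1.0
print("max density error vs N(0,1):", float(np.abs(mix - phi(x)).max()))

# Why split: pushed through a nonlinear map, the true density is skewed,
# which no single Gaussian can represent, but narrow elements propagated
# one by one (e.g., with a PCE) can follow the distortion.
rng = np.random.default_rng(0)
xs = rng.normal(size=200_000)
y = xs + 0.2 * xs**2
skew = float(((y - y.mean()) ** 3).mean() / y.std() ** 3)
print(f"skewness of y = x + 0.2*x^2: {skew:.2f} (a Gaussian fit gives 0)")
```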
Both techniques reduce the number of function evaluations required for uncertainty propagation, and they offer a sliding scale on which accuracy can be improved at the cost of increased computation time. A parallel implementation of the accurate benchmark Monte Carlo (MC) technique has been developed on the Graphics Processing Unit (GPU); it can use samples from any uncertainty propagation technique to compute the collision probability. The GPU MC tool delivers speedups of up to two orders of magnitude compared to a serial CPU implementation. Finally, a CPU implementation of the collision probability computation using Cartesian coordinates requires orders of magnitude fewer function evaluations than an MC run. Fast computation of the inherently nonlinear growth of the uncertainty distribution in orbital mechanics, together with accurate computation of the collision probability, is essential for maintaining a future space catalog and for preventing uncontrolled growth of the debris population. The uncertainty propagation and collision probability computation methods and algorithms developed here can run on personal workstations and stand to benefit users ranging from national space surveillance agencies to private satellite operators. The developed techniques are also applicable to many general uncertainty quantification and nonlinear estimation problems.
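The benchmark Monte Carlo collision probability computation has a compact serial analogue: sample both objects' position uncertainties at the time of closest approach and count miss distances that fall inside the combined hard-body radius. The means, covariances, and radius below are hypothetical, and a real tool would first propagate the covariances to the conjunction time (and a GPU version would evaluate the same estimator in parallel).

```python
import numpy as np

rng = np.random.default_rng(11)

def mc_collision_probability(mu1, cov1, mu2, cov2, hard_body_radius,
                             n=1_000_000):
    """Benchmark-style Monte Carlo collision probability at closest
    approach: sample both position uncertainties and count how often the
    miss distance is below the combined hard-body radius."""
    r1 = rng.multivariate_normal(mu1, cov1, n)   # object 1 positions, m
    r2 = rng.multivariate_normal(mu2, cov2, n)   # object 2 positions, m
    miss = np.linalg.norm(r1 - r2, axis=1)       # sampled miss distances
    return (miss < hard_body_radius).mean()

# Hypothetical conjunction: 100 m nominal miss, 50 m per-axis sigmas,
# 20 m combined hard-body radius.
mu1 = np.zeros(3)
mu2 = np.array([100.0, 0.0, 0.0])
cov = (50.0 ** 2) * np.eye(3)
pc = mc_collision_probability(mu1, cov, mu2, cov, hard_body_radius=20.0)
print(f"estimated Pc ~ {pc:.1e}")
```

The estimator's cost scales with the sample count and the rarity of the event, which is exactly why the dissertation benchmarks against a parallel GPU implementation and develops cheaper GMM- and PCE-based alternatives.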