Browsing by Subject "Monte Carlo"
Now showing 1 - 20 of 31
Item: A fourth-order symplectic finite-difference time-domain (FDTD) method for light scattering and a 3D Monte Carlo code for radiative transfer in scattering systems (2009-06-02). Zhai, Pengwang.
When the finite-difference time-domain (FDTD) method is applied to light scattering computations, the far fields can be obtained by either a volume integration method or a surface integration method. In the first study, we investigate the errors associated with the two near-to-far-field transform methods. For a scatterer with a small refractive index, the surface approach is more accurate than its volume counterpart for computing the phase functions and extinction efficiencies; however, the volume integral approach is more accurate for computing the other scattering matrix elements. If a large refractive index is involved, the results computed from the volume integration method become less accurate, whereas the surface method retains the same order of accuracy as in the small-refractive-index case. In my second study, a fourth-order symplectic FDTD method is applied to the problem of light scattering by small particles. The total-field/scattered-field (TF/SF) technique is generalized to provide the incident wave source conditions in the symplectic FDTD (SFDTD) scheme. Numerical examples demonstrate that the fourth-order symplectic FDTD scheme substantially improves the precision of the near-field calculation. The major shortcoming of the fourth-order SFDTD scheme is that it requires more CPU time than the conventional second-order FDTD scheme if the same grid size is used. My third study is on multiple scattering theory. We develop a 3D Monte Carlo code for solving the vector radiative transfer equation, the equation governing the radiation field in a multiple-scattering medium. The impulse-response relation for a plane-parallel scattering medium is studied using this 3D Monte Carlo code. For a collimated beam source, the angular radiance distribution shows a dark region as the detector moves away from the incident point; the dark region is gradually filled in as multiple scattering increases. We have also studied the effects of the finite size of clouds. Approximating clouds of finite size as infinite layers leads to underestimation of the reflected radiance in the multiple-scattering region, especially for scattering angles around 90 degrees. The results have important applications in the field of remote sensing.

Item: A revised model for radiation dosimetry in the human gastrointestinal tract (Texas A&M University, 2004-09-30). Bhuiyan, Md. Nasir Uddin.
A new model of the adult human gastrointestinal tract (GIT) has been developed for use in estimating internal doses to the wall of the GIT and to the other organs and tissues of the body from radionuclides deposited in the lumenal contents of the five sections of the GIT: the esophagus, stomach, small intestine, upper large intestine, and lower large intestine. The wall of each section was separated from its lumenal contents. Each wall was divided into many small regions so that the histologic and radiosensitivity variations of the tissues across the wall could be distinguished. The characteristic parameters were determined from the newest information available in the literature. Each section except the stomach was subdivided into multiple subsections to include the spatiotemporal variations in shape and characteristic parameters. This new GIT was integrated into an anthropomorphic phantom representing both an adult male and a larger-than-average adult female. The current phantom contains 14 different tissue types and was coupled with the MCNP 4C Monte Carlo simulation package. The initial design and coding of the phantom and the Monte Carlo treatment employed in this study were validated against the results of Cristy and Eckerman (1987). The code was used to calculate specific absorbed fractions (SAFs) in various organs and radiosensitive tissues from uniformly distributed sources of fifteen monoenergetic photon and electron energies (10 keV to 4 MeV) in the lumenal contents of the five GIT sections. The present studies showed that the average photon SAFs to the walls differed significantly from those to the radiosensitive cells (stem cells) for energies below 50 keV; above 50 keV, the photon SAFs were almost constant across the walls. The electron SAF at the depth of the stem cells was a small fraction of the SAF routinely estimated at the contents-mucus interface. Electron studies showed that, at the depth of the stem cells, the "self-dose" for energies below 300 keV and the "cross-dose" below 2 MeV arose only from bremsstrahlung and fluorescent radiations and were not important.
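The specific absorbed fraction (SAF) results in the entry above come from a detailed MCNP 4C phantom; the sketch below only illustrates the quantity being tallied. It assumes a homogeneous, absorption-only medium, a spherical "contents" source inside a spherical-shell "wall" target, and an illustrative attenuation coefficient; all numbers and names are hypothetical and are not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
MU = 0.03                    # assumed total attenuation coefficient, 1/mm (illustrative)
R_SRC, T_WALL = 20.0, 5.0    # "contents" sphere radius and "wall" thickness, mm (illustrative)
RHO = 1.0e-3                 # assumed tissue density, g/mm^3
E0, N = 0.1, 200_000         # photon energy (MeV) and number of histories

# Emission points sampled uniformly inside the source sphere, directions isotropic
r = R_SRC * rng.random(N) ** (1.0 / 3.0)
u = rng.normal(size=(N, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
d = rng.normal(size=(N, 3)); d /= np.linalg.norm(d, axis=1, keepdims=True)
start = r[:, None] * u

# Sample the free path in the homogeneous medium and locate the first (and only) collision
s = -np.log(rng.random(N)) / MU
hit = start + s[:, None] * d
r_hit = np.linalg.norm(hit, axis=1)

e_wall = E0 * np.count_nonzero((r_hit > R_SRC) & (r_hit <= R_SRC + T_WALL))
m_wall = RHO * 4.0 / 3.0 * np.pi * ((R_SRC + T_WALL) ** 3 - R_SRC ** 3)   # wall mass, g
saf = e_wall / (N * E0 * m_wall)   # fraction of emitted energy absorbed per gram of wall
print(f"toy SAF(wall <- contents): {saf:.3e} 1/g")
```

Because every collision deposits the full photon energy, the sketch ignores scattering and secondary particles; it is meant only to show how an SAF tally is normalized.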
Item: Advanced modeling for end-of-the-roadmap CMOS and potential beyond-CMOS applications (2016-05). Crum, Dax Michael; Register, Leonard F.; Banerjee, Sanjay K.; Tutuc, Emanuel; Lee, Jack C.; MacDonald, Allan H.
End-of-the-roadmap CMOS devices are explored via particle-based ensemble semi-classical Monte Carlo (MC) methods employing quantum corrections (QCs) to address quantum confinement and degenerate carrier populations. The significance of such QCs is illustrated through simulation of n-channel III-V and Si FinFETs. Original contributions include our treatment of far-from-equilibrium degenerate statistics and QC-based modeling of surface-roughness scattering, as well as consideration of quantum-confined phonon and ionized-impurity scattering in 3D. Typical MC simulations approximate degenerate carrier populations as Fermi distributions to model the Pauli blocking (PB) of scattering to occupied final states. To allow for increasingly far-from-equilibrium, non-Fermi carrier distributions in ultra-scaled and III-V devices, we instead generate the final-state occupation probabilities used for PB by sampling the local carrier populations as a function of energy and energy valley. This process is aided by the use of fractional carriers, or sub-carriers, which minimizes classical carrier-carrier scattering. Quantum confinement effects are addressed through quantum-correction potentials (QCPs) generated from coupled Schrödinger-Poisson solvers, as is commonly done. However, we use our valley- and orientation-dependent QCPs not just to redistribute carriers in real space, or even among energy valleys, but also to calculate confinement-dependent phonon, ionized-impurity, and surface-roughness scattering rates. Collectively, these quantum effects can substantially reduce, and even eliminate, the otherwise expected benefits of the considered InGaAs FinFETs over otherwise identical Si FinFETs, despite the higher thermal velocities in InGaAs. Beyond-CMOS device concepts are also considered for future applications. Thin-film sub-5 nm magnetic skyrmions constitute an ultimate scaling alternative for beyond-CMOS data storage technologies. These robust non-collinear spin textures can be moved and manipulated by spin-polarized or non-spin-polarized electrical currents, which is extremely attractive for integration with current memory technologies. An innovative technique to detect isolated nano-skyrmions with a current-perpendicular-to-plane geometry is shown, which has immediate implications for device concepts. The mechanism is explored by studying the atomistic electronic structure of the magnetic quasiparticles. The tunneling conductance is quite sensitive to spatial variations in the electronic structure: a large atomistic conductance anisotropy of up to 20 is found for magnetic skyrmions in Pd/Fe/Ir(111) magnetic thin films. This spin-mixing magnetoresistance effect could possibly be incorporated in future magnetic storage technologies.
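The Pauli-blocking strategy described above replaces an assumed Fermi distribution with occupation probabilities sampled from the local carrier population. The sketch below is a minimal illustration of that idea for a single mesh cell and valley: occupation is estimated from a histogram of ensemble carrier energies against an assumed number of available states per bin, and a proposed scattering event is rejected with that probability. Bin edges, state counts, and helper names are hypothetical and are not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def occupation_estimate(carrier_energies, bin_edges, states_per_bin):
    """Estimate f(E) in one cell/valley as (carriers per bin) / (available states per bin)."""
    counts, _ = np.histogram(carrier_energies, bins=bin_edges)
    return np.clip(counts / states_per_bin, 0.0, 1.0)

def pauli_blocked(e_final, f_hat, bin_edges):
    """Reject a proposed scattering event with probability f(E_final) (Pauli blocking)."""
    i = np.clip(np.searchsorted(bin_edges, e_final) - 1, 0, len(f_hat) - 1)
    return rng.random() < f_hat[i]

# Illustrative numbers only: a 0-0.5 eV energy range and hypothetical state counts per bin
edges = np.linspace(0.0, 0.5, 26)
states = np.full(25, 400.0)
ensemble = rng.exponential(0.05, size=5000)      # stand-in for sampled carrier energies in the cell
f_hat = occupation_estimate(ensemble, edges, states)

proposed_final_energy = 0.02
if pauli_blocked(proposed_final_energy, f_hat, edges):
    print("scattering event rejected (final state likely occupied)")
else:
    print("scattering event accepted")
```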
Item: Analyzing risk and uncertainty for improving water distribution system security from malevolent water supply contamination events (2009-05-15). Torres, Jacob Manuel.
Previous efforts to apply risk analysis to water distribution systems (WDS) have not typically included explicit hydraulic simulations in their methodologies. A risk classification scheme is employed here for identifying vulnerable WDS components subject to an intentional water contamination event. A Monte Carlo simulation is conducted that includes stochastic diurnal demand patterns, seasonal demand, initial storage tank levels, time of day of contamination initiation, duration of the contamination event, and contaminant quantity. An investigation is conducted of exposure sensitivities to the stochastic inputs and of mitigation measures for reducing contaminant exposure. Mitigation measures include topological modifications to the existing pipe network, valve installation, and an emergency purging system. Findings show that reasonable uncertainties in model inputs produce high variability in exposure levels. It is also shown that exposure-level distributions are noticeably sensitive to population clusters within the contaminant spread area. The significant uncertainty in exposure patterns means that greater resources are needed for effective mitigation.

Item: Application of Dynamic Monte Carlo Technique in Proton Beam Radiotherapy using Geant4 Simulation Toolkit (2012-04-27). Guan, Fada.
The Monte Carlo method has been applied successfully to simulating particle transport problems. Most Monte Carlo simulation tools are static, however: they can only perform simulations of problems with fixed physics and geometry settings. Proton therapy, in clinical application, is a dynamic treatment technique. In this research, we developed a method to perform dynamic Monte Carlo simulation of proton therapy using the Geant4 simulation toolkit. A passive-scattering treatment nozzle equipped with a rotating range modulation wheel was modeled. One important application of the Monte Carlo simulation is to predict the spatial dose distribution in the target geometry. For simplification, a mathematical model of a human body is usually used as the target, but then only the average dose over a whole organ or tissue can be obtained rather than an accurate spatial dose distribution. In this research, we developed a method using MATLAB to convert the CT images of a patient into a patient voxel geometry; if the patient voxel geometry is used as the target in the Monte Carlo simulation, the accurate spatial dose distribution in the target can be obtained. The data analysis tool ROOT was used to score the simulation results during a Geant4 run and to analyze and plot the results afterward. Finally, we successfully obtained the accurate spatial dose distribution in part of a human body for a patient with prostate cancer treated with proton therapy.
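The voxelization step described above maps a patient's CT images to a voxel geometry before the Geant4 run; the thesis does this in MATLAB. The minimal sketch below, in Python rather than MATLAB, shows the usual idea: Hounsfield units are mapped to mass density through a piecewise-linear calibration and then binned into coarse material labels. The calibration points and thresholds are illustrative stand-ins, not the values used in the thesis.

```python
import numpy as np

# Illustrative HU -> density (g/cm^3) calibration points (stand-ins, not the thesis values)
HU_PTS = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 2000.0])
RHO_PTS = np.array([0.001, 0.95, 1.00, 1.07, 1.60, 2.20])

def ct_to_voxel_model(hu_volume):
    """Map a 3-D array of CT numbers to (density, material id) voxel arrays."""
    density = np.interp(hu_volume, HU_PTS, RHO_PTS)
    # Coarse material binning: 0 = air, 1 = soft tissue, 2 = bone (illustrative thresholds)
    material = np.digitize(density, bins=[0.05, 1.15])
    return density, material

# Example on a fake 4x4x4 block of CT numbers
hu = np.random.default_rng(3).uniform(-1000, 1500, size=(4, 4, 4))
rho, mat = ct_to_voxel_model(hu)
print(rho.round(2)[0, 0], mat[0, 0])
```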
Item: Characterization Methodology for Decommissioning Low and Intermediate Level Fissile Nuclide Contaminated Buried Soils and Process Piping Using Photon Counting (2014-05-03). Pritchard, Megan L.
A new approach to, and method for, the characterization of fissile-nuclide-contaminated soils and process piping has been developed and implemented for low- and intermediate-level wastes, using new calibration bases for photon counting. The method has been validated by integrating the capabilities of MCNP5 and ISOCS for a LaBr scintillator detector in combination with known radioactive standards. In addition, the developed methods treat nuclear safety as the priority while retaining realistic fissile-mass and enrichment estimation techniques. The impact of a quick, portable, non-destructive assay process on the decommissioning and remediation arena is extremely valuable: traditional methods have inherent limitations in time consumption, resources, stability, and rigidity. In addition to optimizing a material blending and storage program, gaining a real-time understanding of the nature of fissile material prior to disturbance is invaluable to a nuclear safety program and culture. In this dissertation, detailed detector-waste models were developed and used to create a quick uranium mass and enrichment estimation process that takes advantage of the resolution and discrimination capabilities of the LaBr-equipped InSpector 1000 instrument. The analysis takes into account multiple scenarios that may be encountered during decommissioning and remediation of a fuel fabrication and buried nuclear waste facility, while keeping nuclear safety controls in mind. As an inherent part of the process, the models were validated through a series of code-to-software and software-to-standard benchmarking procedures, which substantiated use of the detector for the derived purposes and ensured that the Monte Carlo-based calibration approach was conservative compared with other methods. The scenarios analyzed for the calibration basis were selected based on historical knowledge and in-field experience at the Westinghouse Hematite Decommissioning Project. The techniques developed in this dissertation offer a new characterization method for fissile material quantity and enrichment with a portable, passive, non-destructive gamma assay system that does not rely on continual macroscopic system analysis. In addition, the method provides early detection of large quantities of fissile material prior to exhumation or disturbance, enhancing nuclear safety processes. This places the first priority on nuclear and radiological safety while preserving the time- and money-saving aspects of production-based projects.

Item: Characterization of a Stochastic Procedure for the Generation and Transport of Fission Fragments within Nuclear Fuels (2013-04-15). Hackemack, Michael Wayne.
With the ever-increasing demands of the nuclear power community to extend fuel cycles and overall core lifetimes in a safe and economic manner, it is becoming ever more necessary to extend the working knowledge of nuclear fuel performance. From the atomistic to the macroscopic level, great morphological changes occur within the fuel over its lifetime. The main initial damaging events, produced by fuel recoils from fast neutrons and by fission-fragment spiking, lead to the onset of grain growth and fuel restructuring. It is therefore desirable to have a more detailed understanding of the initial events leading to fuel morphology changes at the atomistic level. This is difficult to achieve for fission fragments, however, because of the wide variability of their species (charge, mass, and energy) and the heavy averaging of their relative yields in the nuclear data files. This work is our first iteration at developing a general methodology to characterize a procedure, based on Monte Carlo principles, for generating individual fission-event result channels and analyzing their specific response in the fuel. We used the nuclear reaction simulation tool TALYS to generate energy-dependent fission-fragment yield distributions for different fissile/fissionable isotopes. These distributions can then be combined with the fuel isotopics and a neutron energy spectrum to generate a fission-reaction-rate-averaged distribution of the fission-fragment yields. We then used Monte Carlo sampling to generate the result channels of individual fission events, using the Q-value of the prompt fission system to either accept or reject each candidate. The simulation tool Transport of Ions in Matter (TRIM) was used to characterize the general response of the fission-fragment species within uranium dioxide (UO2), including range, energy loss, displacements, and recoils. These responses were then correlated, which allowed quick calculation of the response of the individual fission-fragment species generated from the Monte Carlo sampling. As an example of this strategy, we calculated the response in a PWR fuel pin, with MCNP used to generate a high-fidelity neutron energy spectrum.
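The sampling step in the entry above draws individual fission result channels from TALYS-derived yield distributions and uses the Q-value of the prompt fission system to accept or reject candidates. The sketch below shows only the accept/reject pattern, with a made-up two-fragment yield table, a Gaussian total-kinetic-energy draw, and an assumed Q-value; none of the numbers or names come from the work itself.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical fragment-pair table: (light A, heavy A) with relative yields (illustrative only)
PAIRS = [(95, 139), (100, 134), (105, 129)]
YIELDS = np.array([0.35, 0.40, 0.25])
Q_VALUE = 200.0                      # assumed prompt energy release, MeV
TKE_MEAN, TKE_SIG = 170.0, 10.0      # assumed total-kinetic-energy distribution, MeV

def sample_fission_channel(max_tries=1000):
    """Draw a fragment pair and TKE; accept only if the channel fits within the Q-value."""
    for _ in range(max_tries):
        light, heavy = PAIRS[rng.choice(len(PAIRS), p=YIELDS)]
        tke = rng.normal(TKE_MEAN, TKE_SIG)
        if 0.0 < tke <= Q_VALUE:     # reject energetically inconsistent draws
            return light, heavy, tke
    raise RuntimeError("no acceptable channel sampled")

print(sample_fission_channel())
```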
Item: Continuous reservoir model updating using an ensemble Kalman filter with a streamline-based covariance localization (Texas A&M University, 2007-04-25). Arroyo Negrete, Elkin Rafael.
This work presents a new approach that combines the comprehensive capabilities of the ensemble Kalman filter (EnKF) with flow-path information from streamlines to eliminate and/or reduce some of the problems and limitations of using the EnKF for history matching reservoir models. The recent use of the EnKF for data assimilation and for assessing uncertainties in future forecasts in reservoir engineering appears promising: the EnKF provides an efficient way of incorporating any type of production data or time-lapse seismic information. However, using the EnKF for history matching comes with its share of challenges and concerns. Overshooting of parameters leading to loss of geologic realism, a possible increase in the material balance errors of the updated phase(s), and limitations associated with non-Gaussian permeability distributions are some of its most critical problems. A larger ensemble size may mitigate some of these problems but is prohibitively expensive in practice. We present a streamline-based conditioning technique that can be implemented with the EnKF to eliminate or reduce the magnitude of these problems while allowing a reduced ensemble size, thereby leading to significant time savings in field-scale implementation. Our approach involves no extra computational cost and is easy to implement. Additionally, the final history-matched model tends to preserve most of the geological features of the initial geologic model. An overview of the procedure is provided that enables this approach to be incorporated into current EnKF implementations. Our procedure uses the streamline path information to condition the covariance matrix in the Kalman update. We demonstrate the power and utility of our approach with synthetic examples and a field case. Our results show that, with the conditioning technique presented in this thesis, the overshooting/undershooting problems disappear and the limitations associated with non-Gaussian distributions are reduced. Finally, an analysis of the scalability of a parallel implementation of our computer code is given.
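The conditioning step above modifies the covariance used in the Kalman update with streamline-derived flow-path information. The sketch below shows a generic ensemble Kalman analysis step with perturbed observations in which an element-wise localization mask is applied to the state-observation cross-covariance; how such a mask would actually be built from streamlines is not shown, and all array names and sizes are hypothetical.

```python
import numpy as np

def enkf_update(X, H, d, obs_var, mask, rng):
    """One EnKF analysis step; `mask` (n_state x n_obs) localizes the cross-covariance."""
    n_state, n_ens = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    Y = H @ X
    Yp = Y - Y.mean(axis=1, keepdims=True)         # predicted-observation anomalies
    C_xy = (A @ Yp.T) / (n_ens - 1)                # state-observation cross-covariance
    C_yy = (Yp @ Yp.T) / (n_ens - 1) + np.diag(obs_var)
    K = (mask * C_xy) @ np.linalg.inv(C_yy)        # localized Kalman gain
    D = d[:, None] + rng.normal(0, np.sqrt(obs_var)[:, None], size=Y.shape)  # perturbed obs
    return X + K @ (D - Y)

rng = np.random.default_rng(5)
X = rng.normal(size=(50, 30))                      # 50 grid-cell parameters, 30 ensemble members
H = np.zeros((2, 50)); H[0, 10] = H[1, 40] = 1.0   # toy observation operator: two observed cells
mask = rng.random((50, 2))                         # stand-in for a streamline-based mask
Xa = enkf_update(X, H, np.array([0.5, -0.2]), np.array([0.01, 0.01]), mask, rng)
print(Xa.shape)
```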
Item: Development of a Vapor Cloud Explosion Risk Analysis Tool Using Exceedance Methodology (2012-10-19). Alghamdi, Salem.
In development projects, designers should take into consideration the possibility of a vapor cloud explosion in the siting and design of a process plant from day one. The most important decisions, pertaining to the location of different process areas, the separation between them, the location of occupied buildings, and the overall layout, may be made at the conceptual stage of the project. During the detailed design engineering stage, the final calculation of gas explosion loads is an important activity; however, decisions related to the layout and location of occupied buildings made at this stage could be very costly. Therefore, at the conceptual phase of a development project for a hydrocarbon facility, it is helpful to get a picture of the possible vapor cloud explosion loads to be used in studying various options. This thesis presents the analytical parameters used in vapor cloud explosion risk analysis and proposes a model structure for analyzing vapor cloud explosion risks to buildings based on an exceedance methodology. The methodology was implemented in a computer program developed to support this thesis. The proposed model considers all possible gas release scenarios through the use of Monte Carlo simulation, and the risk of vapor cloud explosions is displayed using exceedance curves. The resulting model provides a predictive tool for vapor cloud explosion problems at the early stages of development projects, particularly for siting occupied buildings in onshore hydrocarbon facilities. It can also be used as a quick analytical tool for investigating various aspects of vapor cloud explosions. The model has been applied to a case study, a debutanizer process unit, to explore the different alternatives for locating a building near the facility. The results from the model were compared with the results of other existing software to assess the model's validity; they show that the model can effectively examine the risk of vapor cloud explosions.

Item: Dosimetry of Y-90 Liquid Brachytherapy in a Dog with Osteosarcoma Using PET/CT (2011-08-08). Zhou, Jingjie.
A novel Y-90 liquid brachytherapy strategy is currently being studied for the treatment of osteosarcoma using a preclinical translational model in dogs to assess its potential efficacy and toxicity. In this study, dosimetry calculations are performed for Y-90 liquid brachytherapy in a dog with osteosarcoma using the Geant4 Monte Carlo code. A total of 611.83 MBq of the Y-90 radiopharmaceutical is administered via direct injections, and the in vivo distribution of Y-90 is assessed using a time-of-flight (TOF) PET/CT scanner. A patient-specific geometry is built using anatomical data obtained from the CT images. The material properties of the tumor and surrounding tissues are calculated based on a CT-number-to-electron-density calibration. The Y-90 distribution is sampled in Geant4 from the PET images using a collapsing 3-D rejection technique to determine the decay sites. Dose distributions calculated in the tumor bed and surrounding tissues show significant heterogeneity, with multiple hot spots at the injection sites. Dose-volume histograms show that about 33.9 percent of bone and tumor and 70.2 percent of bone marrow and trabecular bone receive a total dose over 200 Gy, and about 3.2 percent of bone and tumor and 31.0 percent of bone marrow and trabecular bone receive a total dose over 1000 Gy. Y-90 liquid brachytherapy has the potential to be used as an adjuvant therapy or for palliation. Future work includes evaluation of the pharmacokinetics of the Y-90 radiopharmaceutical, calibration of PET/CT scanners for direct quantitative assessment of Y-90 activity concentration, and assessment of the efficacy of the Y-90 liquid brachytherapy strategy.
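The decay sites in the Y-90 study above are sampled in Geant4 from the PET activity map using a "collapsing 3-D rejection technique". The sketch below shows plain 3-D rejection sampling from a voxelized activity array, not necessarily the collapsing variant used in the thesis, with a fabricated activity map and an assumed voxel size.

```python
import numpy as np

rng = np.random.default_rng(6)
activity = rng.random((32, 32, 32)) ** 4      # fabricated PET activity map (arbitrary units)
voxel_mm = np.array([2.0, 2.0, 2.0])          # assumed voxel dimensions

def sample_decay_site():
    """Rejection-sample a decay position with probability proportional to voxel activity."""
    a_max = activity.max()
    while True:
        idx = tuple(rng.integers(0, n) for n in activity.shape)   # candidate voxel (i, j, k)
        if rng.random() < activity[idx] / a_max:                  # accept in proportion to activity
            return (np.array(idx) + rng.random(3)) * voxel_mm     # uniform jitter inside the voxel

print(sample_decay_site())
```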
Item: Experimental and computational study of protein interactions with lipid nanodomains (2013-05). Qiu, Liming; Cheng, Kelvin K.; Vaughn, Mark W.; Sanati, Mahdi; Khare, Rajesh; Quitevis, Edward L.
Protein-lipid interactions are relevant to understanding a wide variety of biological phenomena. In particular, the human beta-amyloid protein is closely related to the pathogenesis of Alzheimer's disease. Because of its high propensity to self-aggregate, the beta-amyloid protein is difficult to study experimentally. Molecular dynamics simulation can provide atomistic details of protein-lipid interactions and is therefore an important theoretical tool for investigating these subtle interactions and offering insight into the pathogenesis of Alzheimer's disease. In this dissertation, I study protein-lipid interactions in several systems with different lipid compositions and protein conformations. I developed computational tools to quantitatively analyze lipid perturbations due to protein interactions, since the neurotoxicity of the beta-amyloid protein is commonly believed to act through perturbation of the lipid membrane. I found that, for a beta-amyloid dimer on the surface of a lipid bilayer, the perturbation effect of the protein is correlated with the degree of disorder of the protein in terms of its secondary structure. For a system in which a beta-amyloid protein was partially inserted into the bilayer, the protein insertion rate was regulated by both the secondary structure of the protein and the lipid environment; in particular, a scaling relation between the insertion rate and the degree of disorder was found. Although molecular dynamics simulation is a powerful tool for studying atomistic protein-lipid interactions, it is not efficient at sampling the free-energy landscape of the system, so results are biased by the initial structure. I developed a multiscale molecular simulation scheme to increase the efficiency of free-energy landscape sampling by switching the system between different spatial resolutions, i.e., atomistic and coarse-grained representations. Using this method, I discovered a novel protein-lipid orientation, which has implications for understanding the biochemical pathway of the protein as well as for developing therapeutic interventions. Finally, I also developed a Monte Carlo method to estimate molecular volumes accurately at the atomistic scale. This method is directly applicable to lipid membrane systems with heterogeneous components, including proteins; it is a useful tool not only for investigating protein-lipid interactions but also for calibrating force-field parameters for classical molecular dynamics simulations.
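The volume-estimation method mentioned at the end of the entry above lends itself to a simple hit-or-miss illustration: throw uniform points into a box bounding the atoms and count the fraction landing inside any atomic sphere. The coordinates and radii below are invented; the dissertation applies the idea to full lipid/protein systems.

```python
import numpy as np

rng = np.random.default_rng(7)
centers = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [0.6, 1.0, 0.0]])   # invented atom centers (Angstrom)
radii = np.array([1.5, 1.5, 1.2])                                          # invented van der Waals radii (Angstrom)

def hit_or_miss_volume(centers, radii, n=200_000):
    """Monte Carlo volume of a union of spheres (hit-or-miss over a bounding box)."""
    lo = (centers - radii[:, None]).min(axis=0)
    hi = (centers + radii[:, None]).max(axis=0)
    pts = rng.uniform(lo, hi, size=(n, 3))
    d2 = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # squared distance to each atom
    inside = (d2 <= radii ** 2).any(axis=1)
    return inside.mean() * np.prod(hi - lo)

print(f"estimated molecular volume: {hit_or_miss_volume(centers, radii):.2f} cubic Angstrom")
```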
Item: Exponentially-convergent Monte Carlo for the One-dimensional Transport Equation (2014-04-23). Peterson, Jacob Ross.
An exponentially convergent Monte Carlo (ECMC) method is analyzed using the one-group, one-dimensional, slab-geometry transport equation. The method is based upon the use of a linear discontinuous finite-element trial space in position and direction to represent the transport solution. A space-angle h-adaptive algorithm is employed to maintain exponential convergence after stagnation occurs due to inadequate trial-space resolution. In addition, a biased sampling algorithm is used to adequately converge singular problems. Computational results are presented demonstrating the efficacy of the new approach. We tested our ECMC algorithm against standard Monte Carlo and found the ECMC method to be generally much more efficient: for a manufactured solution the ECMC algorithm was roughly 200 times more effective than standard Monte Carlo, and for a highly singular pure-attenuation problem it was roughly 4000 times more effective.

Item: Methods for Composing Tradeoff Studies under Uncertainty (2012-10-19). Bily, Christopher.
Tradeoff studies are a common part of engineering practice. Designers conduct tradeoff studies to improve their understanding of how various design considerations relate to one another. Generally, a tradeoff study involves a systematic multi-criteria evaluation of various alternatives for a particular system or subsystem; after evaluating these alternatives, designers eliminate those that perform poorly under the given criteria and explore more carefully those that remain. The capability to compose preexisting tradeoff studies is advantageous to the designers of engineered systems such as aircraft, military equipment, and automobiles. Such systems are composed of many subsystems for which prior tradeoff studies may exist, and system designers could conceivably explore system-level tradeoffs more quickly by leveraging this knowledge. For example, automotive systems engineers could quickly combine tradeoff studies from the engine and transmission subsystems to produce a comprehensive tradeoff study for the power train. This level of knowledge reuse is in keeping with good systems engineering practice. However, existing procedures for generating tradeoff studies under uncertainty involve assumptions that preclude engineers from composing them in a mathematically rigorous way. In uncertain problems, designers can eliminate inferior alternatives using stochastic dominance, which compares the probability distributions defined in the design criteria space. Although this is well founded mathematically, the procedure can be computationally expensive because it typically entails a sampling-based uncertainty propagation method for each alternative being considered. This thesis describes two novel extensions that permit engineers to compose preexisting subsystem-level tradeoff studies under uncertainty into mathematically valid system-level tradeoff studies and to efficiently eliminate inferior alternatives through intelligent sampling. The approaches are based on three key ideas: the use of stochastic dominance methods to enable the tradeoff evaluation when the design criteria are uncertain, the use of parameterized efficient sets to enable reuse and composition of subsystem-level tradeoff studies, and the use of statistical tests in dominance testing to reduce the number of behavioral model evaluations. The approaches are demonstrated in the context of a tradeoff study for a motor vehicle.
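The elimination step in the tradeoff-study entry above relies on stochastic dominance between sampled distributions of the design criteria. The sketch below checks first-order stochastic dominance between two alternatives from Monte Carlo samples of a single larger-is-better criterion; the thesis additionally uses statistical tests and composition machinery that are not reproduced here, and the sample distributions are hypothetical.

```python
import numpy as np

def dominates_first_order(samples_a, samples_b):
    """True if A first-order stochastically dominates B for a larger-is-better criterion:
    empirical F_A(x) <= F_B(x) everywhere, with strict inequality somewhere."""
    grid = np.sort(np.concatenate([samples_a, samples_b]))
    F_a = np.searchsorted(np.sort(samples_a), grid, side="right") / len(samples_a)
    F_b = np.searchsorted(np.sort(samples_b), grid, side="right") / len(samples_b)
    return bool(np.all(F_a <= F_b) and np.any(F_a < F_b))

rng = np.random.default_rng(8)
perf_a = rng.normal(12.0, 1.0, 5000)   # hypothetical sampled performance of alternative A
perf_b = rng.normal(9.0, 1.0, 5000)    # hypothetical sampled performance of alternative B
print("A dominates B:", dominates_first_order(perf_a, perf_b))
```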
Item: Modeling observed target localization error using bistatic reflection (2016-08). Simms, Andrew Paul; Bovik, Alan C.; Mitchell, Jerry.
Bistatic sonar involves the transmission of a signal from a source, reflection of the signal from a target, and reception of the signal by a receiver. Real-world environmental errors make target localization for underwater bistatic sonar a difficult task. The bistatic equation is used to calculate the range between the receiver and the target location using geometric information, the travel time of the signal from the source to the receiver, and the estimated underwater sound speed; the receiver-to-target range and bearing then allow the receiving ship to observe where the target ship is located. Because of the complexity of the bistatic equation, these real-world environmental errors must be modeled with computer simulations to improve target localization for bistatic sonar. In this thesis, Monte Carlo simulations are used to model bistatic sonar for two different real-world environments using three likely error-input scenarios, and to determine the variables that have the most influence on target localization error. (A minimal numerical sketch of a bistatic-range Monte Carlo appears after the next two entries.)

Item: Modeling semiconductor performance and yield with empirical data using Monte Carlo methods (Texas Tech University, 2009-08). Wilde, Jason A.
In this dissertation, a Monte Carlo semiconductor performance model based on empirical relationships is introduced. This novel approach results in a macromodel with very low input dimension, built from a small training sample, and is shown to have equal or better precision and accuracy than a typical high-dimension multivariate regression model. To compensate for the reduced input dimension, the regression error, which is typically neglected, is characterized and used as an input to a Monte Carlo model. This error-modeling technique intentionally injects error into the model in an attempt to improve precision on long-term forecasts. In addition, these techniques allow a sensitivity analysis and forecast to be made from transistor targets only, meaning that no test lots are required to tune the process. The techniques described in this dissertation may also have other applications, because they can be applied to any situation that requires highly characterized outputs based on a small sample of inputs from a much larger population.

Item: Monte Carlo Electromagnetic Cross Section Production Method for Low Energy Charged Particle Transport Through Single Molecules (2013-08-13). Madsen, Jonathan R.
The present state of modeling radiation-induced effects at the cellular level neglects the microscopic inhomogeneity that the non-aqueous contents impart to the nucleus, approximating the entire cellular nucleus as a homogeneous medium of water. Charged-particle track-structure calculations that rely on this principle of superposition thereby neglect approximately 30% of the molecular variation within the nucleus. To truly understand what happens when biological matter is irradiated, charged-particle track-structure calculations need detailed knowledge of the secondary electron cascade, resulting from interactions not only with the primary biological component, water, but also with the non-aqueous contents, down to very low energies. This work presents developments toward a novel approach, which to our knowledge has not been attempted before, to reducing reliance on the homogeneous-water approximation. The purpose of our work is to develop a completely self-consistent computational method for predicting molecule-specific ionization, excitation, and scattering cross sections in the very-low-energy regime that can be applied in a condensed-history Monte Carlo track-structure code. The methodology begins with the calculation of a solution to the many-body Schrödinger equation and proceeds to use Monte Carlo methods to calculate the perturbations in the internal electron field that determine the aforementioned processes. Results are computed for molecular water in the form of linear energy loss, secondary electron energies, and ionization-to-excitation ratios, and are compared against the low-energy predictions of the Geant4-DNA physics package of the Geant4 simulation toolkit.
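For the bistatic-sonar entry above ("Modeling observed target localization error using bistatic reflection"), one standard form of the bistatic range relation, consistent with the description in that entry, follows from the law of cosines: with baseline L between source and receiver, total travel time t, sound speed c, and angle theta at the receiver between the target bearing and the direction to the source, the receiver-to-target range is r = ((c t)^2 - L^2) / (2 (c t - L cos theta)). The Monte Carlo sketch below perturbs c, t, and theta with made-up error levels to produce a range-error distribution; it is a generic illustration, not the thesis model or its environments.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy geometry (meters): source at the origin, receiver on the x-axis, target off to the side
SRC, RCV, TGT = np.array([0.0, 0.0]), np.array([4000.0, 0.0]), np.array([2500.0, 3000.0])
C_TRUE = 1500.0                                      # true sound speed, m/s
L = np.linalg.norm(RCV - SRC)
t_true = (np.linalg.norm(TGT - SRC) + np.linalg.norm(TGT - RCV)) / C_TRUE

def bistatic_range(c, t, theta):
    """Receiver-to-target range from total travel time, baseline, and receiver angle theta."""
    D = c * t
    return (D ** 2 - L ** 2) / (2.0 * (D - L * np.cos(theta)))

# True angle at the receiver between the directions to the source and to the target
v_src, v_tgt = SRC - RCV, TGT - RCV
theta_true = np.arccos(v_src @ v_tgt / (np.linalg.norm(v_src) * np.linalg.norm(v_tgt)))

N = 100_000
c = C_TRUE + rng.normal(0, 10.0, N)                  # assumed sound-speed error, m/s
t = t_true + rng.normal(0, 0.005, N)                 # assumed timing error, s
th = theta_true + rng.normal(0, np.radians(1.0), N)  # assumed bearing error

r = bistatic_range(c, t, th)
err = np.abs(r - np.linalg.norm(TGT - RCV))          # range error only; cross-range error not shown
print(f"median range error: {np.median(err):.1f} m, 95th percentile: {np.percentile(err, 95):.1f} m")
```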
Item: A Monte Carlo investigation of multilevel modeling in meta-analysis of single-subject research data (2011-08). Mulloy, Austin Madison; Beretvas, Susan Natasha; O'Reilly, Mark F.; Zuna, Nina I.; Falcomata, Terry; Pituch, Keenan.
Multilevel modeling represents a potentially viable method for meta-analyzing single-subject research, but questions remain concerning its methodological properties with regard to the characteristics of single-subject data. For this dissertation, Monte Carlo methods were used to investigate the properties of a three-level model (with a quadratic equation at level 1) under three different level-1 error specifications: different variance components with covariances of 0; lag-1 autoregressive covariance structures; and separate error terms for each phase, with different variance components and covariances of 0. Data for simulated subjects were generated to have characteristics typical of published single-subject data (e.g., typical variances and magnitudes of effect). Samples were simulated for conditions that varied in the number of data points per phase, number of subjects per study, number of studies meta-analyzed, level of autocorrelation in residuals, and continuity of variance across phases. Outcome variables examined included rates of convergence of the analyses, power for statistical tests of fixed effects, and relative parameter bias of estimates of fixed effects, random-effects variance components, and autocorrelation estimates. Convergence rates were 100% for all level-1 error specifications and data conditions. Power for statistical tests of fixed effects was adequate when 10 or more data points were generated per phase and 60 or more total subjects were included in a meta-analysis. The relative biases of estimates of fixed effects had limited associations with the number of data points per phase, the level of autocorrelation, and the continuity/discontinuity of variance across phases. Random-effects variance components were frequently biased, and the associations between relative bias and data conditions varied by random effect. Finally, autocorrelation estimates were biased in all conditions for which autocorrelation was generated. Results are discussed with regard to study strengths and limitations and their implications for the meta-analysis of single-subject data and for primary single-subject research.
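The simulation study above generates single-subject data with lag-1 autoregressive (AR(1)) errors and evaluates relative parameter bias of the resulting estimates. The compact sketch below shows only those two ingredients: generating AR(1) residuals around a known phase effect and computing relative bias of a simple (non-multilevel) estimator across replications. It is not the three-level model analyzed in the dissertation, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(10)
TRUE_EFFECT, PHI, N_PER_PHASE, REPS = 2.0, 0.4, 10, 2000

def ar1_errors(n, phi, sigma=1.0):
    """Generate lag-1 autoregressive residuals e_t = phi * e_{t-1} + white noise."""
    e = np.zeros(n)
    e[0] = rng.normal(0, sigma / np.sqrt(1 - phi ** 2))   # stationary starting value
    for t in range(1, n):
        e[t] = phi * e[t - 1] + rng.normal(0, sigma)
    return e

estimates = np.empty(REPS)
for rep in range(REPS):
    baseline = ar1_errors(N_PER_PHASE, PHI)
    treatment = TRUE_EFFECT + ar1_errors(N_PER_PHASE, PHI)
    estimates[rep] = treatment.mean() - baseline.mean()   # naive phase-effect estimate

relative_bias = (estimates.mean() - TRUE_EFFECT) / TRUE_EFFECT
print(f"relative bias of the naive estimator: {relative_bias:+.3f}")
```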
Item: Monte Carlo simulations of solid-walled proportional counters with different site size for HZE radiation (2009-05-15). Wang, Xudong.
Characterizing high-Z, high-energy (HZE) particles in cosmic radiation is important for the study of the equivalent dose to astronauts. Low-pressure tissue-equivalent proportional counters (TEPCs) are routinely used to evaluate radiation exposures in space. A multiple-detector system composed of three TEPCs of different sizes was simulated using the Monte Carlo software toolkit Geant4, and the ability of the set of detectors to characterize HZE particles, as well as to measure dose, was studied. HZE particles produce energetic secondary electrons (delta rays) that carry a significant fraction of the energy lost by the primary ion away from its track. The range and frequency of these delta rays depend on the velocity and charge of the primary ion, so measurements of lineal energy spectra in sites of different size will differ because of these delta-ray events and may provide information to characterize the incident primary particle. Monte Carlo calculations were carried out in Geant4 for solid-walled proportional detectors with unit-density site diameters of 0.1, 0.5, and 2.5 µm in a uniform HZE particle field. The simulated spherical detectors have 2-mm-thick tissue-equivalent walls. Uniform beams of 1 GeV/n, 500 MeV/n, and 100 MeV/n 56Fe, 28Si, 16O, 4He, and protons were used to bombard the detectors, and the site-size effect of the detector system was analyzed from the calculation results. The results show that the y vs. yf(y) spectrum differs significantly as a function of site size. From the spectra, as well as from the calculated mean lineal energy, the simulated particles can be characterized. We predict that the detector system is capable of characterizing HZE particles in a complex field, which suggests that it may be practical to use such a system to measure the average particle velocity as well as the absorbed dose delivered by HZE particles in space. The parameters used in the simulation are also good references for detector construction.

Item: Near infrared laser propagation and absorption analysis in tissues using forward and inverse Monte Carlo methods (2014-05). Nasouri, Babak; Berberoglu, Halil.
For understanding the mechanisms of low-level laser/light therapy (LLLT), accurate knowledge of light interaction with tissue is necessary. For a successful therapy, laser energy needs to be delivered effectively to the target location, which, depending on the application, can be within the various layers of the skin or deeper. The energy deposition is controlled by input parameters such as wavelength, beam profile, and laser power, which should be selected appropriately. This thesis reports a numerical study of laser penetration through human skin and provides a guide for selecting the wavelength, beam profile, and laser power for therapeutic applications. First, human skin is modeled as a three-layer participating medium, namely the epidermis, dermis, and subcutaneous tissue, with geometrical and optical properties obtained from the literature. Both refraction and reflection are taken into account at the boundaries according to Snell's law and the Fresnel relations. A three-dimensional, multi-layer, reduced-variance Monte Carlo tool was then implemented to simulate laser penetration and absorption through the skin. Local profiles of light penetration and volumetric absorption density were simulated for uniform as well as Gaussian-profile beams with different spreads at 155 mW average power over the spectral range from 1000 nm to 1900 nm. The results showed that lasers within this wavelength range could be used to effectively and safely deliver energy to specific skin layers, as well as to achieve large penetration depths for treating deep tissues, without causing skin damage. In addition, by changing the beam profile from uniform to Gaussian, the local volumetric dosage could be increased as much as three times for otherwise similar lasers. In the second part of this thesis, a three-dimensional, single-layer, reduced-variance inverse Monte Carlo method was developed to find the optical properties of the skin from experimental values of transmittance and reflectance. The results showed that both transmittance and reflectance scale well with transport optical thickness. Moreover, the penetration depth was shown to be highly sensitive to the laser wavelength, varying from 1.7 mm to 4.5 mm.
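The forward model in the laser-therapy entry above is a three-layer, reduced-variance Monte Carlo simulation of light in skin. The sketch below is a drastically simplified single-layer version with isotropic scattering, the standard absorption-weighting trick (deposit w * mu_a/mu_t at each interaction), illustrative optical coefficients, and no refractive-index mismatch; it conveys only the flavor of such a code, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(11)
MU_A, MU_S = 0.05, 2.0                 # illustrative absorption/scattering coefficients, 1/mm
MU_T = MU_A + MU_S
THICKNESS, N_BINS, N_PHOTONS = 5.0, 50, 5_000
absorbed = np.zeros(N_BINS)            # depth-resolved absorbed weight

for _ in range(N_PHOTONS):
    pos, direction, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0   # launch along +z
    while w > 1e-4:                                                   # simple cutoff, no roulette
        pos = pos + direction * (-np.log(rng.random()) / MU_T)        # free path to next event
        if pos[2] < 0.0 or pos[2] > THICKNESS:                        # photon left the slab
            break
        k = min(int(pos[2] / THICKNESS * N_BINS), N_BINS - 1)
        absorbed[k] += w * MU_A / MU_T                                # absorption weighting
        w *= MU_S / MU_T
        direction = rng.normal(size=3)                                # isotropic scattering direction
        direction /= np.linalg.norm(direction)                        # (a skin model would use Henyey-Greenstein)

print("fraction of launched energy absorbed in the slab:", round(absorbed.sum() / N_PHOTONS, 3))
```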
Item: Optimization of a petroleum producing assets portfolio: development of an advanced computer model (2009-05-15). Aibassov, Gizatulla.
Portfolios of contemporary integrated petroleum companies consist of a few dozen Exploration and Production (E&P) projects that are usually spread all over the world. It is therefore important not only to manage individual projects by themselves, but also to take into account the interactions between projects in order to manage whole portfolios. This study is a step-by-step presentation of a method, based on Markowitz's portfolio theory, for optimizing portfolios of risky petroleum E&P projects. The method uses the covariance matrix of the projects' expected returns to optimize the portfolio. The developed computer model consists of four major modules. The first module generates petroleum price forecasts; in our implementation we used a price forecasting method based on sequential Gaussian simulation. The second module, Monte Carlo, simulates the distribution of reserves and a set of expected production profiles. The third module calculates expected after-tax net cash flows and estimates performance indicators for each realization, thus yielding a distribution of return for each project. The fourth module estimates the covariance between the return distributions of the individual projects and compiles them into portfolios; using its results, analysts can make their portfolio selection decisions. Thus, an advanced computer model for optimizing a portfolio of petroleum assets has been developed. The model is implemented in the MATLAB computational environment and allows optimization of the portfolio using three different return measures (NPV, GRR, PI). The model has been successfully applied to a set of synthesized projects, yielding reasonable solutions in all three return planes.
Analysis of the obtained solutions has shown that the computer model is robust and flexible in terms of input data and output results. Its modular architecture allows the future inclusion of complementary "blocks" that may solve optimization problems using measures of risk and return other than those considered here, as well as different input data formats.
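The portfolio step in the entry above combines per-project return distributions through a covariance matrix in the spirit of Markowitz. In the sketch below, fabricated Monte Carlo NPV realizations stand in for the output of the price, reserves, and cash-flow modules; the covariance matrix is estimated and random long-only portfolio weights are scanned for the best mean-risk trade-off. It mirrors the structure of the fourth module only loosely and is not the MATLAB model itself.

```python
import numpy as np

rng = np.random.default_rng(12)

# Fabricated Monte Carlo NPV realizations for four projects (rows = realizations), $MM
npv = rng.multivariate_normal(mean=[120, 80, 60, 150],
                              cov=[[900, 200, 50, 300],
                                   [200, 400, 30, 100],
                                   [50, 30, 250, 40],
                                   [300, 100, 40, 1600]], size=5000)

mu = npv.mean(axis=0)                  # expected NPV per project
sigma = np.cov(npv, rowvar=False)      # covariance matrix between project returns
risk_aversion = 1.0                    # illustrative trade-off parameter

best = None
for _ in range(20_000):                # crude scan over random long-only weight vectors
    w = rng.random(4); w /= w.sum()
    ret, risk = w @ mu, np.sqrt(w @ sigma @ w)
    score = ret - risk_aversion * risk
    if best is None or score > best[0]:
        best = (score, w, ret, risk)

_, w, ret, risk = best
print("weights:", w.round(2), "| expected NPV:", round(ret, 1), "| std dev:", round(risk, 1))
```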