Browsing by Subject "streamline"
Now showing 1 - 5 of 5
Item: A rigorous compressible streamline formulation for black oil and compositional simulation (Texas A&M University, 2007-04-25) Osako, Ichiro

In this study we generalize, for the first time, streamline models to compressible flow using a rigorous formulation, while retaining most of their computational advantages. Our new formulation is based on three major elements and requires only minor modifications to existing streamline models. First, we introduce a relative density for the total fluids along the streamlines. This density captures the change in fluid volume with pressure and can be conveniently and efficiently traced along streamlines; thus, we simultaneously compute time of flight and volume changes along streamlines. Second, we incorporate a density-dependent source term in the streamline saturation/composition conservation equation to account for compressibility effects. Third, the relative density, fluid volumes, and the time-of-flight information are used to incorporate cross-streamline effects via pressure updates and remapping of saturations. Our proposed approach preserves the 1-D nature of the conservation calculations and all the associated advantages of the streamline approach. The conservation calculations are fully decoupled from the underlying grid and can be carried out using large time steps without grid-based stability limits. We also extend streamline simulation to compositional modeling, including compressibility effects. Given the favorable computational scaling properties of streamline models, the potential advantage for compositional simulation can be even more compelling. Although several papers have discussed compositional streamline formulations, they all suffer from a major limitation, particularly for compressible flow: all previous works assume, either explicitly or implicitly, that the divergence of the total flux along streamlines is negligible.
This is not only incorrect for compressible flow but also introduces an inconsistency between the pressure and conservation equations. We examine the implications of these assumptions for the accuracy of compositional streamline simulation using a novel and rigorous treatment of compressibility. We demonstrate the validity and practical utility of our approach using synthetic and field examples and comparison with a finite-difference simulator. During validation of the compositional model, we found that finer segment discretization along streamlines is important; we therefore introduce an optimal coarsening of segments that minimizes flash calculations on each segment while retaining the accuracy of the finer discretization.

Item: An efficient Bayesian formulation for production data integration into reservoir models (Texas A&M University, 2005-02-17) Leonardo, Vega Velasquez

Current techniques for production data integration into reservoir models can be broadly grouped into two categories: deterministic and Bayesian. The deterministic approach relies on imposing parameter smoothness constraints using spatial derivatives to ensure large-scale changes consistent with the low resolution of the production data. The Bayesian approach is based on prior estimates of model statistics, such as parameter covariance and data errors, and attempts to generate posterior models consistent with both static and dynamic data. Both approaches have been successful for field-scale applications, although the computational costs associated with the two methods can vary widely. This is particularly true for the Bayesian approach, which utilizes a prior covariance matrix that can be large and full. To date, no systematic study has been carried out to examine the scaling properties and relative merits of the two methods. The purpose of this work is twofold.
First, we systematically investigate the scaling of the computational costs of the deterministic and Bayesian approaches for realistic field-scale applications. Our results indicate that the deterministic approach exhibits a linear increase in CPU time with model size, compared to a quadratic increase for the Bayesian approach. Second, we propose a fast and robust adaptation of the Bayesian formulation that preserves the statistical foundation of the Bayesian method while exhibiting a scaling property similar to that of the deterministic approach. This can lead to orders-of-magnitude savings in computation time for model sizes greater than 100,000 grid blocks. We demonstrate the power and utility of our proposed method using synthetic examples and a field example from the Goldsmith field, a carbonate reservoir in west Texas. The new efficient Bayesian formulation, used along with the Randomized Maximum Likelihood method, allows straightforward assessment of uncertainty: the former provides computational efficiency, and the latter avoids rejection of expensive conditioned realizations.

Item: Automatic history matching in Bayesian framework for field-scale applications (Texas A&M University, 2006-04-12) Mohamed Ibrahim Daoud, Ahmed

Conditioning geologic models to production data and assessing uncertainty is generally done in a Bayesian framework. The current Bayesian approach suffers from three major limitations that make it impractical for field-scale applications: first, the CPU time of the Bayesian inverse problem, solved with the modified Gauss-Newton algorithm using the full covariance as regularization, scales quadratically with model size; second, sensitivity calculation using finite differences as the forward model depends on the number of model parameters or the number of data points; and third, the covariance matrix calculation requires high CPU time and memory.
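The scaling contrast described above stems from the prior covariance: with a dense covariance matrix, every matrix-vector product costs O(n^2) in the number of grid blocks, while a sparse stencil-based operator costs only O(n). A minimal illustration of that difference (the sizes and the exponential/Laplacian-style operators here are hypothetical stand-ins, not taken from the dissertations):

```python
import numpy as np
from scipy import sparse

n = 1000  # hypothetical number of grid blocks
x = np.random.rand(n)

# Dense prior covariance (exponential decay with separation):
# storage and matrix-vector cost both grow as O(n^2).
C_dense = np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 50.0)
y_dense = C_dense @ x

# Sparse stencil operator (here a 1-D Laplacian-like roughening matrix):
# storage and matrix-vector cost grow only as O(n).
L = sparse.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
y_sparse = L @ x
```

At 100,000 grid blocks the dense matvec touches 10^10 entries per iteration while the stencil touches about 3 x 10^5, which is the source of the orders-of-magnitude savings claimed above.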
Previous attempts to alleviate the third limitation used analytically derived stencils, but these are restricted to exponential covariance models. We propose a fast and robust adaptation of the Bayesian formulation for inverse modeling that overcomes many of the current limitations. First, we use a commercial finite-difference simulator, ECLIPSE, as the forward model; it is general and can account for the complex physical behavior that dominates most field applications. Second, the production data misfit is represented by a single generalized travel time misfit per well, effectively reducing the number of data points to one per well while ensuring a match of the entire production history. Third, we use both the adjoint method and a streamline-based method for sensitivity calculations. The cost of the adjoint method depends on the number of wells integrated, which is generally an order of magnitude smaller than the number of data points or model parameters. The streamline method is even more efficient, requiring only one simulation run per iteration regardless of the number of model parameters or data points. Fourth, to solve the inverse problem we utilize an iterative sparse matrix solver, LSQR, along with an approximation of the square root of the inverse of the covariance calculated using a numerically derived stencil, which is applicable to a wide class of covariance models. Our proposed approach is computationally efficient and, more importantly, its CPU time scales linearly with model size. This makes automatic history matching and uncertainty assessment in a Bayesian framework feasible for large-scale applications. We demonstrate the power and utility of our approach using synthetic cases and a field example.
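The fourth element above, an LSQR solve regularized by a sparse approximation to the square root of the inverse covariance, can be sketched as an augmented least-squares system. Everything in this sketch is a toy stand-in (random sensitivities, a simple roughening stencil in place of the numerically derived C^(-1/2)); it only shows the shape of the system handed to LSQR:

```python
import numpy as np
from scipy.sparse import vstack, diags, eye, csr_matrix
from scipy.sparse.linalg import lsqr

m, n = 50, 200  # data points (e.g. one travel-time shift per well) and model parameters
rng = np.random.default_rng(0)
G = csr_matrix(rng.standard_normal((m, n)))  # toy sensitivity matrix
d = rng.standard_normal(m)                   # toy travel-time misfit vector

beta = 1.0
# Stand-in for the square root of the inverse covariance: a sparse stencil.
R = eye(n) - diags([0.5, 0.5], [-1, 1], shape=(n, n))

# Augmented system: minimize ||G dm - d||^2 + beta^2 ||R dm||^2.
A = vstack([G, beta * R])
b = np.concatenate([d, np.zeros(n)])
dm = lsqr(A, b)[0]  # model update via the iterative sparse solver
```

Because A is sparse, each LSQR iteration costs only O(nnz(A)), which is what keeps the overall CPU time linear in model size.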
The field example is from the Goldsmith San Andres Unit in West Texas, where we matched 20 years of production history and generated multiple realizations using the Randomized Maximum Likelihood method for uncertainty assessment. Both the adjoint method and the streamline-based sensitivity method are used to illustrate the broad applicability of our approach.

Item: Stochastic and Deterministic Inversion Methods for History Matching of Production and Time-Lapse Seismic Data (2013-08-26) Watanabe, Shingo

Automatic history matching methods utilize various kinds of inverse modeling techniques. In this dissertation, we examine the ensemble Kalman filter as a stochastic approach for assimilating different types of production data, and streamline-based inversion methods as a deterministic approach for integrating both production and time-lapse seismic data into high-resolution reservoir models. For the ensemble Kalman filter, we develop a physically motivated, phase-streamline-based covariance localization method that improves data assimilation performance while capturing the geologic continuities that affect flow dynamics and preserving model variability among the ensemble of models. For the streamline-based inversion method, we derive saturation and pressure-drop sensitivities with respect to reservoir properties along streamline trajectories, and integrate time-lapse-seismic-derived saturation and pressure changes along with production data using a synthetic model and the Brugge field model. Our results show the importance of accounting for both saturation and pressure changes in the reservoir responses in order to constrain the history matching solutions. Finally, we demonstrate the practical feasibility of the proposed structured workflow for time-lapse seismic and production data integration through the Norne field application. Our proposed method follows a two-step approach: global and local model calibration.
In the global step, we reparameterize the field permeability heterogeneity with a Grid Connectivity-based Transformation, with the basis coefficients as parameters, and use a Pareto-based multi-objective evolutionary algorithm to integrate field cumulative production and time-lapse-seismic-derived acoustic impedance change data. The method generates a suite of trade-off solutions that fit both production and seismic data. In the local step, the time-lapse seismic data is integrated first, using streamline-derived sensitivities of acoustic impedance with respect to reservoir permeability that incorporate pressure and saturation effects between time-lapse seismic surveys. Next, well production data is integrated using a generalized travel time inversion method to resolve fine-scale permeability variations between well locations. After model calibration, we use the ensemble of history-matched models in an optimal rate control strategy to maximize sweep and injection efficiency by equalizing flood-front arrival times at all producers while accounting for geologic uncertainty. Our results show incremental improvement in ultimate recovery and NPV.

Item: Timestep selection during streamline simulation via transverse flux correction (Texas A&M University, 2004-09-30) Osako, Ichiro

Streamline simulators have received increased attention because of their ability to handle detailed multimillion-cell geologic models and large simulation models effectively. The efficiency of streamline simulation has relied primarily on its ability to take large timesteps with fewer pressure solutions within an IMPES formulation. However, unlike conventional finite-difference simulators, no clear guidelines are currently available for choosing the timestep for pressure and velocity updates. Timestep selection thus remains a largely uncontrolled approximation, managed either by engineering judgment or by potentially time-consuming timestep-size sensitivity studies early in a project.
This clearly leaves us without an understanding of numerical stability and error estimates during the solution. This research presents a novel approach for timestep selection during streamline simulation, based on three elements. First, we reformulate the equations solved by a streamline simulator to include all of the three-dimensional flux terms, both aligned with and transverse to the flow directions; these transverse flux terms are entirely neglected in existing streamline simulation formulations. Second, we propose a simple grid-based corrector algorithm that updates the saturation to account for the transverse flux. Third, we provide a discrete CFL (Courant-Friedrichs-Lewy) formulation for the corrector step that ensures numerical stability via the choice of a stable timestep for pressure updates. This discrete CFL formulation provides the same tools for timestep control as are available in conventional reservoir simulators. We demonstrate the validity and utility of our approach using a series of numerical experiments in homogeneous and heterogeneous five-spot patterns at various mobility ratios. In these experiments, we pay particular attention to favorable mobility ratio displacements, as they are known to be challenging for streamline simulation. Our results clearly demonstrate the impact of the transverse flux correction on the accuracy of the solution and on the appropriate choice of timestep across a range of mobility ratios. The proposed approach eliminates much of the subjectivity associated with streamline simulation and provides a basis for automatic control of the pressure timestep in full-field streamline applications.
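A discrete CFL condition of the kind described above amounts to bounding the fraction of a cell's pore volume that can be fluxed out in one pressure timestep. A minimal sketch of that bound (the function and variable names are hypothetical, not from the dissertation):

```python
import numpy as np

def stable_pressure_timestep(pore_volumes, outfluxes, cfl_max=1.0):
    """Largest timestep satisfying a discrete CFL condition in every grid cell.

    pore_volumes : pore volume of each cell
    outfluxes    : total volumetric flux leaving each cell
    cfl_max      : target Courant number (<= 1 for stability)
    """
    pv = np.asarray(pore_volumes, dtype=float)
    q = np.asarray(outfluxes, dtype=float)
    # The CFL number of a cell over a step dt is q * dt / pv;
    # requiring it to stay below cfl_max in every cell bounds dt.
    return cfl_max * np.min(pv / np.maximum(q, 1e-30))
```

Taking the minimum over all cells mirrors how conventional IMPES simulators choose a stable timestep, which is exactly the parity with finite-difference practice the abstract claims.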