Browsing by Subject "Forecasting"
Now showing 1 - 19 of 19
Item An Effective Implementation of Operational Inventory Management (2010-01-16) Sellamuthu, Sivakumar
This Record of Study describes the Doctor of Engineering (DE) internship experience at the Supply Chain Systems Laboratory (SCSL) at Texas A&M University. The objective of the internship was to design and develop automation tools to streamline lab operations related to inventory management projects and, during that process, adapt and/or extend theoretical inventory models according to real-world business complexity and data integrity problems. A holistic approach to automation was taken to satisfy both short-term and long-term needs subject to organizational constraints. A comprehensive software productivity tool was designed and developed that considerably reduced the time and effort spent on non-value-adding activities. This resulted in standardizing and streamlining data analysis activities. Real-world factors that significantly influence the data analysis process were identified and incorporated into model specifications. This helped develop an operational inventory management model that accounted for business complexity and data integrity issues commonly encountered during implementation. Many organizational issues, including new business strategies, human resources, administration, and project management, were also addressed during the course of the internship.

Item Development of a data-driven method for selecting candidates for case management intervention in a community's medically indigent population (2008-05) Leslie, Ryan Christopher; Shepherd, Marvin D.
The Indigent Care Collaboration (ICC), a partnership of Austin, Texas, safety net providers, gathers encounter data and manages initiatives for the community's medically indigent patients. One such initiative is the establishment of a care management program designed to reduce avoidable hospitalizations. This study developed predictive models designed to take year-one encounter data and predict inpatient utilization in the following two years. The models were calibrated using 2003 through 2005 data for the 41,260 patients with encounters with ICC partner providers in all three years. Predictor variables included prior inpatient admissions, age, sex, and a summary measure of overall health status: the relative risk score produced by the Diagnostic Cost Groups prospective Medicaid risk-adjustment model. Using the 44,738 patients with encounter data in each of the years 2004 through 2006, the performance of the predictive models was cross-validated and compared against the performance of the "common sense" method of choosing candidate patients based on prior-year chronic disease diagnoses and high utilization, referred to herein as the Utilization Method (UM). The 620 patients with three or more 2005 through 2006 inpatient admissions were considered the actual high-use patient subset. Each model's highest-risk 620 patients comprised its high-risk subset. Only 344 high-risk patients met the UM's criteria. Prediction accuracy was described in terms of positive predictive value (PPV), i.e., the proportion of identified high-risk patients who were high-use patients. Three of the predictive models had a PPV near 25% or greater, with the highest, the linear model using the DCG relative risk score, at 26.8%. The PPV of the UM was 17.1%, lower than that of all predictive models. When all high-risk subsets were limited to 344 patients (the number identified by the UM), the performance of the UM and the predictive models was similar.
This study demonstrated that "common sense" targets for case management can be identified via a simple filter as effectively as through empirically based predictive models. However, once the supply of easily identifiable targets is exhausted, predictive models using a measure of health status identify high-risk patients who could not be easily identified by other means.

Item Dynamic resource allocation for energy management in data centers (2009-05-15) Rincon Mateus, Cesar Augusto
In this dissertation we study the problem of allocating computational resources and managing applications in a data center to serve incoming requests in such a way that energy usage, reliability, and quality-of-service considerations are balanced. The problem is motivated by the growing energy consumption of data centers in the world and their overall inefficiency. This work is focused on designing flexible and robust strategies to manage the resources in such a way that the system is able to meet the service agreements even when the load conditions change. As a first step, we study the control of a Markovian queueing system with a controllable number of servers and service rates (M/Mt/kt) to minimize effort and holding costs. We present structural properties of the optimal policy and suggest an algorithm to find good-performance policies even for large cases. Then we present a reactive/proactive approach, and a tailor-made wavelet-based forecasting procedure, to determine the resource allocation in a single-application setting; the method is tested by simulation with real web traces. The main feature of this method is its robustness and flexibility to meet QoS goals even when the traffic behavior changes. The system was tested by simulating a system with a time-service-factor QoS agreement. Finally, we consider the multi-application setting and develop a novel load consolidation strategy (combining applications that are traditionally hosted on different servers) to reduce the server-load variability and the number of booting cycles in order to obtain a better capacity allocation.

Item Essays on empirical time series modeling with causality and structural change (Texas A&M University, 2006-10-30) Kim, Jin Woong
In this dissertation, three related issues of building empirical time series models for financial markets are investigated with respect to contemporaneous causality, dynamics, and structural change. In the first essay, nation-wide industry information transmission among stock returns of ten sectors in the U.S. economy is examined through the Directed Acyclical Graph (DAG) for contemporaneous causality and Bernanke decomposition for dynamics. The evidence shows that the information technology sector is the primary root-cause sector. Test results show that the DAG from ex ante forecast innovations is consistent with the DAG from ex post fit innovations. This supports innovation accounting based on DAGs using ex post innovations. In the second essay, the contemporaneous/dynamic behaviors of real estate and stock returns are investigated. Selected macroeconomic variables are included in the model to explain recent movements of both returns. During 1971-2004, there was a single structural break in October 1980. A distinct difference in contemporaneous causal structure before and after the break is found. DAG results show that REITs take the role of a causal parent after the break.
Innovation accounting shows significantly positive responses of real estate returns to an initial shock in default risk but insignificant responses of stock returns. Also, a shock in short-run interest rates affects real estate returns negatively with significance but does not affect stock returns. In the third essay, a structural change in the volatility of five Asian and U.S. stock markets is examined during the post-liberalization period (1990-2005) in the Asian financial markets, using the Sup LM test. Four Asian financial markets (Hong Kong, Japan, Korea, and Singapore) experienced structural changes. However, test results do not support the existence of a structural change in volatility for Thailand and the U.S. Also, results show that the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) persistence coefficient increases, but the Autoregressive Conditional Heteroskedasticity (ARCH) impact coefficient, implying short-run adjustment, decreases in Asian markets. In conclusion, when the econometric model is set up, it is necessary to consider contemporaneous causality and possible structural breaks (changes). The dissertation emphasizes causal inference and structural consistency in econometric modeling. It highlights their importance in discovering contemporaneous/dynamic causal relationships among variables. These characteristics will likely be helpful in generating accurate forecasts.

Item Essays on financial and international economics (2009-05-15) Su, Xiaojing

Item Essays on macroeconomics and forecasting (Texas A&M University, 2006-10-30) Liu, Dandan
This dissertation consists of three essays. Chapter II uses the method of structural factor analysis to study the effects of monetary policy on key macroeconomic variables in a data-rich environment. I propose two structural factor models. One is the structural factor augmented vector autoregressive (SFAVAR) model and the other is the structural factor vector autoregressive (SFVAR) model. Compared to the traditional vector autoregression (VAR) model, both models incorporate far more information from hundreds of data series, series that can be and are monitored by the Central Bank. Moreover, the factors used are structurally meaningful, a feature that adds to the understanding of the "black box" of the monetary transmission mechanism. Both models generate qualitatively reasonable impulse response functions. Using the SFVAR model, both the "price puzzle" and the "liquidity puzzle" are eliminated. Chapter III employs the method of structural factor analysis to conduct a forecasting exercise in a data-rich environment. I simulate out-of-sample real-time forecasting using a structural dynamic factor forecasting model and its variations. I use several structural factors to summarize the information from a large set of candidate explanatory variables. Compared to Stock and Watson's (2002) models, the models proposed in this chapter further allow me to select the factors structurally for each variable to be forecasted. I find advantages to using the structural dynamic factor forecasting models compared to alternatives that include the univariate autoregression (AR) model, the VAR model, and Stock and Watson's (2002) models, especially when forecasting real variables. In Chapter IV, we measure U.S. technology shocks by implementing a dual approach, which is based on more reliable price data instead of aggregate quantity data.
By doing so, we find the relative volatility of technology shocks and the correlation between output fluctuations and technology shocks to be much smaller than those revealed in most real-business-cycle (RBC) studies. Our results support the findings of Burnside, Eichenbaum and Rebelo (1996), who showed that the correlation between technology shocks and output is exaggerated in the RBC literature. This suggests that one should examine other sources of fluctuations for a better understanding of business cycle phenomena.

Item Excel model for electric markets: ERCOT (2016-05) Cuevas, Pedro Pablo; Dyer, James S.; Butler, John C. (Clinical associate professor); Hahn, Joe
Changing regulatory and fuel-cost environments have far-reaching implications for the ability of electric markets to plan and provide cheap, clean, and reliable electric grids. The current state-of-the-art tools for modeling regulations and fuel prices require days to process, and access to these tools is held by a small number of licensed users who must also have the training and technical ability to run the model, which limits the study of planning and electricity market design. This thesis presents an Excel model that simulates the operations of ERCOT over the next fifteen years. Tradeoffs between accuracy, run time, cost, and model complexity will be discussed. The advantages of this model are speed and accessibility, which will allow more users to understand the major implications of policy discussions and scenarios without needing a commercial tool. The model predicts the fuel mix and average market price for 2014 with less than 1% and 2% error, respectively. For 2015, the model predicts the fuel mix with less than 5% error. Using the current-trends assumptions, the model predicts that by 2030 the energy mix will undergo significant changes. Coal generation will drop from 28% to 21%, while gas generation will decline from 48% to 46%. Renewable generation will increase, with wind going from 12% to 17% and solar from 0% to 7%. The model also predicts that a carbon tax between $20 and $60 per short ton of CO2 could raise the operational and capital costs of ERCOT in present-value terms through 2030 from $75 billion to $218 billion. Finally, the model forecasts that the reserve margin in ERCOT will not reach the target of 13.75% in 2020 and that renewable energy additions do not affect this indicator. Moreover, the reserve margin increases when solar energy enters the market.

Item Forecasting derecho intensity using model instability parameters (Texas Tech University, 2000-12) Berteau, Mark Donald
Upon reaching the American Midwest on their migration westward, the early European settlers encountered ferocious windstorms more intense and frequent than those experienced in their homeland. One type of intense convective storm with damaging straight-line winds was termed the "derecho" by Gustavus Hinrichs, a forecaster with the Iowa Weather Bureau, in the late 1800's (Hinrichs, 1888). In Spanish, derecho means 'straight', a parallel to 'tornado', which is a variation on the Spanish word for thunderstorm, tronada. Derechos are linear in nature and often contain severe-weather-producing features such as the bow echo and the line echo wave pattern, or LEWP.
These mesoscale storm complexes are responsible for much of the damage associated with severe thunderstorms in the United States.

Item Forecasting of isothermal enhanced oil recovery (EOR) and waterflood processes (2011-12) Mollaei, Alireza; Delshad, Mojdeh; Lake, Larry W.; Patzek, Tadeusz W.; Edgar, Thomas F.; Lasdon, Leon S.
EOR and waterflood processes supply a considerable share of the world's oil production. Therefore, the screening and selection of the best EOR process is important. Numerous steps are involved in evaluating EOR methods for field applications. Binary screening guides, in which reservoirs are selected on the basis of average reservoir rock and fluid properties, are consulted for an initial determination of applicability. However, quick quantitative comparison and performance prediction of EOR processes, which are the objectives of EOR forecasting, are more complicated and more important than binary screening. Forecasting (predicting) the performance of EOR processes plays an important role in the study, design, and selection of the best method for a particular reservoir or a collection of reservoirs. In EOR forecasting, we seek ways to obtain quick quantitative estimates of the performance of different EOR processes using analytical models before detailed numerical simulation of the reservoirs under study. Although numerical simulation of reservoirs is widely used, there are significant obstacles that restrict its applicability. Lack of necessary reservoir data and time-consuming computations and analyses can be barriers even for history matching and/or predicting EOR/waterflood performance of one reservoir. There are different forecasting (predictive) models for evaluation of different secondary/tertiary recovery methods. However, the lack of a general-purpose EOR/waterflood forecasting model is unsatisfactory because any differences in results can be caused by differences in the model rather than differences in the processes. As the main objective of this study, we address this deficiency by presenting a novel and robust analytically based general EOR and waterflood forecasting model/tool (UTF) that does not rely on conventional numerical simulation. The UTF conceptual model is based on the fundamental law of material balance, segregated flow, and fractional flux theories, and is applied for both history matching and forecasting of EOR/waterflood processes. The forecasting model generates the key results of isothermal EOR and waterflooding processes, including variations of average oil saturation, recovery efficiency, volumetric sweep efficiency, oil cut, and oil rate with real or dimensionless time. The forecasting model was validated against field data and numerical simulation results for isothermal EOR and waterflooding processes. The forecasting model reproduced all of the field data well (R2 > 0.8) and reproduced the simulated data even better. To develop the UTF for forecasting when there is no injection/production history data, we used experimental design and numerical simulation and successfully generated the in-situ correlations (response surfaces) of the forecasting model variables. The forecasting model variables were shown to be well correlated to reservoir/recovery process variables and can be reliably used for forecasting.
As an extension of the forecasting model's capabilities, these correlations were used for prediction of the volumetric sweep efficiency and missing/dynamic pore volume of EOR and waterflooding processes.

Item Forecasting of sick leave usage among nurses via artificial neural networks (2010-12) Tondukulam Seeth, Srikanth; Hasenbein, John J.; Popova, Elmira
This report examines the trends in sick leave usage among nurses in a hospital and aims at creating a forecasting model to predict sick leave usage on a weekly basis using artificial neural networks (ANN). The data used for the research comprise three years of absenteeism (sick leave) reports at a hospital. The analysis shows that certain factors lead to a rise or fall in weekly sick leave usage. The ANN model tries to capture the effect of these factors and forecasts the sick leave usage for a 1-year horizon based on what it has learned from the behavior of the historical data from the previous 2 years. The various parameters of the model are determined, and the model is constructed and tested for its forecasting ability.

Item Forecasting potential project risks through leading indicators to project outcome (Texas A&M University, 2007-09-17) Choi, Ji Won
During project execution, the status of the project is periodically evaluated using traditional methods or standard practices. However, these traditional methods may not adequately identify certain issues, such as insufficient identification of warning signs that predict potential project failure. Current methods may lack the ability to provide real-time indications of emerging problems that impact project outcomes in a timely manner. To address this problem, the Construction Industry Institute (CII) formed a research team to develop a new tool that can forecast the potential risk of not meeting specific project outcomes based on assessing leading indicators. Thus, the leading indicators were identified and then the new tool was developed and validated. After potential leading indicators were identified, a screening process was conducted through industry surveys. In each survey, industry professionals were asked to evaluate the negative impact of the identified leading indicators on project outcomes as a measure of their impact on project health. Through this process, forty-three leading indicators were ultimately retained. Using descriptive statistics, the amount of negative impact of each leading indicator on project outcomes was identified from the analysis of the survey results. Based on these impacts, tool development was initiated. The tool's underlying concept is that when the assessed leading indicators show no indication of problems, the tool produces a high output score. To comply with this concept, specific weights were assigned to each leading indicator to reflect its impact on each project outcome. By this procedure, the Project Health Indicator (PHI) tool was developed. The PHI tool was validated using completed projects, and a negative correlation was observed between project outcomes and the health scores generated by the PHI tool.

Item Forecasting project progress and early warning of project overruns with probabilistic methods (2009-05-15) Kim, Byung Cheol
Forecasting is a critical component of project management. Project managers must be able to make reliable predictions about the final duration and cost of projects starting from project inception.
Such predictions need to be revised and compared with the project's objectives to obtain early warnings against potential problems. Therefore, the effectiveness of project controls relies on the capability of project managers to make reliable forecasts in a timely manner. This dissertation focuses on forecasting project schedule progress with probabilistic methods. Currently available methods, for example the critical path method (CPM) and earned value management (EVM), are deterministic and fail to account for the inherent uncertainty in forecasting and project performance. The objective of this dissertation is to improve the predictive capabilities of project managers by developing probabilistic forecasting methods that integrate all relevant information and uncertainties into consistent forecasts in a mathematically sound procedure usable in practice. In this dissertation, two probabilistic methods, the Kalman filter forecasting method (KFFM) and the Bayesian adaptive forecasting method (BAFM), were developed. The KFFM and the BAFM have the following advantages over the conventional methods: (1) they are probabilistic methods that provide prediction bounds on their forecasts; (2) they are integrative methods that make better use of the prior performance information available from standard construction management practices and theories; and (3) they provide a systematic way of incorporating measurement errors into forecasting. The accuracy and early-warning capacity of the KFFM and the BAFM were also evaluated and compared against the CPM and a state-of-the-art EVM schedule forecasting method. Major conclusions from this research are: (1) the state-of-the-art EVM schedule forecasting method can be used to obtain reliable warnings only after project performance has stabilized; (2) the CPM is not capable of providing early warnings due to its retrospective nature; (3) the KFFM and the BAFM can and should be used to forecast progress and to obtain reliable early warnings on all projects; and (4) the early-warning capacity of forecasting methods should be evaluated and compared in terms of the timeliness and reliability of warnings in the context of formal early warning systems.

Item FORECASTING THE REAL ESTATE MARKET: A COINTEGRATED APPROACH (2012-04-19) Seth, Jaweria; Prodan Boul, Ruxandra; Jiu, Brett; Nikolsko-Rzhevskyy, Alex
Research has shown that a decline in residential investment signals an impending decline in economic activity. The sources of demand for the residential and commercial real estate sectors are similar, and this should move the markets in the same direction over the long run. Since the residential market has already collapsed, the study of real estate investments is important. This paper utilizes real estate and macroeconomic data to forecast investment loans. Cointegration methods are used for the forecast because the data display a tendency to move together. The results show that the forecast is inconsistent with the positive relationship between the two real estate markets; the residential market will continue to decline, whereas the commercial market will see positive growth from 2011 to 2012.

Item Inevitable disappointment and decision making based on forecasts (2006) Chen, Min; Dyer, James

Item Propane demand modeling for residential sectors - A regression analysis (2011-05) Shenoy, Nitin K.; Smith, Milton L.; Kobza, John E.; Simonton, James L.
This thesis presents a forecasting model for propane consumption within the residential sector.
In this research, we explore the dynamic behavior of the different variables that affect propane consumption and develop a forecasting model. The significant factors that had an impact on propane consumption in houses were the heating degree days of the area, wind speed, precipitation, and the size of the houses. However, in the case of mobile homes, only heating degree days were significant. The behavior of the customers was assumed to be static. The model is based on multiple regression methods. The data were collected from a local propane company in West Texas. Different combinations of months were used in the model to study propane consumption behavior for each month. These studies were used to generate the final forecasting model. As propane consumption was low from June to September, the best results were obtained when data for the months from October through May were used for analysis. The results indicate that the model provides a potentially useful forecast.

Item Radar-Derived Forecasts of Cloud-to-Ground Lightning Over Houston, Texas (2011-02-22) Mosier, Richard Matthew
Ten years (1997-2006) of summer (June, July, August) daytime (14-00 Z) Weather Surveillance Radar-1988 Doppler data for Houston, TX were examined to determine the best radar-derived lightning forecasting predictors. Convective cells were tracked using a modified version of the Storm Cell Identification and Tracking (SCIT) algorithm and then correlated to cloud-to-ground lightning data from the National Lightning Detection Network (NLDN). Combinations of three radar reflectivity values (30, 35, and 40 dBZ) at four isothermal levels (-10, -15, -20, and updraft -10 degrees C) and a new radar-derived product, vertically integrated ice (VII), were used to optimize a radar-based lightning forecast algorithm. Forecasts were also delineated by range and by the number of times a cell was identified and tracked by the modified SCIT algorithm. This study objectively analyzed 65,399 unique cells, and 1,028,510 to find the best lightning forecast criteria. Results show that using 30 dBZ at the -20 degrees C isotherm on cells within 75 km of the radar that have been tracked for at least 2 consecutive scans produces the best forecasts, with a critical success index (CSI) of 0.71. The best VII predictor was 0.734 kg m-2 on cells within 75 km of the radar that have been tracked for at least 2 consecutive scans, producing a CSI of 0.68. Results of this study further suggest that combining the radar reflectivity and VII methods can result in a more accurate lightning forecast than either method alone.

Item Regression model ridership forecasts for Houston light rail (2012-12) Sides, Patton Christopher; Evans, Angela M.; McCray, Talia
The 4-step process has been the standard procedure for transit forecasting for over 50 years. In recent decades, researchers have developed ridership forecasting regression models as alternatives to the costly and time-consuming 4-step process. The model created by Lane, DiCarlantonio, and Usvyat in 2006 is among the most recent and most widely accepted. It uses station-area demographics, central business district (CBD) employment, and the station areas' built environments to estimate ridership. This report applies the Lane, DiCarlantonio, and Usvyat (LDU) model to the North Line of Houston's Metropolitan Transit Authority of Harris County (METRO). The report compares the 2030 ridership forecast created by METRO using the 4-step process with the LDU model forecasts.
For the 2030 projections, this report obtained population and employment estimates from the Houston-Galveston Area Council and analyzed the data using the Esri ArcMap and Caliper TransCAD GIS software programs. The LDU model produced unrealistically high ridership numbers for the North Line. It estimated 108,430,481 daily boardings. METRO's 4-step process predicted 29,900 daily boardings. The results suggest that the LDU model is not applicable to the Houston light rail system and is not a viable alternative to the 4-step process for this specific metropolitan area. The LDU method for defining Houston's CBD was the main problem in applying the model. It calculated an extremely high CBD employment density compared to other cities of similar size. Even when the CBD size was manipulated to decrease employment density, the model still predicted 212,210 daily boardings for the North Line, nearly 10 times higher than METRO's 4-step process estimate. In addition to the problems with the definition of the CBD, the creators of the LDU model did not specifically explain how to define a metropolitan area. Multiple inconsistent and subjective definitions of a metro area can be used. This report employs three different definitions of the Houston metro, each of which produced a significantly different ridership forecast in the LDU model. As a result of these flaws, the LDU model does not accurately apply to METRO's North Line, and it does not serve as a viable alternative to METRO's 4-step process.

Item Statistical problem with measuring monetary policy with application to the current crisis (2010-05) Pappoe, Naakorkoi; Auerbach, Robert D.; Stolp, Chandler
This report reviews the 2007 financial crisis and the actions of the Federal Reserve. The Full Employment Act of 1946 and the "Humphrey-Hawkins" Act guide the Fed's actions. These two laws outline the long-term goals of the monetary policy framework the Fed uses; however, the framework lacks principles, such as reliable and complete data, for achieving the mandated long-term goals. This report looks at the use of model-based forecasting and gives recommendations for principles that will strengthen the preexisting monetary framework.

Item The utility of total lightning observations in severe weather forecasting (2010-12) Burling, Christopher D.; Leary, Colleen; Wiens, Kyle C.; Wiess, Christopher C.
A key aspect of short-term weather forecasting is the ability to provide the public with adequate lead time in the event of severe weather. During severe weather events, it is vital that real-time information such as radar data and spotter reports be available to aid National Weather Service (NWS) forecasters in the decision-making process. An additional source of information that may prove useful in this regard is lightning data. Until very recently, forecasters have had access only to cloud-to-ground (CG) lightning data from the National Lightning Detection Network (NLDN). However, the utility of CG data as a severe weather forecasting tool is limited. Total (CG plus intracloud) lightning observations from very high frequency systems such as the Lightning Mapping Array (LMA; Rison et al. 1999) may be more useful. This thesis utilizes total lightning data from the Oklahoma LMA to assess the effectiveness of total lightning as an indicator of a given thunderstorm's potential to produce severe weather.
Specifically, a dataset of 52 thunderstorms (30 severe, 22 non-severe) within the domain of the Oklahoma LMA is analyzed to determine if severe weather is preceded by two features: a threshold total flash rate value which distinguishes severe thunderstorms from non-severe thunderstorms, and the presence of lightning jumps. A lightning jump algorithm was applied to each thunderstorm in the dataset in consideration of this second objective. Additionally, five thunderstorms are analyzed in greater detail to investigate these trends as they pertain to individual thunderstorms. A threshold flash rate upon which to determine thunderstorm severity is not apparent in the Oklahoma dataset. This is contrary to the results of a study of Florida thunderstorms by Williams et al. (1999), in which a clear threshold value was demonstrated. Lightning jumps are found to often precede the occurrence of severe weather, in good agreement with previous work.
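Several of the records above score categorical forecasts with simple contingency-table measures: the case-management study (Leslie, 2008) reports positive predictive value (PPV), and the Houston lightning study (Mosier, 2011) reports the critical success index (CSI). The minimal Python sketch below shows how these two measures are computed; the hit, false-alarm, and miss counts are hypothetical and are not taken from either study.

```python
# Hypothetical 2x2 contingency-table counts for a categorical forecast
# (illustrative only, not from any study listed above).
hits = 440          # forecast "yes", event observed
false_alarms = 180  # forecast "yes", event not observed
misses = 120        # forecast "no", event observed

# Positive predictive value (PPV): fraction of "yes" forecasts that verified.
ppv = hits / (hits + false_alarms)

# Critical success index (CSI): hits relative to all cases where the event
# was forecast and/or observed (correct negatives are ignored).
csi = hits / (hits + false_alarms + misses)

print(f"PPV = {ppv:.3f}, CSI = {csi:.3f}")
```

With these invented counts, PPV = 440/620 ≈ 0.71 and CSI = 440/740 ≈ 0.59; the two measures differ only in whether misses are counted against the forecast.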
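The propane demand record (Shenoy, 2011) describes a multiple-regression model driven by heating degree days, wind speed, precipitation, and house size. As a rough illustration of that kind of model, the sketch below fits an ordinary least squares regression to synthetic monthly data; the variable names, coefficients, and units are assumptions for illustration and do not come from the thesis.

```python
import numpy as np

# Synthetic monthly data standing in for the kind of inputs the propane study
# describes; all values are invented.
rng = np.random.default_rng(0)
n = 48
hdd = rng.uniform(0, 900, n)        # heating degree days
wind = rng.uniform(2, 14, n)        # average wind speed (mph)
precip = rng.uniform(0, 4, n)       # precipitation (inches)
size = rng.uniform(1200, 3000, n)   # house size (sq ft)
gallons = (20 + 0.9 * hdd + 3.0 * wind + 1.5 * precip + 0.02 * size
           + rng.normal(0, 25, n))  # propane use with noise

# Ordinary least squares fit of the multiple regression model.
X = np.column_stack([np.ones(n), hdd, wind, precip, size])
beta, *_ = np.linalg.lstsq(X, gallons, rcond=None)

# Forecast consumption for a hypothetical month.
x_new = np.array([1.0, 650.0, 8.0, 1.2, 1800.0])
print("forecast gallons:", x_new @ beta)
```

In practice one would fit only on the heating-season months the thesis highlights (October through May) and report fit statistics alongside the point forecast.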
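The project-forecasting record (Kim, 2009) develops a Kalman filter forecasting method (KFFM) for schedule progress. The details of that method are not reproduced here; the sketch below is only a generic scalar Kalman filter applied to hypothetical progress reports, illustrating the predict/update cycle and the kind of completion estimate such a filter yields. The transition model, noise levels, and progress data are all invented.

```python
import numpy as np

# State: [cumulative progress (%), progress rate (% per reporting period)].
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])       # constant-rate transition over one period
H = np.array([[1.0, 0.0]])       # only cumulative progress is observed
Q = np.diag([0.5, 0.1])          # process-noise covariance (assumed)
R = np.array([[4.0]])            # measurement-noise variance (assumed)

x = np.array([0.0, 5.0])         # initial guess: 0% done, 5% per period
P = np.diag([10.0, 4.0])         # initial state uncertainty

reported = [4.2, 9.8, 14.5, 21.0, 26.3]   # hypothetical progress reports (%)

for z in reported:
    # Predict step: project the state and its uncertainty forward one period.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step: correct the prediction with the new progress report.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

periods_left = (100.0 - x[0]) / x[1]
print(f"estimated rate {x[1]:.2f}%/period, about {periods_left:.1f} periods to finish")
```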