Browsing by Author "Smith, Milton L."
Item: A critical analysis of flow-shop sequencing (Texas Tech University, 1968-08)
Authors: Smith, Milton L.
Abstract: Not available.

Item: A dynamic MAC-layer mechanism using individual mobility state recognition to increase mobile ad-hoc network throughput performance (2008-08)
Authors: Phillips, Aaron Lee; Matis, Timothy I.; Kobza, John E.; Smith, Milton L.
Abstract: This research is based on the inner workings of the Data Link Layer's Media Access Control (MAC) Sub-Layer. Specifically, it is concerned with making dynamic two variables of the Request To Send/Clear To Send mechanism that resides in that sub-layer: this document proposes a dynamic Short Retry Limit and a dynamic Long Retry Limit. The work provides both statistical and scientific backing for deciding how the Short and Long Retry Limits should adjust themselves, and to what levels, in order to improve network performance relative to the current IEEE default settings. Overall, this research presents a thorough introduction to mobile ad-hoc networks as well as an in-depth exploration of how the MAC Sub-Layer performs its duties, and it addresses factors that affect performance, such as fading environments and mobility. After these factors and inner workings are elaborated in detail, results are presented on how the dynamic levels were chosen and on how the final dynamic Short and Long Retry Limits were applied and methodically analyzed to understand their impact on wireless mobile ad-hoc network performance.

Item: A statistical and Taguchi process analysis as applied to cotton fiber properties and white speck occurrence (2010-12)
Authors: Altintas, Pelin Z.; Beruvides, Mario G.; Simonton, James L.; Smith, Milton L.; Fedler, Clifford B.
Abstract: Cotton containing immature fibers is a major concern in the dyeing and finishing of textile products. In an un-dyed state, entangled fiber clusters are generically classified as neps; it is only after the application of dye, when some neps remain un-dyed, that the more specific classification of "white speck" is used. The High Volume Instrument (HVI) fiber property measurement system is important in marketing and general quality assessment of the cotton crop; however, HVI is not precise enough to address immature fiber content. The purpose of this research was to examine the relationship of Advanced Fiber Information System (AFIS) fiber properties to white speck counts of dyed yarn through three sequential studies. The first study looked at within-bale and between-bale differences while establishing a regression model relating white speck count to AFIS fiber properties of bale cotton and sliver cotton. Ten bales of cotton spanning a range of micronaire were sampled (10 samples per bale) and analyzed using AFIS with 3 replications per sample, each counting 3,000 fibers. Each sample was then processed into yarn and dyed using the same procedure, and white specks were quantified on the dyed yarn using a white speck yarn counting method. Regression results indicated that fiber fineness, neps per gram, and immature fiber content were the influential indicators of white speck count in dyed yarn; however, the small sample size and possible AFIS bias in the fineness and maturity measurements call for a larger sample in further investigation. The second study analyzed the relationship between AFIS fiber properties and yarn white speck count using statistical analysis. The treatments of harvest-aid chemical termination with varied harvest dates and two levels of field cleaning were included. Cotton samples from two crop years were analyzed using AFIS with 3 replications, each counting 3,000 fibers; each sample was processed into yarn and dyed with the same procedure, and white speck counts were conducted on the yarn for each sample using a white speck yarn methodology. The harvest date treatment influenced white speck count more than the fiber properties did. The nep count by weight was also found to be a predictor of white speck count, though the prediction model was not as strong as in the first study. The third study applied the Taguchi method to the second study to investigate minimizing white speck count in dyed yarn through the fiber properties of varied harvest techniques. The signal-to-noise (S/N) ratio was used to represent the white speck count response, with the smaller-the-better characteristic chosen for this study. Among the control factors of harvest date, defoliation, and field cleaning, harvest date had the significant effect on the S/N ratio of white speck count. The desirable outcome for the white speck response was early-season harvesting with application of field cleaning and defoliation: removing smaller, less mature bolls at an early harvest date with a field cleaner reduced the white speck count.
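The abstract above names the Taguchi smaller-the-better S/N ratio without giving its form. As a hedged illustration of the standard definition (the white speck counts and factor settings below are invented, not the study's data):

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2)).
    A larger S/N is better, i.e., corresponds to smaller white speck counts."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical white speck counts per dyed-yarn sample for two harvest dates
early_harvest = [12, 9, 14]
late_harvest = [31, 27, 35]

print(sn_smaller_the_better(early_harvest))  # higher S/N, preferred setting
print(sn_smaller_the_better(late_harvest))
```

The factor level with the larger S/N ratio (here the early harvest date) is the one a Taguchi analysis would favor.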
Item: A study of the costs of quality in a renewable resource environment (Texas Tech University, 2009-05)
Authors: Banasik, Marcus; Beruvides, Mario G.; Marcy, William M.; Smith, Milton L.; Pasewark, William R.; Simonton, James L.
Abstract: This study provides an in-depth literature review of cost of quality (COQ) research, a COQ compendium, a comparison of COQ between a manufacturing meta-analysis and three water utilities (El Paso, Lubbock, and San Antonio), and a sensitivity analysis of the water utilities. The manufacturing meta-analysis consisted of 38 usable studies covering a variety of industries and manufacturing processes; these were used to develop the comparison populations for the prevention, appraisal, failure, and total COQ variables. The water utilities were chosen because they represent three different populations, three different water source combinations, and three different county water usages; together they also represent 10% of the large water systems and approximately 11% of the population in Texas. Monthly financial data were collected from each utility: after the removal of outliers, Lubbock provided 33 observations, El Paso 45, and San Antonio 12. These data were categorized into Prevention, Appraisal, and Failure (PAF) costs based on annual reports, budgets, utility input, and the PAF cost compendium. The primary hypotheses of this research were to compare the PAF and total COQ ratios of the manufacturing meta-analysis with those of the water utilities; to determine the Juran Point for the water utilities and show that it is representative of a mature market; to test the materiality of the opportunity costs; and to determine the significance of the environmental opportunity costs. The results show that prevention, failure, and total COQ are not the same for water utilities as for manufacturing companies: the total COQ percentage for the water utilities was twice as large. Interestingly, appraisal costs were statistically the same between the water utilities and the manufacturing companies. Opportunity costs may or may not be material depending on the utility; they were about $1M for El Paso and $1-$2M for San Antonio. Hypotheses 2 and 4 could not be evaluated from the information made available during the study. The sensitivity analysis also showed differences between the utilities: population may have an impact on the percentage of prevention costs; appraisal costs were still equal across the utilities; wages were the single largest cost factor; and rainfall amounts had no effect on the costs of quality. The conclusions are that differences do exist between water utility and manufacturing company COQ, and further research is needed to understand their causes. Why, even with a higher prevention percentage, do the water utilities have twice the total COQ percentage? Are the differences due to risk aversion, public health, regulatory reasons, or something else? More research is also needed to identify the Juran Point and the environmental costs. The applications of this research are vast, including the more than 3,900 water utilities serving populations greater than 10,000 as well as wastewater, energy, and storm water utilities. This research is another tool that utilities could use to better allocate monetary resources to close gaps in their funding.
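The PAF ratios compared in this study are simple proportions of categorized cost. As an illustrative sketch only (the dollar figures below are invented, not data from the study):

```python
# Hypothetical monthly quality costs (dollars) after PAF categorization
prevention, appraisal, failure = 40_000.0, 25_000.0, 60_000.0
total_operating_cost = 900_000.0

total_coq = prevention + appraisal + failure
print(f"Total COQ as % of operating cost: {100 * total_coq / total_operating_cost:.1f}%")
# Category shares of total COQ, the ratios contrasted against the meta-analysis
for name, cost in [("Prevention", prevention), ("Appraisal", appraisal), ("Failure", failure)]:
    print(f"{name}: {100 * cost / total_coq:.1f}% of total COQ")
```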
Item: An alternative environmentally benign process for printed circuit board recycling (Texas Tech University, 2006-05)
Authors: Ouyang, Xi; Zhang, Hong-Chao; Li, Guigen; Rivero, Iris V.; Smith, Milton L.; Collins, Terry R.
Abstract: In recent years there has been increasing concern about the growing volume of end-of-life electronic products. As a primary element of most electronic products, the Printed Circuit Board (PCB) is widely used, and its recycling has become a challenge not only to industry but also to society. As an alternative environmentally benign method for PCB recycling, this dissertation adopts a novel processing method to separate PCB scraps, which would increase the recycling rate and reduce the negative environmental impact. Various solvent systems, e.g., carbon dioxide and water, are explored for delaminating PCB scraps at high temperature and pressure. To find an optimal condition for PCB delamination, experimental facilities were set up at the Advanced Manufacturing Lab (AML) in the Department of Industrial Engineering. Through a series of designed experiments, input parameters such as temperature, pressure, and process time were recorded, and output parameters (weight reduction, thickness expansion, and impact energy variation) were measured to evaluate the delamination results. The Response Surface Method (RSM) was applied to select the input parameters, a Multiple Objective Optimization (MOO) method was adopted to evaluate the overall delamination effects, and utility theory was used to construct the utility function and identify the optimal solution. Furthermore, the fundamental mechanism that causes the epoxy resin to decompose was interpreted; this explanation is helpful for selecting solvents that speed up the reaction and improve efficiency. In this research, the effectiveness of this alternative chemical process for delaminating waste PCB scraps was examined. Based on this method, a series of experiments was designed and implemented to search for an optimal condition; through analysis of the output data, optimal process conditions were determined, and the reaction mechanism was interpreted. Other solvent systems were also tested to hasten the reaction, e.g., a ternary system of carbon dioxide, water, and ethanol. This alternative printed circuit board recycling process is promising for future industrialization.

Item: Branch-and-cut for cardinality optimization (2010-12)
Authors: Sikka, Ankit; Farias, Ismael R. d.; Kobza, John E.; Smith, Milton L.
Abstract: A cardinality constrained linear programming problem (CCLP) is a linear programming problem with an additional constraint that restricts the number of nonnegative variables that may take on positive values. This problem arises in a large variety of applications, such as portfolio optimization, compressive sensing, metabolic engineering, and maximum feasible subsystem. In this thesis we review the branch-and-cut approach to CCLP, focusing on the polyhedral approach. We first present branching strategies to solve this model through branch-and-cut. To set the stage for important results on CCLP, we give some important results on the cardinality constrained knapsack polytope (CCKP). We then determine when the trivial inequalities define facets of CCKP. Finally, we discuss the nontrivial inequalities, which can be used as cuts in a branch-and-cut scheme.
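For orientation, a generic CCLP can be stated as follows; this is a common textbook form, not necessarily the exact formulation studied in the thesis:

\[
\min\; c^{\top}x \quad \text{s.t.}\quad Ax \le b,\quad 0 \le x \le u,\quad \bigl|\{\, j : x_j > 0 \,\}\bigr| \le K.
\]

For branch-and-cut, the cardinality constraint is typically modeled with binary indicators \(y_j \in \{0,1\}\) through \(x_j \le u_j y_j\) and \(\sum_j y_j \le K\), and facet-defining inequalities of the associated knapsack polytope are added as cutting planes.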
Item: Control chart for complex systems with trended mean and non-constant variance (2012-05)
Authors: Ramirez, Jose G.; Beruvides, Mario G.; Temblador-Perez, Maria del Carmen; Smith, Milton L.; Limon-Robles, Jorge; Cordero-Franco, Alvaro E.
Abstract: This research focuses on the monitoring of complex systems; the main objective is to define a technique to monitor a quadratic behavior when the standard deviation is linearly trended. A three-paper format was chosen for this dissertation. The first paper presents the mathematical model that the data follow and a first approach to a control chart in which time series analysis (with an autoregressive approach to identify the parameters of the quadratic behavior) is used to model the center line, and the control limits are established following traditional control charting theory. A correction factor was found to be necessary to obtain adequate results, and the resulting chart detects almost all signals; a numerical example is provided. The second paper uses the same principles but identifies the parameters of the quadratic behavior with the likelihood function, from which the center line is again estimated; the resulting control limits are smoother than in the first approach, and the chart appears to perform even better. The third paper performs extensive Monte Carlo simulation to determine the performance of the proposed approaches and to compare them with an equivalent method, the regression control chart (RCC). Results show that both the LSE and MLE approaches perform well for larger shifts, detecting most signals while controlling the Type I error.
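The abstract describes charting around an estimated quadratic center line. A minimal sketch of that general idea, assuming least-squares estimation, constant-sigma residual limits, and the plain 3-sigma rule (all simplifications of the dissertation's method; the data are simulated):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(100, dtype=float)
# Hypothetical process: quadratic mean with a linearly trended noise level
y = 0.02 * t**2 + 0.5 * t + 3.0 + (1.0 + 0.05 * t) * rng.standard_normal(t.size)

# Fit the quadratic center line by least squares
beta = np.polyfit(t, y, deg=2)
center = np.polyval(beta, t)

# Residual-based control limits (constant-sigma simplification)
sigma = np.std(y - center, ddof=3)  # ddof accounts for 3 fitted parameters
ucl, lcl = center + 3 * sigma, center - 3 * sigma
signals = np.where((y > ucl) | (y < lcl))[0]
print("out-of-control points:", signals)
```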
Item: Enrollment management optimization using operations research (2008-12)
Authors: Delgado-Coutino, Carlos B.; Kobza, John E.; Matis, Timothy I.; Smith, Milton L.
Abstract: Texas Tech University has devoted additional resources to recruitment efforts to help address declines in applications and yield. To best utilize these resources, however, a model-based approach is needed that allows decision makers to meet institutional goals by managing the recruitment and enrollment of students at Texas Tech or any educational institution. The proposed modeling approach is goal programming. The admissions goal planning model developed in this research is a powerful decision-making tool for optimizing university admissions planning and for setting a rational basis for admissions policy based on institutional enrollment goals. It is used to plan admissions for the coming year with the intent of increasing the effectiveness of recruitment efforts. The model is a single-period preemptive integer goal programming model that manages student flows. The decision variables are the numbers of freshman students admitted (admittance levels) in each student category, and the goals are those contained in the Texas Tech University 2005 Strategic Plan. At the beginning of each academic year, admissions and enrollment management personnel can plan their recruitment efforts based on the admissions policy for the coming year. The model provides the best solution possible subject to the constraints, goals, and priority structure established, satisfying the Texas Tech Strategic Plan goals to the best possible extent. The admissions policy was found not to be sensitive to changes in the priority structure, and additional effort by the admissions officers should be focused on the goal levels.
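The model's actual categories, goals, and preemptive priority levels are not given in the abstract. A minimal weighted-goal-programming sketch in the same spirit, assuming the PuLP library and invented numbers (large weights stand in for preemptive priorities):

```python
import pulp

# Two hypothetical student categories; decision variables are admittance levels
x1 = pulp.LpVariable("admit_cat1", lowBound=0, cat="Integer")
x2 = pulp.LpVariable("admit_cat2", lowBound=0, cat="Integer")
# Deviation variables for a total-enrollment goal and a category-mix goal
d1m = pulp.LpVariable("under_total", lowBound=0)
d1p = pulp.LpVariable("over_total", lowBound=0)
d2m = pulp.LpVariable("under_mix", lowBound=0)

prob = pulp.LpProblem("admissions_goal_plan", pulp.LpMinimize)
# Weighted proxy for a preemptive priority structure (weight 100 >> 1)
prob += 100 * (d1m + d1p) + 1 * d2m
prob += x1 + x2 + d1m - d1p == 4000      # goal 1: total admitted near 4000
prob += x2 + d2m >= 1200                 # goal 2: at least 1200 from category 2
prob += x1 <= 3500                       # capacity-style hard constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(x1.value(), x2.value())
```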
Item: Flight gate assignment and proactive flight gate reassignment optimization for hub and spoke airline operations (2010-12)
Authors: Maharjan, B.; Matis, Timothy I.; Kobza, John E.; Smith, Milton L.; Simonton, James L.; Bremer, Ronald H.
Abstract: The flight gate assignment problem is encountered by gate managers at an airport on a periodic basis. The assignment should be made in such a way as to balance the perspectives of the airline and the customer simultaneously, while providing buffers against disruptive unexpected events. In this dissertation, a binary integer multicommodity gate flow network model is presented for finding the optimal flight-gate assignment, with the objective of minimizing both the fuel burn cost of aircraft taxi by type and the expected walking distance of connecting passengers magnified by time windows. While this network formulation is efficient, a heuristic approach of grouping gates into zones and sub-zones is developed for large problem instances in which non-polynomial complexity becomes prohibitive. The formulation and heuristic are demonstrated on the gating of scheduled Continental Airlines flights at George Bush Intercontinental Airport in Houston (IAH). Reassignments occur when scheduled flight gate assignments are disrupted, causing gate conflicts due to flight delays. Flight delays are caused by a host of problems, such as inclement weather, tardy crews, mechanical problems, tardy passengers, airport security issues, airport congestion, and delay propagation between airports. In this dissertation, a binary integer program is also formulated for the optimal reassignment of planes to gates in response to day-of-flight delays. This program minimizes the total walking distance of those connecting and originating passengers whose boarding passes for reassigned flights were issued prior to the gate reassignment, since such reassignments can cause passenger disruption at the airport. A numerical illustration based on actual Continental Airlines operations at George Bush Intercontinental Airport exhibits the speed and efficiency of the model.
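As a hedged sketch of the flavor of such a model, a generic assignment core (not the dissertation's multicommodity network formulation) is:

\[
\begin{aligned}
\min\;& \sum_{f \in F}\sum_{g \in G} c_{fg}\,x_{fg} \\
\text{s.t.}\;& \sum_{g \in G} x_{fg} = 1 \quad \forall f \in F \quad \text{(each flight gets exactly one gate)} \\
& x_{fg} + x_{f'g} \le 1 \quad \forall g \in G \text{ and all pairs } f, f' \text{ whose gate occupancy times overlap} \\
& x_{fg} \in \{0,1\},
\end{aligned}
\]

where the cost \(c_{fg}\) would aggregate the taxi fuel burn for the aircraft type of flight \(f\) at gate \(g\) and the time-window-weighted expected walking distance of its connecting passengers.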
Item: Influence of intellectual capital intermediaries on technical workforce capacity (Texas Tech University, 2008-12)
Authors: Maldonado, Cesar; Beruvides, Mario G.; Smith, Milton L.; Holden, Orbry; Simonton, James L.
Abstract: America's promotion of science and its appreciation of the economic benefit derived from an educated workforce have fueled innovation that has made America a global economic power. This dependence on a skilled workforce requires that America constantly upgrade its technology and rejuvenate education processes with updated content. The charge of enhancing the nation's knowledge base rests mostly on state-controlled educational systems. In Texas, the development of workforce skills is divided among many educational organizations, most of which operate under one of two state agencies. Although these organizations work toward the common goal of an educated and well-trained workforce, they have diverse stakeholders and serve different populations. The large size and hierarchical structure of public education organizations, coupled with their diverse constituents, create functional constraints like those found in similarly structured private-sector organizations. Such large, vertically integrated organizations develop informational latency that fosters distortion when scaling successfully piloted reforms. One option for overcoming the latency across the education domains and the workforce domain is an organizational structure proven successful in multi-organizational projects and joint ventures: the matrix organization. Since the framework of this structure creates a horizontal cross-functional configuration, independent management and domain neutrality are critical to its success. The current work investigated the effectiveness of an independent intermediary in guiding reforms in an education-workforce, multi-domain system. The intermediary was structured as a project-like matrix organization and coordinated activities between the public education, higher education, and workforce domains. It was guided by a private-sector-led board of directors elected by regional public and higher education stakeholders. The intermediary was independent of the three domains yet operated within their respective functional frameworks; this gave it autonomy while keeping it conversant with the region's educational programs and workforce needs, so that it could effectively design, scale, and implement curriculum alignment programs. Limiting the intermediary to a regional scope achieved social and economic commensurability between the stakeholders. This moderate span also made goals tangible to the stakeholders and program progress assessable, and it spawned a culture of cooperation and trust. The results of the intermediary's work were measured through its effect on "workforce capacity," a function of the cumulative intellectual capital created by the education system and its alignment with workforce needs. The hypothesis was that a properly structured educational intermediary would provide a unifying organizational framework through which workforce capacity could be increased. This was tested with the cumulative results of three sub-hypotheses regarding two subpopulations: students enrolled in an intermediary-supported program (TP cohort) and those not enrolled in the program (NTP cohort). The results show statistically significant improvements for the TP cohort over the NTP cohort in high school completion rates, 93.5% versus 82.9%, and in transition to 2-year post-secondary institutions, 23.6% versus 16.8%. The results also show that the transition rate to 4-year post-secondary institutions for the TP cohort, 39.2%, was not statistically different from that of the NTP cohort, 38.2%. The cumulative influence of the intermediary was a net increase in technical workforce capacity as defined in the current work.
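The abstract reports rate comparisons but not the test used or the cohort sizes; purely as an illustration, differences in proportions like these are often assessed with a two-proportion z-test (the cohort sizes of 400 below are invented):

```python
import math
from statistics import NormalDist

def two_proportion_z(success1, n1, success2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = success1 / n1, success2 / n2
    pooled = (success1 + success2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical cohort sizes paired with the reported completion rates
z, p = two_proportion_z(round(0.935 * 400), 400, round(0.829 * 400), 400)
print(f"z = {z:.2f}, p = {p:.4f}")
```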
Item: Lean in hospitality services across State University (2011-05)
Authors: Velusamy, Senthilkumar; Simonton, James L.; Farris, Jennifer; Smith, Milton L.
Abstract: The food service industry is one of the largest employers in the United States, and food service at universities forms a considerable part of it. Food service operations face the difficult task of providing high-quality food while reducing the waste and costs involved in food production. Lean principles and lean tools help reduce waste and thereby reduce costs; they have been widely applied in the manufacturing sector but rarely in the service industry. This research applies lean principles to reducing waste and improving the productivity of a food service operation in a university setting. The results showed a considerable reduction in costs and improvement in productivity while keeping food quality at the highest level.

Item: Mitigating the stochastic effects of fading in mobile wireless ad-hoc networks (Texas Tech University, 2007-12)
Authors: Guardiola, Ivan G.; Matis, Timothy I.; Bartolacci, Mike; Kobza, John E.; Smith, Milton L.; Collins, Terry R.
Abstract: This research considers the impact of fast fading on the discovery and maintenance of communication routes in mobile wireless ad-hoc networks, upon the consideration that fast fading directly impacts the performance of the underlying communication protocols for such networks. It is illustrated herein that protocol design should be based on consideration of the operating environments in which such networks are deployed. This research provides a statistical interpretation of link quality based on the instantaneous received power under various multi-path fading models, for which the associated Type 1 and Type 2 errors are defined. Based on this viewpoint, this document proposes embedding GPS information into a protocol in order to block the inclusion of unreliable links within the route of communication. This implementation results in a dramatic enhancement of the end-to-end performance of the mobile wireless ad-hoc communication network. Thus, this dissertation presents a general introduction to wireless communication networks and their inherent issues, and elaborates in detail this new statistical interpretation of fast fading and the results obtained from employing GPS information to realize an efficient protocol design methodology.
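The dissertation's specific fading models and error definitions are not reproduced in the abstract. As a hedged example of the kind of link-quality statement involved, under Rayleigh fading the instantaneous received power is exponentially distributed, so the probability it drops below a decoding threshold has a closed form (the power values below are invented):

```python
import math

def rayleigh_outage_probability(mean_power_mw, threshold_mw):
    """P(received power < threshold) when the instantaneous power is
    exponentially distributed with the given mean (Rayleigh fading)."""
    return 1.0 - math.exp(-threshold_mw / mean_power_mw)

# A link whose mean received power is 10x the receiver threshold
print(rayleigh_outage_probability(mean_power_mw=1.0, threshold_mw=0.1))  # ~0.095
```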
Item: Model for improving the logistics processes for propane delivery (2008-08)
Authors: Santithammarak, Vanlapha; Smith, Milton L.; Simonton, James L.; Kobza, John E.
Abstract: Scheduling service times, and serving customers in a way that minimizes time, cost, and distance, are important to service providers and are challenging research topics. This research problem concerns scheduling the delivery of propane to a large number of customers in a rural area. Customer demands must be predicted before service times are scheduled. In this research there are two types of customer demand: regular customer demand and call-in customer demand. The situation is akin to having regular tasks interrupted by emergency tasks. Regular customer demands are scheduled, and the regular service is assumed to be the everyday routine. Data on regular customer demands were collected from historical demand records and separated by season into summer and winter. The appropriate mean number of call-in customers per day (λ) was then calculated, and it was established that the Poisson distribution applies; call-in services are scheduled after these customers call for service. The challenge of this research is that each customer's service must be scheduled within a daily time limit. In addition, the locations of regular customers are examined so that nearby regular customers can be grouped for service on the same day. This research focuses on the method of integrating the schedule of regular customers with the call-in customers each day.
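As a small illustration of the demand model described (the rate and capacities below are hypothetical, not the company's data), daily call-in arrivals can be simulated from a Poisson distribution and checked against the slack left by the regular schedule:

```python
import numpy as np

rng = np.random.default_rng(7)
lam = 4.2              # hypothetical mean call-in customers per day
daily_capacity = 20    # hypothetical total stops a truck can serve per day
regular_stops = 15     # stops already scheduled for regular customers

# Simulate 5 days of call-in demand and check whether it fits the slack
for day, callins in enumerate(rng.poisson(lam, size=5), start=1):
    overflow = max(0, regular_stops + callins - daily_capacity)
    print(f"day {day}: {callins} call-ins, deferred stops: {overflow}")
```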
Item: Modeling agricultural recycling systems for system size and economic potential (Texas Tech University, 2007-12)
Authors: Hanson, Jeffrey Leland; Beruvides, Mario G.; Simonton, James L.; Fedler, Clifford B.; Smith, Milton L.
Abstract: Water is one of the most valued natural resources, and its availability for consumption varies considerably within any region. Recycling water and biomass through reuse systems can help preserve finite resources for future generations. One approach to this goal is modular production systems, which can use recycling technologies to preserve resources and enhance economic development and quality of life, particularly in rural areas. Livestock waste, aquatic plants, fish, and feed for livestock might seem unrelated at first glance, but combined in a modular production approach they become an effective way to recycle water and biomass while developing rural economies. A modular production system concept is applied, using existing scientific knowledge and technologies, to connect these components into a self-sustaining, environmentally friendly system. In this study, the modular production system consists of growing aquatic plants from livestock waste, feeding pond fish with a portion of these aquatic plants, then harvesting the plants and the fish to mix with agricultural biomass (cotton waste) to form a high-protein feed ration. The feed ration is then used for livestock feed, and the purified wastewater from the system is reused to initiate the cycle once again. The objective of this study is to investigate the viability of these modular systems for real-life application. Various scenarios and the cost feasibility of a modular production system are examined, with economic analyses and modeling conducted on the estimated costs of raw material, equipment, labor, and transportation. The resulting model shows that the modular systems approach is a promising alternative solution to economic, environmental, and social issues in rural areas, although the specific system modeled is not itself economically feasible.

Item: On minimizing expected warranty costs in 1-dimension and 2-dimension with different repair options (Texas Tech University, 2008-05)
Authors: Jayaraman, Raja; Matis, Timothy I.; Kobza, John E.; Smith, Milton L.; Simonton, James L.
Abstract: We study one-dimensional and two-dimensional warranty models with different options available to the manufacturer for repair or replacement upon product failure. Product failures during the warranty period incur additional costs to the manufacturer and lead to customer dissatisfaction when a suitable service action is not adopted. Forecasting and minimizing the expected cost of warranty servicing is of considerable interest to product manufacturers and decision makers, and the results of this research will enable manufacturers, reliability engineers, and warranty managers to make better decisions in developing servicing strategies, leading to significant cost savings in administering warranty programs. In the one-dimensional model we study a non-renewing combination warranty policy with an initial base warranty period (BWP) followed by a pro-rata period (PRP). Product failures during the BWP incur no cost to the customer, while during the PRP the customer can purchase a new product at a pro-rata price. The manufacturer has three options available during the BWP to address product failures, namely minimal repair, general repair, and replacement; the options vary in the degree of repair and the associated cost. We obtain the optimal product price, pro-rata period, and sales volume for each type of repair-replacement option employed, and derive stationary points and second-order conditions that minimize the manufacturer's cost of warranty servicing. We numerically illustrate the results when the lifetime distribution of the product follows (i) a Gamma distribution of order 2 and (ii) an extended Weibull distribution. In the two-dimensional model we study a non-renewing combination warranty policy defined by a rectangular region composed of three disjoint subregions. Product failures in each subregion incur variable costs based on the usage and age of the product and the repair strategy adopted. We assume pro-rated servicing costs that depend on the age and usage of the product at the time of failure. We consider the warranty duration limits set by the manufacturer for restricted and unrestricted product usage, and compare the effects of two servicing strategies based on when the manufacturer can exercise the replacement option. Using an estimated failure intensity and usage distribution, we derive expressions for the expected cost of warranty servicing based on conditioning arguments. We numerically compare the restricted and unrestricted cases for both servicing strategies to obtain the minimal expected cost to the manufacturer. Finally, we conclude with a brief discussion of extensions to the models developed and future research directions.
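A standard reference point for cost models of this kind (not necessarily the thesis's exact cost function): under minimal repair, failures occur according to a nonhomogeneous Poisson process with intensity \(\lambda(t)\), so the manufacturer's expected repair cost over a base warranty period of length \(W\) is

\[
E[C(W)] = c_m \int_0^W \lambda(t)\,dt,
\]

where \(c_m\) is the cost of one minimal repair. For a Weibull intensity \(\lambda(t) = (\beta/\eta)(t/\eta)^{\beta-1}\), this evaluates to \(E[C(W)] = c_m (W/\eta)^{\beta}\).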
Item: On the application of non-zero-sloping analysis to complex systems: a case study involving mutual funds (Texas Tech University, 2009-05)
Authors: McGrath, Daniel Andrew; Beruvides, Mario G.; Smith, Milton L.; Simonton, James L.; Zartman, Richard E.
Abstract: Given a data series that shows a trend (either growth or decay), charting methods can be used as a tool to assess control status. Complex systems display emergent properties that differ from those of their components and pose a measurement challenge, as inputs typically cannot be quantified. Data from 15 mutual funds that demonstrated a strong growth trend were used as typical data from a complex system formed by aggregation. An extensive literature search identified two candidate methods for comparison. The two-part control method used a classic Shewhart control chart of individual data via a surveillance approach, with σ from the population used to establish control limits; only the 3σ alarm rule was investigated. Linear regression was performed, followed by a detailed assessment of the residuals. The test method plotted the residuals from the regression on a control chart after normalization to standard normal form with conservative control limits; this output was named the Regression Analysis Standardized Residuals (RASR) chart. Robustness to missing data points, autocorrelation, and non-normal conditions was explored using the RASR chart, and the RASR chart proved superior to the control method. A three-part chart (I-RASR) was demonstrated that combines: a tacit view of the trending data in their natural units of measurement; an overlay of the best-fitted trend line with its goodness of fit displayed (as r²); and the control status of the data relative to the fitted line expressed in standard deviations (the RASR chart), which allows a probabilistic interpretation relative to the operational definition in effect. The main benefits of the I-RASR chart stem from the independence and identical distribution of the resulting data: it is robust to slope, sets control limits realistically, allows runs to be viewed accurately, and provides a true comparison of the data to the a priori assumption of normality.
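A minimal sketch of the standardized-residual charting idea behind the RASR chart, assuming a linear fit, simulated fund data, and a plain ±3 alarm rule (the dissertation's limits are described as more conservative):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(60, dtype=float)
price = 50 + 0.8 * t + rng.normal(0, 2, t.size)  # hypothetical trending fund data

# Fit the linear trend, then standardize the residuals to N(0, 1) form
slope, intercept = np.polyfit(t, price, deg=1)
residuals = price - (slope * t + intercept)
z = (residuals - residuals.mean()) / residuals.std(ddof=2)  # ddof: 2 fitted params

# Chart the standardized residuals against +/- 3 limits
print("r^2 =", round(np.corrcoef(t, price)[0, 1] ** 2, 3))
print("alarms:", np.where(np.abs(z) > 3)[0])
```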
Item: Optimal return strategy for a unique nonrandom unit competing with random arrivals (Texas Tech University, 1966-05)
Authors: Smith, Milton L.
Abstract: Not available.

Item: Predicting required maintenance and repair funding based on standard facility data elements (Texas Tech University, 2007-05)
Authors: Tolk, Janice N.; Collins, Terry R.; Simonton, James L.; Smith, Milton L.; Matis, Timothy I.
Abstract: Government entities and educational institutions have billions of dollars invested in facility portfolios designed to supply services to those they support. Maintaining these portfolios requires continuous investment to keep them viable and able to meet their intended mission. In the past fifteen years, owners of these portfolios have realized that the facilities have degraded to the point that they may not be usable, may require a significant investment to return them to full service, and require a continuous financial commitment to maintain. Both government and educational institution managers have realized that they allowed this situation to occur through chronic underinvestment in annual maintenance, and they now face a large backlog of deferred maintenance and a potential loss of mission. This research investigates the underlying cause of chronic underfunding of the annual maintenance and repair of large facility portfolios, reviews the related literature for existing methods of estimating annual maintenance and repair funding, and develops a model that a facility portfolio manager can use, based on facility attributes commonly found in a condition assessment program. In addition, the research determines the effect on the developed model of varying facility portfolio size and facility model types, and compares the developed model to the three models most often cited in the related literature. Using multiple regression analysis, a prediction equation was derived for the research portfolio; it correlates well with one of the models cited in the literature but not with the other two. Further, the research found that "fine tuning" a prediction equation to a specific facility portfolio yields the best results, although a more generic model is useful for an order-of-magnitude estimate.

Item: Propane demand modeling for residential sectors: a regression analysis (2011-05)
Authors: Shenoy, Nitin K.; Smith, Milton L.; Kobza, John E.; Simonton, James L.
Abstract: This thesis presents a forecasting model for propane consumption in the residential sector. In this research we explore the dynamic behavior of the different variables that affect propane consumption and develop a forecasting model. The significant factors affecting propane consumption in houses were the heating degree days of the area, wind speed, precipitation, and house size; for mobile homes, only heating degree days were significant. Customer behavior was assumed to be static. The model is based on multiple regression methods, with data collected from a local propane company in West Texas. Different combinations of months were used to study propane consumption behavior for each month, and these studies were used to generate the final forecasting model. Because propane consumption is low from June through September, the best results were obtained when data for the months from October through May were used in the analysis. The results indicate that the forecasting model provides a potentially useful forecast.
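A hedged sketch of a regression of the shape described, with heating degree days, wind speed, precipitation, and house size as predictors (all values and coefficients are simulated, not the thesis's data; statsmodels is assumed to be available):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 200
hdd = rng.uniform(100, 900, n)      # heating degree days for the month
wind = rng.uniform(5, 25, n)        # average wind speed
precip = rng.uniform(0.0, 3.0, n)   # precipitation
sqft = rng.uniform(900, 3200, n)    # house size
# Simulated monthly propane consumption (gallons)
gallons = (20 + 0.45 * hdd + 3.0 * wind + 5.0 * precip + 0.02 * sqft
           + rng.normal(0, 30, n))

X = sm.add_constant(np.column_stack([hdd, wind, precip, sqft]))
model = sm.OLS(gallons, X).fit()
print(model.params)     # fitted intercept and coefficients
print(model.rsquared)   # overall fit quality
```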