Browsing by Subject "Efficiency"
Now showing 1 - 20 of 29
Item: A novel approach to soft-switching power converters (2016-05)
Engelkemeir, Frederick Donald; Hallock, Gary; Valvano, Jonathan; Baldick, Ross; Grady, W. Mack; Gattozzi, Angelo
Modern power converters operate using PWM (Pulse Width Modulation), switching their semiconductor switches on and off at a very high rate to achieve high efficiency. While the losses of the switches are very low in either the on or the off state, the transition times give rise to switching losses, which increase with increasing switching frequency. Soft-switching techniques aim to eliminate these losses by forcing a zero-voltage or a zero-current condition on the switch during each switching cycle. This is important for improving the efficiency of power electronics in light of ever-increasing demands to conserve energy, and it also allows the power electronics to be made smaller and more compact. While soft-switching has been applied successfully to simpler applications such as DC-DC converters, it has been difficult to apply to general-purpose inverters (such as those that drive AC motors). The ARCP (Auxiliary Resonant Commutated Pole) is one of the most promising topologies for soft-switching an inverter; however, it has some drawbacks. During the course of my research, I have devised two alternative topologies that aim to address the ARCP's limitations. In addition, I have developed several novel control schemes to gate these ARCP inverters more efficiently and reliably with a minimum of sensor feedback. I have developed my ARCP technology in Simulink models and on a 20 kW experimental prototype. I have also implemented and tested some of the control scheme improvements on a 2 MW ARCP machine at CEM (UT Center for Electromechanics), which is still the largest ARCP converter ever built. In the process I have also developed techniques to accurately measure the efficiency of high-efficiency power converters.
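On that last point, the sketch below illustrates, with made-up numbers and not a technique from the dissertation, why direct input-output power measurement becomes delicate for converters in this efficiency class: the instrument error band quickly becomes comparable to the losses being measured.

```python
# Hypothetical illustration of measuring a ~98%-efficient converter.
# Not the measurement method from the dissertation; a generic error-propagation sketch.

def efficiency_uncertainty(p_in, p_out, rel_err=0.005):
    """Efficiency and its worst-case uncertainty when the input and output
    power readings each carry a +/- rel_err relative instrument error."""
    eta = p_out / p_in
    # Worst cases: output read high while input reads low, and vice versa.
    eta_hi = (p_out * (1 + rel_err)) / (p_in * (1 - rel_err))
    eta_lo = (p_out * (1 - rel_err)) / (p_in * (1 + rel_err))
    return eta, eta_hi - eta, eta - eta_lo

if __name__ == "__main__":
    # A nominal 20 kW converter dissipating 400 W of losses (eta = 0.98).
    eta, plus, minus = efficiency_uncertainty(20_000.0, 19_600.0)
    print(f"eta = {eta:.4f} (+{plus:.4f} / -{minus:.4f})")
    # The roughly +/-1% efficiency band is comparable to the 2% of power being
    # lost, which is why input-output measurements of high-efficiency
    # converters need great care.
```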
Item: A physiological investigation of performance rating for repetitive type sedentary work (Texas Tech University, 1963-06)
Manuel, Robert Ralph
Time study as generally employed is a technique for measuring the time required by a trained operator to perform a specific amount of production under standardized working conditions. Historically, time study was first introduced through the efforts of Frederick W. Taylor in 1881 during his studies at the Midvale Steel Company in Philadelphia. From the time of Taylor's contribution to the present, the practice of time study has steadily improved "... until today it is recognized as a necessary tool for the effective operation of business or industry."

Item: An analysis of ways to maximize the efficiency of the NEPA environmental process at the Texas Department of Housing and Community Affairs (2010-05)
Ramphul, Ryan Christian; Paterson, Robert G.; Mueller, Elizabeth J.
In light of the substantial sums of money that the Texas Department of Housing and Community Affairs (TDHCA) was awarded through the American Recovery and Reinvestment Act of 2009, ways to maximize the efficiency of the agency's various processes are highly sought after. The TDHCA environmental review process, which is required by the National Environmental Policy Act (NEPA), is one of the longest processes that people applying for federal funding through TDHCA must face. It is, therefore, a process that would substantially benefit the agency by being made more efficient.
In this report, areas where applicants find the TDHCA environmental process difficult are illustrated by a systematic tabulation of the deficiency reviews sent to a sample of applicants from 2009. Additionally, survey data collected from people who submit environmental applications, as well as from people who review them, provide quantitative data about specific areas of the process where applicants meet with difficulty, along with qualitative data about where survey-takers feel the process could be made easier and more efficient. The data indicate that applicants have significant difficulty knowing how to start the environmental process, which documents are necessary, and how to fill out those documents. In terms of suggestions, the results indicate that a more elaborate, user-friendly environmental webpage, complete with examples of required documents and of how to fill them out, would make the environmental process considerably easier for applicants. With the process easier for applicants, TDHCA Environmental Specialists will hopefully not need to send out as many deficiency reviews and will instead be able to review applications faster and issue environmental clearance more quickly, thus making the process more efficient.

Item: An assessment of operations and maintenance costs in public-private partnerships (2014-08)
Martinez, Sergio Eduardo; Walton, C. Michael; Murphy, Michael Ross
Public-private partnerships (PPPs) for the delivery of transportation infrastructure are said to offer increased efficiency resulting from the private sector's life-cycle approach to design and construction. While the literature on PPPs endorses such efficiencies, studies do not provide empirical support for that claim. The goal of this thesis was to assess that notion, and four tasks were carried out to explore the issue. First, a literature review searched for evidence of such efficiencies and for methodologies to evaluate them. Second, a simple methodology to evaluate the life-cycle cost-efficiency of the public and private sectors was proposed. Third, since most PPP projects in the U.S. are recent and currently subject to routine operations and maintenance (O&M), indicators to compare those costs were proposed as well. Fourth, a case study compared the routine O&M costs of a PPP with those of a system of publicly developed and managed toll roads. The literature review found no empirical evidence of superior O&M cost-efficiency for PPPs, and most studies focused on design and construction cost and schedule overruns. While some studies assessing performance and/or efficiency were at times theoretical and not likely employed in practice, one methodology is proposed to evaluate life-cycle cost-efficiency. The case study results showed that the concessionaire was more cost-efficient than the public system in terms of operating expenditures (OPEX) per mile (-60%) and per lane-mile (-53%). The public system was more cost-efficient in OPEX per vehicle-mile travelled (97%), per toll transaction (332%), and per dollar of toll revenue (20%). However, those three indicators depend on traffic volume, which during the study period was overwhelmingly greater on the public system. While the case study showed cost-efficiency differences between the public and private sectors, additional research is needed to empirically test the hypothesis of greater efficiency of the private sector.
The proposed framework can be used, but adequate data and further assumptions about O&M costs are needed; for that, it is recommended that more comprehensive case studies be performed to obtain detailed empirical data. A better understanding of the differences in cost-efficiency between publicly and privately managed roads will help decision-makers minimize the life-cycle cost of their investments.
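As a rough illustration of the kind of indicators the case study compares, the sketch below computes routine-O&M cost ratios for a hypothetical concession and a hypothetical public system; the figures are placeholders, not data from the thesis.

```python
# Illustrative sketch of routine-O&M cost indicators for comparing a
# concessionaire with a public toll road system. All numbers are made up.

def opex_indicators(opex, centerline_miles, lane_miles, vmt, transactions, revenue):
    """Return OPEX normalized by several exposure measures."""
    return {
        "opex_per_mile": opex / centerline_miles,
        "opex_per_lane_mile": opex / lane_miles,
        "opex_per_vmt": opex / vmt,
        "opex_per_transaction": opex / transactions,
        "opex_per_revenue_dollar": opex / revenue,
    }

concession = opex_indicators(10e6, 40, 240, 300e6, 25e6, 60e6)    # hypothetical PPP
public = opex_indicators(45e6, 90, 500, 2.5e9, 200e6, 450e6)      # hypothetical system

for key in concession:
    diff = (concession[key] - public[key]) / public[key] * 100
    print(f"{key:>24}: concession vs public {diff:+.0f}%")
```

Note that the traffic-dependent ratios (per VMT, per transaction, per revenue dollar) swing heavily with volume, which is exactly the caveat raised in the abstract.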
Item: Comparative study of the end-to-end compliant TCP protocols for wireless networks (2005-12)
Todorovic, Milan; Lopez-Benitez, Noe; Andersen, Per H.
The inability of traditional TCP protocols to recognize non-congestion-related packet loss, and the efficiency ramifications this has for the quality of communication in wireless and mixed networks, must not be ignored. Appropriate modifications have to be made to compensate for this shortcoming. The recently proposed solutions, Freeze-TCP, TCP-Probing, TCP Westwood/Westwood+, TCP Veno, TCP-Jersey, and JTCP, all present a considerable improvement over the traditional TCP protocols that treat all packet loss as a sign of congestion. But to determine which of these solutions is the best choice and worthy of possibly being adopted as a future standard, we must compare them to each other. Such a comparison has not been done, but it would offer significant insight into the effectiveness of the different mechanisms in these solutions. The underlying idea is to test the proposed protocols in various network layouts under different circumstances, with differing external interferences, in an attempt to simulate real-life scenarios as accurately as possible. The ultimate goal is to isolate the most efficient solution to the non-congestion packet loss problem of TCP in wireless networks. If, however, no solution emerges as an absolute winner, the secondary goal of this research is to identify the most efficient mechanisms from these solutions and, if possible, propose a hybrid solution that combines the advantages of the protocols presented. The information about the performance of these protocols was obtained from ns-2 simulations. However, only TCP Westwood, TCP-Jersey, TCP Veno, and JTCP were tested, as they are the only ones implemented in the ns-2 simulator by their designers. Simulations were designed to test the protocols in non-congested environments, UDP-congested environments, and environments where all of the protocols compete for the bandwidth. The performance of the protocols was measured based on three benchmark parameters: throughput, average congestion window, and time to complete a file transfer. According to the simulation results, a small group of protocols appeared at the top of the leader board in all simulations. TCP Westwood and JTCP outperformed their competition under random packet loss in both burst and long-flow testing, with the caveat that the performance of TCP Westwood was much better in LAN than in WAN topologies. JTCP displayed remarkable performance in all environments under both random and disconnection packet loss but showed a significant drop in throughput when competing with other TCP flows. Under disconnection loss we saw two protocols dominating: TCP SACK and, once again, JTCP. TCP Westwood posted average results in disconnection loss simulations.
Based on the nature of the tested environments, we concluded that the dynamic bandwidth estimation algorithm, the force behind TCP Westwood's congestion window modification, proved to be the most efficient mechanism in networks facing random packet loss. During disconnection loss, when multiple successive packets within the same window are lost, the SACK option surfaced as the most adept mechanism. The jitter-ratio-based mechanism for distinguishing between congestion and non-congestion packet loss is what allowed JTCP to perform well in the tests. Based on these findings, a hybrid was proposed, TCP Westwood-JSACK, a protocol that uses TCP Westwood's congestion window modification algorithm, JTCP's mechanism for identifying the cause of a packet loss, and TCP SACK's efficient method for recovering from heavy continuous packet loss.
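For readers unfamiliar with how such benchmarks are typically extracted, here is a minimal sketch of computing one of them, per-flow throughput, from an ns-2 trace file. It is not code from the thesis, and it assumes the classic wired-trace column layout (event, time, from, to, type, size, flags, flow id, source, destination, sequence, packet id); adjust the column indices if your traces differ.

```python
# Per-flow throughput from an ns-2 trace file (assumed classic wired format).

def flow_throughput(trace_path, flow_id, sink_node):
    """Average throughput (bits/s) of packets received at sink_node for flow_id."""
    received_bits = 0.0
    first_t, last_t = None, None
    with open(trace_path) as trace:
        for line in trace:
            fields = line.split()
            if len(fields) < 12 or fields[0] != "r":   # keep only receive events
                continue
            event_time = float(fields[1])
            to_node, size, fid = fields[3], int(fields[5]), fields[7]
            if fid == flow_id and to_node == sink_node:
                received_bits += 8 * size
                first_t = event_time if first_t is None else first_t
                last_t = event_time
    if first_t is None or last_t == first_t:
        return 0.0
    return received_bits / (last_t - first_t)

# Example usage with hypothetical names: flow_throughput("out.tr", flow_id="1", sink_node="3")
```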
Item: Computer tools for designing self-sufficient military base camps (2012-08)
Putnam, Nathan Hassan; Seepersad, Carolyn C.; Webber, Michael E., 1971-; Campbell, Matthew; Morton, David; Novoselac, Atila
Military Forward Operating Base Camps (FOBs) support and enable sustained military operations abroad by providing safe locations for soldiers and supporting contractors to eat, sleep, and maintain personal hygiene. FOBs need some amount of energy and water to provide these services but are often located in austere environments that do not have access to grid utilities. Off-grid FOBs are not self-sufficient; they are dependent on supply chains for the services they provide to camp occupants. The challenge of supplying FOBs with fuel and water and removing waste (resource resupply and waste removal comprise logistical requirements) is associated with very high human, monetary, strategic, and environmental costs. There are many research efforts across the U.S. Department of Defense (DoD) that seek to reduce FOB logistical requirements, but it is currently very difficult to identify the research efforts that are most beneficial to DoD goals. There are also many factors that make designing FOBs to be more self-sufficient challenging, including varying missions, environments, and legacy equipment at currently fielded FOBs; a lack of baseline data on FOB logistical requirements; an unclear relationship between design changes and resource use behavior; and an unclear valuation of saved resources. This research seeks to develop computer tools and contribute to a methodology that can be used to design FOBs that are more self-sufficient. More self-sufficient FOBs provide high-quality services to occupants but do so with mitigated logistical requirements. To this end, a detailed computer model of a specific type of FOB (a single 150-person Force Provider module) is developed, and baseline levels of resource requirements are established. Potentially resource-saving devices and other design changes are incorporated into the FOB model and simulated to assess each design change's effect on resource use and waste production. Then, estimated resource savings are weighed against required investment for each design change to arrive at design recommendations.
The results of this research effort are specific design recommendations for making the Force Provider system more self-sufficient, as well as computer tools and a methodology that are applicable to other off-grid habitation redesign problems.

Item: Determination of Energy Efficiency of Beef Cows under Grazing Conditions Using a Mechanistic Model and the Evaluation of a Slow-Release Urea Product for Finishing Beef Cattle (2012-02-14)
Bourg, Brandi Marie
The cow/calf phase of production represents a large expense in the production of beef, and efficient beef cows use fewer resources to obtain the same outcome in a sustainable environment. The objective of study 1 was to utilize a mechanistic nutrition model to estimate the metabolizable energy requirement (MER) of grazing cows based on changes in cow body weight (BW) and fatness measurements (body condition score, BCS), along with calf age and BW, as well as forage quality and quantity. In addition, an energy efficiency index (EEI), computed as the MER of the cow and calf divided by calf weaning BW, was used to rank cows within a herd based on their efficiency in utilizing available forage to meet their maintenance requirements and support calf growth. Data were collected from one herd of approximately 140 Santa Gertrudis cows over a four-year period and analyzed per calving cycle, conception to weaning. The model's estimate of EEI appears to be moderately heritable and repeatable across years, and efficient cows might have greater peak milk and be leaner. In typical feedlot diets, the rates of ruminal fermentation of highly processed grains and the hydrolysis rate of urea may not match. Asynchronous utilization of carbohydrate and protein would result in some portion of the urea not being utilized by the ruminal microbes and ultimately the animal. The use of slow-release urea (SRU) products offers a unique opportunity to synchronize ruminal fermentation of carbohydrate with the non-protein nitrogen (NPN) release rate. Two experiments were conducted to examine the impact of source, urea or SRU, and level of dietary NPN on 1) performance and carcass characteristics and 2) N balance of finishing cattle. Steers had a lower initial F:G when SRU was used as the only source of feed N (treatment 3), suggesting that SRU may replace both NPN and true protein feeds in finishing cattle diets. High levels of either NPN source resulted in greater N intake, urinary N excretion, and N absorption, and no major differences were observed between SRU and urea, suggesting that SRU can replace urea at different levels of N intake.
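To make the ranking idea concrete, here is a minimal sketch of the energy efficiency index as the abstract defines it (cow-plus-calf MER divided by calf weaning weight). The records, field names, and Mcal units below are hypothetical placeholders, not data from the study; a lower index means a more efficient cow.

```python
# Minimal sketch of the energy efficiency index (EEI) described above.
# EEI = (MER of cow + MER of calf over the calving cycle) / calf weaning BW.

def energy_efficiency_index(cow_mer_mcal, calf_mer_mcal, weaning_bw_kg):
    """Metabolizable energy required per kilogram of calf weaned."""
    return (cow_mer_mcal + calf_mer_mcal) / weaning_bw_kg

# Hypothetical herd records (units assumed: Mcal ME and kg).
herd = [
    {"id": "cow_01", "cow_mer": 5200.0, "calf_mer": 1900.0, "wean_bw": 260.0},
    {"id": "cow_02", "cow_mer": 5600.0, "calf_mer": 2100.0, "wean_bw": 245.0},
    {"id": "cow_03", "cow_mer": 4900.0, "calf_mer": 1800.0, "wean_bw": 270.0},
]

# Rank the herd from most to least efficient (lowest EEI first).
ranked = sorted(herd, key=lambda c: energy_efficiency_index(c["cow_mer"], c["calf_mer"], c["wean_bw"]))
for cow in ranked:
    eei = energy_efficiency_index(cow["cow_mer"], cow["calf_mer"], cow["wean_bw"])
    print(f"{cow['id']}: EEI = {eei:.1f} Mcal ME per kg weaned")
```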
Item: Determination of fission product yields of 235U using gamma ray spectroscopy (2012-12)
Lu, Christopher Hing; Biegalski, Steven R.; Landsberger, Sheldon
It is important to have a method of experimentally determining fission product yields. Statistical calculations and simulations produce very large uncertainties, while experimental determinations, depending on the methods used, tend to produce lower uncertainties. This work established a method to calculate fission product yields using gamma ray spectroscopy. In order to verify that the method was theoretically sound, a simulation was set up using OrigenArp to calculate theoretical concentrations of fission products from the irradiation of natural uranium. From these concentrations, the fission product yields were calculated to verify that they agreed with expected values.
Moving forward in the work, the total flux at the point of irradiation, in the pneumatic transfer system, was calculated and determined to be 3.9070E+11 ± 6.9570E+10 n/cm²/s at 100 kW. Once the flux was calculated, the method for calculating fission product yields was implemented and yields were calculated for 10 fission products. The calculated yields were in very good agreement (within 10.04%) with expected values taken from the ENDF-349 library. This method has strong potential in nuclear forensics, as it can provide a means for developing a library of experimentally determined fission product yields, as well as rapid post-detonation analysis.
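The core idea can be sketched very roughly: relate the number of atoms of a fission product, inferred from its measured gamma activity, to the number of fissions induced by the measured flux. The sketch below is a deliberate simplification, not the thesis methodology; it ignores decay during irradiation, burnup, and detector-efficiency corrections, and all inputs are placeholders.

```python
# Highly simplified fission-product-yield estimate from activity and flux.
import math

AVOGADRO = 6.022e23
SIGMA_F_235U = 585e-24     # approximate thermal fission cross section, cm^2

def fissions(mass_u235_g, flux_n_cm2_s, irradiation_s):
    """Total 235U fissions for a thin sample (no burnup or self-shielding)."""
    n_235 = mass_u235_g / 235.0 * AVOGADRO
    return n_235 * SIGMA_F_235U * flux_n_cm2_s * irradiation_s

def product_atoms(activity_bq, half_life_s):
    """Atoms present given the measured activity: N = A / lambda."""
    return activity_bq * half_life_s / math.log(2)

# Hypothetical measurement of a single fission product.
n_fis = fissions(mass_u235_g=0.001, flux_n_cm2_s=3.9e11, irradiation_s=3600)
n_prod = product_atoms(activity_bq=2.0e5, half_life_s=2.3e4)
print(f"apparent cumulative yield = {n_prod / n_fis:.3%}")
```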
Item: Development of a fluorescence model for the determination of constants associated with binding, quenching, and FRET efficiency and development of an immobilized FRET-peptide sensor for metal ion detection (2012-08)
Casciato, Shelly Lynn, 1984-; Holcombe, James A.; Liljestrand, Howard M.
This thesis presents a modeling program to obtain equilibrium information for a fluorescent system. Determining accurate dissociation constants for equilibrium processes involving a fluorescent mechanism can prove quite challenging. Typically, titration curves and non-linear least squares fitting of the data using computer programs are employed to obtain such constants. However, these approaches only consider the total fluorescence signal and often ignore other energy transfer processes within the system. The current model considers the impact on fluorescence from equilibrium binding (viz., metal-ligand, ligand-substrate, etc.), quenching, and resonance energy transfer. This model should provide more accurate binding constants as well as insights into other photonic processes. The equations developed for this model are discussed and are fit to experimental data from titrimetric experiments. Since the experimental data are generally in excess of the number of parameters needed to define the system, fitting is performed in an overdetermined mode and employs error minimization (either absolute or relative) to define goodness of fit. Examples of how changes in certain parameters affect the shape of the titrimetric curve are also presented. The detection of metal ions is very important, motivating the development of a metal ion sensor that provides selectivity, sensitivity, real-time in situ monitoring, and a flexible design. In order to perform in situ monitoring of trace metal ions, FRET-pair labeled peptides were attached to a Tentagel[trademark] resin surface. After soaking in non-metal and metal solutions (pH = 7.5), the resin beads gave an enhanced response in the presence of Hg²⁺ and Zn²⁺. Using a t-test, the signals of beads soaked in a solution of each of these metal ions (and of Cd²⁺) were determined to be significantly different from those of beads soaked in a solution without metal. However, the standard deviation within a set of beads was too large to differentiate a bead soaked in a non-metal solution from one soaked in a metal-containing solution.
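As a toy illustration of the overdetermined fitting step described above, the sketch below fits a deliberately simple single-site binding/quenching model to synthetic titration data with SciPy. It is far simpler than the thesis model, and the model form, parameter names, and data are all made up.

```python
# Overdetermined least-squares fit of a simple fluorescence titration model.
import numpy as np
from scipy.optimize import least_squares

def model(params, metal):
    """Fluorescence of a 1:1 binding system: free and bound species each
    contribute their own (possibly quenched) fluorescence."""
    f_free, f_bound, kd = params
    frac_bound = metal / (kd + metal)          # simple single-site isotherm
    return f_free * (1 - frac_bound) + f_bound * frac_bound

# Synthetic titration data: 12 points, many more than the 3 parameters.
metal = np.linspace(0, 50e-6, 12)              # titrant concentration, M
observed = model((1.0, 0.25, 8e-6), metal)
observed += np.random.default_rng(0).normal(0, 0.01, metal.size)

fit = least_squares(lambda p: model(p, metal) - observed,
                    x0=(1.0, 0.5, 1e-6), bounds=(0, np.inf))
print("fitted F_free, F_bound, Kd:", fit.x)
```

Using the relative instead of the absolute residual, as the abstract mentions, would simply mean dividing the residual vector by the observed values before returning it.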
Item: Development of efficient, stable organic-inorganic hybrid solar cells (2012-08)
Jayan, Baby Reeja; Manthiram, Arumugam
Developing a fundamental understanding of photocurrent generation processes at organic-inorganic interfaces is critical for improving hybrid solar cell efficiency and stability. This dissertation explores processes at these interfaces by combining data from photovoltaic device performance tests with characterization experiments conducted directly on the device. The dissertation initially focuses on exploring how morphologically and chemically modifying the organic-inorganic interface, between poly(3-hexylthiophene) (P3HT) as the electron-donating, light-absorbing polymer and titanium dioxide (TiO₂) as the electron acceptor, can result in stable and efficient hybrid solar cells. Given the heterogeneity that exists within bulk heterojunction devices, stable interfacial prototypes with well-defined interfaces between bilayers of TiO₂ and P3HT were developed, which demonstrate tunable efficiencies ranging from 0.01 to 1.6%. Stability of these devices was improved by using Cu-based hole-collecting electrodes. Efficiency values were tailored by changing TiO₂ morphology and by introducing sulfide layers such as antimony trisulfide (Sb₂S₃) at the P3HT-TiO₂ interface. The simple bilayer device design developed in this dissertation provides an opportunity to study the precise role played by nanostructured TiO₂ surfaces and interfacial modifiers using a host of characterization techniques directly on a working device. Examples introduced in this dissertation include X-ray photoelectron spectroscopy (XPS) depth profiling analysis of metal-P3HT and P3HT-TiO₂ interfaces and Raman analysis of bonding between interface modifiers like Sb₂S₃ and P3HT. The incompatibility of TiO₂ with P3HT was significantly reduced by using P3HT derivatives with -COOH moieties at the extremity of the polymer chain. The role of functional groups like -COOH in interfacial charge separation phenomena was studied by comparing the photovoltaic behavior of these devices with those based on pristine P3HT. Finally, for the hybrid solar cells discussed in this dissertation to become commercially viable, high-temperature processing steps of the inorganic TiO₂ layer must be avoided. Accordingly, this dissertation demonstrates the novel use of electromagnetic radiation in the form of microwaves to catalyze growth of anatase TiO₂ thin films at temperatures as low as 150 °C, significantly lower than those used in conventional techniques. This low-temperature process can be adapted to a variety of substrates and can produce patterned films. Accordingly, the ability to fabricate TiO₂ thin films by the microwave process at low temperatures is anticipated to have a significant impact on processing devices based on plastics.

Item: Efficient Estimation in a Regression Model with Missing Responses (2012-10-19)
Crawford, Scott
This article examines methods to efficiently estimate the mean response in a linear model with an unknown error distribution under the assumption that the responses are missing at random. We show how the asymptotic variance is affected by the estimator of the regression parameter and by the imputation method. To estimate the regression parameter, the Ordinary Least Squares method is efficient only if the error distribution happens to be normal. If the errors are not normal, we propose a One Step Improvement estimator or a Maximum Empirical Likelihood estimator to estimate the parameter efficiently. In order to investigate the impact that imputation has on estimation of the mean response, we compare the Listwise Deletion method and the Propensity Score method (which do not use imputation at all) with two imputation methods.
We show that Listwise Deletion and the Propensity Score method are inefficient. Partial Imputation, where only the missing responses are imputed, is compared to Full Imputation, where both missing and non-missing responses are imputed. Our results show that in general Full Imputation is better than Partial Imputation. However, when the regression parameter is estimated very poorly, Partial Imputation will outperform Full Imputation. The efficient estimator for the mean response is the Full Imputation estimator that uses an efficient estimator of the parameter.
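A toy sketch of the comparison discussed above: with responses missing at random (missingness depending on the covariate), listwise deletion biases the estimated mean response, while partial and full imputation from a fitted regression recover it. OLS is used here purely for simplicity, and all data are simulated.

```python
# Listwise deletion vs partial vs full imputation for the mean response.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)            # true mean of Y is 2.0
observed = rng.random(n) < 1 / (1 + np.exp(-x))   # missingness depends on x only (MAR)

# Regression fitted on the complete cases.
slope, intercept = np.polyfit(x[observed], y[observed], 1)
y_hat = intercept + slope * x

listwise = y[observed].mean()                      # drop missing responses
partial = np.where(observed, y, y_hat).mean()      # impute only the missing Y
full = y_hat.mean()                                # impute every Y

print(f"listwise: {listwise:.3f}  partial: {partial:.3f}  full: {full:.3f}")
```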
Item: Efficient Semiparametric Estimators for Nonlinear Regressions and Models under Sample Selection Bias (2012-10-19)
Kim, Mi Jeong
We study the consistency, robustness, and efficiency of parameter estimation in different but related models via a semiparametric approach. First, we revisit the second-order least squares estimator proposed in Wang and Leblanc (2008) and show that the estimator achieves semiparametric efficiency. We further extend the method to heteroscedastic error models and propose a semiparametric efficient estimator in this more general setting. Second, we study a class of semiparametric skewed distributions arising when the sample selection process causes sampling bias for the observations. We begin by assuming an anti-symmetry property of the skewing function. Taking into account the symmetric nature of the population distribution, we propose consistent estimators for the center of the symmetric population. These estimators are robust to model misspecification and reach the minimum possible estimation variance. Next, we extend the model to permit a more flexible skewing structure. Without assuming a particular form of the skewing function, we propose both consistent and efficient estimators for the center of the symmetric population using a semiparametric method. We also analyze the asymptotic properties and derive the corresponding inference procedures. Numerical results are provided to support the results and illustrate the finite-sample performance of the proposed estimators.

Item: Essays on Efficiency Analysis (2010-07-14)
Asava-Vallobh, Norabajra
This dissertation consists of four essays that investigate efficiency analysis, especially when non-discretionary inputs exist. A new approach to multi-stage Data Envelopment Analysis (DEA) for non-discretionary inputs, statistical inference discussions, and applications are provided. In the first essay, I propose a multi-stage DEA model to address the non-discretionary input issue and provide a simulation analysis that illustrates the implementation and potential advantages of the new approach relative to the leading existing multi-stage models for non-discretionary inputs, such as Ruggiero's 1998 model and Fried, Lovell, Schmidt, and Yaisawarng's 2002 model. Furthermore, the simulation results also suggest that the constant returns to scale assumption seems to be preferred when observations have similar sizes, but variable returns to scale may be more appropriate when their scales are different. In the second essay, I comment on Simar and Wilson's 2007 work. My simulation evidence shows that traditional statistical inference does not underperform the bootstrap process proposed by Simar and Wilson. Moreover, my results also show that the truncated model recommended by Simar and Wilson does not outperform the tobit model in terms of statistical inference.
Therefore, the traditional method, the t-test, and the tobit model should continue to be considered applicable tools for a multi-stage DEA model with non-discretionary inputs, despite contrary claims by Simar and Wilson. The third essay presents an example of applying my new approach to data from Texas school districts. The results suggest that a lagged variable (e.g., students' performance in the previous year), a variable which has been used in the literature, may not play an important role in determining efficiency scores. This implies that one may not need access to panel data on individual scores to study school efficiency. My final essay applies a standard DEA model and the Malmquist productivity index to commercial banks in Thailand in order to compare their efficiency and productivity before and after Thailand's Financial Sector Master Plan (FSMP), which was implemented in 2004.
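For readers new to DEA, the sketch below solves the basic building block that these multi-stage approaches start from: an input-oriented, constant-returns-to-scale efficiency score for one decision-making unit, posed as a linear program in SciPy. The data are hypothetical, and the later-stage adjustments for non-discretionary inputs are not shown.

```python
# Input-oriented CRS (CCR) DEA score for one unit, via the envelopment LP:
#   minimize theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
#                         sum_j lam_j * y_j >= y_o,  lam >= 0.
import numpy as np
from scipy.optimize import linprog

def dea_score(inputs, outputs, unit):
    """inputs: (m, n) array, outputs: (s, n) array for n units; returns theta."""
    m, n = inputs.shape
    s = outputs.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # decision vars: [theta, lam_1..lam_n]
    A_in = np.c_[-inputs[:, [unit]], inputs]          # lam*X - theta*x_o <= 0
    A_out = np.c_[np.zeros((s, 1)), -outputs]         # -lam*Y <= -y_o
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -outputs[:, unit]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Hypothetical data: two inputs and one output for four units (e.g., districts).
X = np.array([[4.0, 7.0, 8.0, 4.0],
              [140.0, 120.0, 200.0, 90.0]])
Y = np.array([[2.0, 3.0, 4.0, 1.0]])
for j in range(X.shape[1]):
    print(f"unit {j}: efficiency = {dea_score(X, Y, j):.3f}")
```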
Item: Essays on Efficiency of the Farm Credit System and Dynamic Correlations in Fossil Fuel Markets (2012-11-28)
Dang, Trang Phuong Th 1977-
Markets have always changed in response to either exogenous or endogenous shocks, and many large events have occurred in financial and energy markets in the last ten years. This dissertation examines market behavior and volatility in agricultural credit and fossil fuel markets under the exogenous and endogenous changes of the last ten years. The efficiency of elements within the United States Farm Credit System, a major agricultural lender in the United States, and the dynamic correlation between coal, oil, and natural gas prices, the three major fossil fuels, are examined. The Farm Credit System is a key lender in the U.S. agricultural sector, and its performance can influence the performance of the agricultural sector. However, its efficiency in providing credit to the agricultural sector has not been recently examined. The first essay of the dissertation assesses the performance of elements within the Farm Credit System by measuring their relative efficiency using a stochastic frontier model. The second essay addresses the changes in the relationships among coal, oil, and natural gas markets with respect to the changes and turbulence of the last decade, which has also not been fully addressed in the literature. The updated assessment of the relative performance of entities within the Farm Credit System provides information that the Farm Credit Administration and U.S. policy makers can use in their management of, and policy toward, the Farm Credit System. The measurement of the changes in fossil fuel markets' relationships provides implications for energy investment, energy portfolio management, energy risk management, and energy security. It can also be used as a foundation for structuring forecasting models and other models related to energy markets. The dynamic correlations between coal, oil, and natural gas prices are examined using a dynamic conditional correlation multivariate generalized autoregressive conditional heteroskedasticity (MGARCH DCC) model. The estimated results show that the FCS's five banks and the associations with large assets have produced credit for the U.S. agricultural sector more efficiently than smaller-sized associations. Management compensation is found to be positively associated with the system's efficiency. More capital investment and monitoring, along with possible consolidation, are implied for smaller-sized associations to enhance efficiency.
On average, the results show that the efficiency of the associations is increasing over time, while the average efficiency of the five large banks is more stable. Overall, the associations exhibit a higher variation in efficiency than the five banks. In terms of energy markets, the estimates from the MGARCH DCC model indicate significant and changing dynamic correlations and related volatility among the coal, oil, and natural gas prices. The coal price was found to experience more volatility and to become more closely related to oil and natural gas prices in recent periods. The natural gas price was found to become more stable and to drift away from its historical relationship with oil.
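Estimating a full DCC MGARCH model is involved; as a rough, heavily simplified stand-in, the sketch below tracks a time-varying correlation between two return series with a RiskMetrics-style exponentially weighted covariance. This illustrates the idea of a dynamic correlation only and is not the model used in the essay; the series are simulated.

```python
# Exponentially weighted (lambda = 0.94) time-varying correlation between two
# simulated return series, as a crude stand-in for a DCC-type estimate.
import numpy as np

def ewma_correlation(r1, r2, lam=0.94):
    cov = np.cov(r1[:20], r2[:20])                 # seed from an early window
    s11, s22, s12 = cov[0, 0], cov[1, 1], cov[0, 1]
    corr = np.empty(len(r1))
    for t in range(len(r1)):
        s11 = lam * s11 + (1 - lam) * r1[t] ** 2
        s22 = lam * s22 + (1 - lam) * r2[t] ** 2
        s12 = lam * s12 + (1 - lam) * r1[t] * r2[t]
        corr[t] = s12 / np.sqrt(s11 * s22)
    return corr

rng = np.random.default_rng(7)
oil = rng.normal(0, 0.02, 1000)                    # hypothetical daily returns
gas = 0.4 * oil + rng.normal(0, 0.03, 1000)        # partially co-moving series
print("correlation, last 5 periods:", np.round(ewma_correlation(oil, gas)[-5:], 3))
```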
Item: Extending the Petrel Model Builder for Educational and Research Purposes (2013-04-11)
Nwosa, Obiajulu C
Reservoir simulation is a very powerful tool used in the oil and gas industry to perform and provide various functions, including but not limited to predicting reservoir performance, conducting sensitivity analysis to quantify uncertainty, production optimization, and overall reservoir management. Compared to reservoirs explored in the past, current-day reservoirs are more complex in extent and structure. As a result, reservoir simulators and the algorithms used to represent dynamic systems of flow in porous media have invariably become just as complex. In order to provide the best solutions for analyzing reservoir performance, there is a need to continuously develop reservoir simulators and reservoir simulation algorithms that best represent the performance of the reservoir without compromising efficiency and accuracy. Several commercial reservoir simulation packages exist in the market and have proven to be extremely resourceful, with functionality that covers a wide range of interests in reservoir simulation, yet there is a constant need to provide better and more efficient methods and algorithms to study and manage our reservoirs. This thesis aims at bridging the gap in the framework for developing these algorithms. To this end, this project has both an educational and a research component. It is educational because it leads to a strong understanding of the topic of reservoir simulation for students, which can be daunting, especially for those who require a more direct experience to fully comprehend the subject matter. It is research focused because it will serve as the foundation for developing a framework for integrating custom-built external simulators and algorithms with the workflow of the model builder of our reservoir simulation package of choice, Petrel, through the Ocean programming environment, in a seamless manner for simulating large-scale multi-physics problems of flow in highly heterogeneous porous media. Of particular interest are the areas of model order reduction and production optimization. In-house algorithms are being developed for these areas of interest and, with the completion of this project, we hope to have developed a framework whereby we can take our algorithms, specifically developed for areas of interest, and add them to the workflow of the Petrel Model Builder. Currently, we have taken one of our in-house simulators, a two-dimensional, oil-water, five-spot waterflood pattern, as a starting point and have been able to integrate it successfully into the 'Define Simulation Case' process of Petrel as an additional choice for simulation by an end user.
In the future, we will expand this simulator with updates to improve its performance and efficiency and extend its capabilities to incorporate areas of research interest.

Item: A fundamental approximation in MATLAB of the efficiency of an automotive differential in transmitting rotational kinetic energy (2012-05)
Vaughn, James Roy; Matthews, Ronald D.; Bryant, Michael D.
The VCOST budgeting tool uses a drive cycle simulator to improve fuel economy predictions for vehicle fleets. This drive cycle simulator needs to predict the efficiency of various components of the vehicle's powertrain, including any differentials. Existing differential efficiency models either lack accuracy over the operating conditions considered or require too great an investment. A fundamental model for differential efficiency is a cost-effective solution for predicting the odd behaviors unique to a differential. The differential efficiency model itself combines the torque balance equation and the Navier-Stokes equations with models for gear pair, bearing, and seal efficiencies under a set of appropriate assumptions. Comparison of the model with existing data has shown that observable trends in differential efficiency are reproducible, in some cases to within 10% of the accepted efficiency value, over a range of torques and speeds that represents the operating conditions of the differential. Though the model is generally an improvement over existing curve fits, the potential exists for further improvement in the accuracy of the model. When the model performs correctly, it represents an immense savings over collecting data with comparable accuracy.
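As a rough, generic illustration of the component-level bookkeeping such a fundamental model performs (not the thesis model, and with placeholder ratios and loss coefficients), the sketch below combines a gear-mesh efficiency with speed-dependent bearing and seal drag, and shows efficiency rising with load as the fixed drag terms become relatively smaller.

```python
# Generic component-loss bookkeeping for a differential: output power equals
# input power minus gear-mesh losses and bearing/seal drag. Placeholder values.
import math

def differential_efficiency(t_in_nm, w_in_rad_s, ratio=3.73,
                            gear_mesh_eff=0.97,
                            bearing_drag_nm_per_rad_s=0.002,
                            seal_drag_nm=0.5):
    p_in = t_in_nm * w_in_rad_s
    w_out = w_in_rad_s / ratio
    # Torque multiplied by the final-drive ratio, reduced by mesh losses and drag torques.
    t_out = t_in_nm * ratio * gear_mesh_eff \
            - bearing_drag_nm_per_rad_s * w_out - seal_drag_nm
    p_out = max(t_out, 0.0) * w_out
    return p_out / p_in if p_in > 0 else 0.0

# Efficiency improves with load because the drag losses are roughly constant.
for torque in (20, 100, 300):                      # input torque, N*m
    eff = differential_efficiency(torque, w_in_rad_s=2 * math.pi * 2000 / 60)
    print(f"T_in = {torque:3d} N*m -> efficiency ~ {eff:.3f}")
```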
Item: Hybrid powertrain performance analysis for naval and commercial ocean-going vessels (2012-08)
Gully, Benjamin Houston; Seepersad, Carolyn C.; Webber, Michael E., 1971-; Hebner, Robert E.; Kiehne, Thomas M.; Chen, Dongmei
The need for a reduced dependence on fossil fuels is motivated by a wide range of factors: from increasing fuel costs, to the national security implications of supply, to rising concern for environmental impact. Although much focus is given to terrestrial systems, over 90% of the world's freight is transported by ship. Likewise, naval warfighting systems are critical in supporting U.S. national interests abroad. Yet the vast majority of these vessels rely on fossil fuels for operation. The results of this thesis illustrate a common theme: hybrid mechanical-electrical marine propulsion systems produce substantially better fuel efficiency than other technologies that are typically emphasized to reduce fuel consumption. Naval and commercial powertrains in the 60-70 MW range are shown to benefit substantially from the use of mechanical drive for high-speed propulsion, complemented by an efficient electric drive system for low-speed operations. This hybrid architecture proves able to best meet the wide range of performance requirements for each of these systems, while also being the most easily integrated technology option. The naval analyses evaluate powertrain options for the DDG-51 Flight III. Simulation results using actual operational profile data show that a CODLAG system produces a net fuel savings of up to 12% more than a comparable all-electric system, corresponding to a savings of 37% relative to the existing DDG-51 powertrain.
These results show that a mechanical linkage for the main propulsion engine greatly reduces fuel consumption and that, for power generation systems requiring redundancy, diesel generators represent a vastly superior option to gas turbines. For the commercial application it is shown that an augmented PTO/PTI hybrid system can reduce cruise fuel consumption more than modern sail systems, while also producing significant benefit with regard to CO2 emissions. In addition, using such a shaft-mounted hybrid system for low-speed electric drive in ports reduces NOx emissions by 29-43%, while CO is reduced 57-66% and PM may be reduced by up to 25%, depending on the specific operating mode. As an added benefit, fuel consumption rates under these conditions are reduced 20-29%.

Item: The impact of delivery methods on the profitability of commercial construction (2011-12)
Herndon, Michael Brett; Nichols, Steven Parks, 1950-; McCann, Robert B.
According to September 2011 information from the U.S. Census Bureau, the construction industry in the United States is valued at nearly eight hundred billion dollars annually. A 2004 collaborative study by the Construction Industry Institute and the Lean Construction Institute suggests that as much as fifty-seven percent of the time, effort, and material investment in construction projects does not add value to the final product. When compared with twenty-six percent waste in the manufacturing industry, it becomes obvious that the construction industry has a problem. Construction projects that come in over budget and behind schedule have become the rule rather than the exception, leading to contentious business relationships and costly litigation. This study strives to identify and analyze the primary sources of these problems. Research and industry experience point to a lack of communication and cooperation among the various entities required to complete a construction project as the leading causes of waste in the industry. Further analysis suggests that traditional forms of construction contracts encourage adversarial and non-cooperative behavior between parties. Additionally, poor communication between various contributors opens the door for additional wasted cost. Fortunately, the development of tools such as Integrated Project Delivery (IPD) and Building Information Modeling (BIM) presents new options to construction professionals that are proving to help address some of the challenges the industry faces today. IPD as a project delivery method creates a culture of collaboration and teamwork where a culture of risk avoidance and conflict once stood, while BIM provides a platform for better communication among parties. When used together, these tools can reduce or eliminate many of the major sources of waste within the industry. This thesis provides descriptions, analysis, and case studies that demonstrate the use of these tools and the potential they have to make a positive impact on the construction industry.

Item: Low Power High Efficiency Integrated Class-D Amplifier Circuits for Mobile Devices (2015-01-12)
Colli-Menchi, Adrian Israel
The consumer's demand for state-of-the-art multimedia devices such as smart phones and tablet computers has forced manufacturers to provide more system features to compete for a larger portion of the market share. The added features increase the power consumption and heat dissipation of integrated circuits, depleting the battery charge faster.
Therefore, low-power, high-efficiency circuits, such as the class-D audio amplifier, are needed to reduce heat dissipation and extend battery life in mobile devices. This dissertation focuses on new design techniques to create high-performance class-D audio amplifiers that have low power consumption and occupy less space. The first part of this dissertation introduces the research motivation and the fundamentals of audio amplification. The loudspeaker's operation and the main audio performance metrics are examined to explain the limitations in the amplification process. Moreover, the operating principle and design procedure of the main class-D amplifier architectures are reviewed to explain the performance tradeoffs involved. The second part of this dissertation presents two new circuit designs to improve the audio performance, power consumption, and efficiency of standard class-D audio amplifiers. The first work proposes a feed-forward power-supply noise cancellation technique for single-ended class-D amplifier architectures to improve the power-supply rejection ratio across the entire audio frequency range. The design methodology, implementation, and tradeoffs of the proposed technique are clearly delineated to demonstrate its simplicity and effectiveness. The second work introduces a new class-D output stage design for piezoelectric speakers. The proposed design uses stacked-cascode thick-oxide CMOS transistors at the output stage, which makes it possible to handle high voltages in a low-voltage standard CMOS technology. The design tradeoffs in efficiency, linearity, and electromagnetic interference are discussed. Finally, the open problems in audio amplification for mobile devices are discussed to delineate possible future work to improve the performance of class-D amplifiers. For all the presented works, proof-of-concept prototypes were fabricated, and the measured results were used to verify the correct operation of the proposed solutions.