Browsing by Subject "Energy efficiency"
Now showing 1 - 15 of 15
Item Analytical methods and strategies for using the energy-water nexus to achieve cross-cutting efficiency gains (2013-12) Sanders, Kelly Twomey; Webber, Michael E., 1971-
Energy and water resources share an important interdependency. Large quantities of energy are required to move, purify, heat, and pressurize water, while large volumes of water are necessary to extract primary energy, refine fuels, and generate electricity. This relationship, commonly referred to as the energy-water nexus, can introduce vulnerabilities to energy and water services when insufficient access to either resource inhibits access to the other. It also creates areas of opportunity, since water conservation can lead to energy conservation and energy conservation can reduce water demand. This dissertation analyzes both sides of the energy-water nexus by (1) quantifying the extent of the relationship between these two resources and (2) identifying strategies for synergistic conservation. It is organized into two prevailing themes: the energy consumed for water services and the water used in the power sector. Chapter 2 describes a national assessment of the United States' energy consumption for water services. This assessment is the first to quantify energy embedded in water at the national scale with a methodology that differentiates consistently between primary and secondary uses of energy for water. The analysis indicates that energy use in the residential, commercial, industrial, and power sectors for direct water and steam services was approximately 12.3 quadrillion BTU, or 12.6% of 2010 annual primary energy consumption in the United States. Additional energy was used to generate steam for indirect process heating, space heating, and electricity generation. Chapter 3 explores the potential energy and emissions reductions that might follow regional shifts in residential water heating technologies.
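The Chapter 2 figures above can be sanity-checked with simple arithmetic: 12.3 quadrillion BTU stated as 12.6% of the national total implies a 2010 total of roughly 98 quadrillion BTU. A minimal sketch of that check (figures from the abstract; the implied total is a back-calculation, not a value reported in the dissertation):

```python
# Sanity check of the embedded-energy share reported for 2010.
# Figures from the abstract: 12.3 quadrillion BTU for water services,
# stated as 12.6% of total U.S. primary energy consumption.
water_related_energy_quads = 12.3
stated_share = 0.126

# Implied national total (quadrillion BTU)
implied_total_quads = water_related_energy_quads / stated_share

# Recompute the share from the implied total
recomputed_share = water_related_energy_quads / implied_total_quads

print(f"Implied 2010 total: {implied_total_quads:.1f} quads")
print(f"Recomputed share:  {recomputed_share:.1%}")
```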
Results suggest that the scale of energy and emissions benefits derived from shifts in water heating technologies depends on regional characteristics such as climate, electricity generation mix, water use trends, and population demographics. The largest opportunities for energy and emissions reductions through changes in water heating approaches are in locations with carbon dioxide-intensive electricity mixes; however, these are generally the areas least likely to shift toward more environmentally advantageous devices. In Chapter 4, water withdrawal and consumption rates for 310 electric generation units in Texas are incorporated into a unit commitment and dispatch model of ERCOT to simulate water use at the grid scale for a baseline 2011 case. Then, the potential for water conservation in the power generation sector is explored. Results suggest that the power sector might be a viable target for cost-effective reductions in water withdrawals, but reductions in water consumption are more difficult and more expensive to achieve.

Item Assessing the performance of demand-side strategies and renewables : cost and energy implications for the residential sector (2015-05) Bouhou, Nour El Imane; Machemehl, Randy B.; Blackhurst, Michael F.; Caldas, Carlos H; Olmstead, Sheila M; Hersh, Matthew
Many public and private entities have invested heavily in efficiency measures and renewable sources to generate energy savings and reduce fossil fuel consumption. Private utilities have invested over $4 billion in energy efficiency, with 56% of these investments directed toward consumer incentives. However, the magnitude of the expected savings and the effectiveness of the technological measures remain uncertain.
Multiple studies attribute these uncertainties to behavioral phenomena such as “the rebound effect.” This work provides insights into the uncertainties generating potential differences between expected and observed performances of demand-side measures (DSM) and distributed generation strategies, using mixed methods that employ both empirical analyses and engineering economics. This study also provides guidelines to stakeholders to effectively use the benefits from DSM strategies towards asset preservation for affordable multifamily houses. Section 2 describes how joint efficiency gains compare to similar singular efficiency gains for single-family households and discusses the implications of these differences. This work provides empirical models of marginal technical change for multiple residential electricity end-uses, including space conditioning technologies, appliances, devices, and electric vehicles. Results indicate that the relative household level of technological sophistication significantly influences the performance of demand-side measures, particularly the presence of a programmable thermostat. For space conditioning, results demonstrate that sufficiently consistent technical improvement leads to net energy savings, which could be due to technical factors or to a declining marginal rebound effect. Section 3 empirically evaluates the performance of distributed residential photovoltaic (PV) solar panels and identifies the technological and demographic factors influencing PV performance and adoption choice. Results show that modeling PV adoption choice significantly impacts the household energy demand, suggesting that the differences between the actual evaluated behavioral responses and the self-reported changes in electricity consumption are more complex than assumed by other studies. The analysis indicates that electricity use decreases marginally for PV adopters if sufficient efficiency improvements in space conditioning are made.
Results further imply that households that adopt solar panels might “take back” roughly 24% of the annual electricity production for PV technologies. Section 4 describes replicable engineering economic models for estimating conventional rehabilitation, energy, and water retrofit costs for low-income multi-family housing units. The purpose of this study is to prioritize policy interventions aimed at maintaining property location and use, and to identify the capital investment needs that could be partially provided by local and state housing authorities. Section 5 synthesizes the work, describes future work, provides guidelines for local and state efficiency program administrators, and offers insights on prioritizing and designing efficiency interventions.

Item Beam lift - a study of important parameters : (1) well bore orientation effects on liquid entry into the pump. (2) Pumping unit counterbalance effects on power usage. (3) Pump friction and the use of sinker bars. (2016-05) Carroll, Grayson Michael; Bommer, Paul Michael; Espinoza, David N.
This study discusses three different aspects of rod pumping. Chapter 1 focuses on flow regimes associated with low-pressure horizontal wells. By understanding how oil and gas interact with each other in both the horizontal and vertical portions of the wellbore, downhole pump assemblies can be optimized to increase pump fillage. The addition of a flexible dip tube into the horizontal section of the wellbore makes it possible to set the pump above the kickoff point, but is only effective if the dip tube can be engineered to be submerged in the fluid. Chapter 2 is an evaluation of the effect of pumping unit counterbalance on power consumption. If the pumping unit is out of balance, it will generate power during portions of the stroke. The Motorwise motor controller attempts to save power by shutting off the motor and allowing the rotational inertia of the unit to operate the pump.
Although this device does save power on pumping units that are out of balance, it is of little use if the operator maintains the pumping unit in balance. Chapter 3 discusses the role of viscous friction in rod string design and the importance of sinker bars in counteracting compression forces at pump level. On the downstroke, the plunger must overcome any mechanical friction as well as the viscous friction from fluid flowing through the traveling valve and the annular space between the plunger and inside of the barrel. In a barrel completely full of liquid, the plunger will establish a free fall velocity. If the plunger is required to fall faster than free fall, the plunger must be pushed. If the plunger can reach terminal velocity with additional weight, the increase in viscous friction inside the pump equals the weight added. Thus, the critical plunger velocity for the onset of buckling of a ¾” sucker rod varies depending on the viscosity of the fluid. Adding a single 1.5”, 25-foot sinker bar is sufficient to counteract the compression from viscous pump friction up to practical pumping speed limits.

Item Energy analysis of toplighting strategies for office buildings in Austin (2012-12) Motamedi, Sara; Garrison, Michael; Novoselac, Atila; Whitsett, Dason
The purpose of this study is to determine the energy impacts of daylighting through toplights in a hot, humid climate. Daylight in the working environment improves the quality of the space and the productivity of employees. In addition, natural light is a free energy resource. On one hand, a proper daylighting design, such as distributed toplights, can reduce electrical lighting consumption. On the other hand, in a hot climate like Austin, heat gain is a major concern. Therefore, this thesis is shaped around this question: Can toplighting strategies save energy in Austin despite the fact that buildings receive more direct heat gain through toplights?
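The thesis question above reduces to an energy balance: lighting savings on one side, added solar gain and envelope conductance on the other. A minimal sketch of that trade-off (all numbers are hypothetical illustrations, not values from the study):

```python
# Net site-energy effect of a toplighting strategy (illustrative numbers).
# A toplight saves electrical lighting energy but adds cooling load from
# solar gain and conductance; a design "wins" if net savings are positive.
def net_toplight_savings(lighting_saved, added_cooling, added_heating):
    """All arguments in kWh/yr of site energy; returns net savings."""
    return lighting_saved - added_cooling - added_heating

# Hypothetical one-story office in a hot-humid climate:
savings = net_toplight_savings(lighting_saved=7000,
                               added_cooling=2500,
                               added_heating=500)
print(f"Net site-energy savings: {savings} kWh/yr")
```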
Daylighting is all the more important because electrical lighting accounts for a significant portion (21%) of total building energy use. In this thesis I investigated the reduction of lighting electricity and compared that with the total effects of toplights on external conductance, lighting heat gain, and solar gain. The results show that, in terms of site energy, a proper toplighting strategy can reduce electrical lighting use by up to 70% with a smaller impact on heating and cooling loads. This means that toplights generally can be energy-efficient alternatives for a one-story office building. Building on this, I studied which toplights are most efficient: north sawtooth roofs, south sawtooth roofs, monitor roofs, or simple skylights. I compared different toplighting strategies and provided a design guide containing graphs of site energy, source energy, and annual cost savings per square foot, as well as the light distribution of each toplight. I believe this can accelerate implementation of efficient toplighting strategies in the design process. Having established that daylighting savings outweigh the added heat gain, I concluded with a comparison of skylights with different visible transmittance (VT) and solar heat gain coefficient (SHGC) values. The major result of this thesis is that proper toplighting strategies can save energy despite the increased solar gain. It is anticipated that the thesis findings will promote the implementation of toplighting strategies and higher-VT glass types in the energy-efficient building industry.

Item Energy-efficient mechanisms for managing on-chip storage in throughput processors (2012-05) Gebhart, Mark Alan; Keckler, Stephen W.; Burger, Douglas C.; Erez, Mattan; Fussell, Donald S.; Lin, Calvin; McKinley, Kathryn S.
Modern computer systems are power or energy limited. While the number of transistors per chip continues to increase, classic Dennard voltage scaling has come to an end.
Therefore, architects must improve a design's energy efficiency to continue to increase performance at historical rates, while staying within a system's power limit. Throughput processors, which use a large number of threads to tolerate memory latency, have emerged as an energy-efficient platform for achieving high performance on diverse workloads and are found in systems ranging from cell phones to supercomputers. This work focuses on graphics processing units (GPUs), which contain thousands of threads per chip. In this dissertation, I redesign the on-chip storage system of a modern GPU to improve energy efficiency. Modern GPUs contain very large register files that consume between 15% and 20% of the processor's dynamic energy. Most values written into the register file are only read a single time, often within a few instructions of being produced. To optimize for these patterns, we explore various designs for register file hierarchies. We study both a hardware-managed register file cache and a software-managed operand register file. We evaluate the energy tradeoffs in varying the number of levels and the capacity of each level in the hierarchy. Our most efficient design reduces register file energy by 54%. Beyond the register file, GPUs also contain on-chip scratchpad memories and caches. Traditional systems have a fixed partitioning among these three structures. Applications have diverse requirements, and often a single resource is most critical to performance. We propose to unify the register file, primary data cache, and scratchpad memory into a single structure that is dynamically partitioned on a per-kernel basis to match the application's needs. The techniques proposed in this dissertation improve the utilization of on-chip memory, a scarce resource for systems with a large number of hardware threads. Making more efficient use of on-chip memory both improves performance and reduces energy.
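Taking the figures above at face value, a 54% cut to a structure that consumes 15-20% of dynamic energy implies roughly an 8-11% chip-level reduction. A quick sketch of that arithmetic (the bounds come from the abstract; the chip-level product is my back-of-envelope inference):

```python
# Chip-level dynamic-energy savings from a register-file optimization.
# The register file consumes 15-20% of GPU dynamic energy, and the
# proposed hierarchy reduces register-file energy by 54%.
def chip_level_savings(rf_fraction, rf_reduction):
    """Fraction of total dynamic energy saved chip-wide."""
    return rf_fraction * rf_reduction

for rf_fraction in (0.15, 0.20):
    saved = chip_level_savings(rf_fraction, rf_reduction=0.54)
    print(f"RF at {rf_fraction:.0%} of dynamic energy -> "
          f"{saved:.1%} chip-level savings")
```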
Future efficient systems will be achieved through the combination of several such techniques that improve energy efficiency.

Item Evaluating an energy efficiency project for an existing commercial building (2011-12) Krasner, William Paul; Nichols, Steven Parks, 1950-; Duvic, Robert Conrad, 1947-
In this thesis I provide general guidelines for a commercial building owner’s decision making process for heating, ventilation, and air-conditioning (HVAC) system energy efficiency projects, discuss an example HVAC project at an existing building, and recommend the most energy-efficient, cost-effective project option. First, the inefficiencies in a building’s HVAC system are identified; the systems and their components can be investigated to understand how they operate. With the building owner’s interests in mind, alternatives can be developed to improve these systems. Consulting engineers, contractors, and other building professionals can assist in this process. Defining realistic project alternatives requires engineering and construction considerations, and each alternative carries costs, benefits, and trade-offs. The costs (mainly investment and operational costs) and the benefits (mainly available financial incentives) are identified in dollars for each alternative. The alternatives can be evaluated with Building Life Cycle Cost (BLCC) software. In this evaluation the net present-value (NPV) method is used to rank the alternatives, and the highest-ranking (lowest life-cycle-cost) alternative is recommended to the owner. In the example, an existing commercial building’s HVAC systems are considered. The construction plans, the facilities records, and the existing field conditions were investigated and analyzed. A few operational inefficiencies were identified.
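The present-value ranking described above can be sketched as follows. The cash flows, incentives, and discount rate here are hypothetical placeholders; the BLCC software implements a far more detailed life-cycle model:

```python
# Rank HVAC project alternatives by present value of life-cycle cost.
def present_value(annual_cost, rate, years):
    """Present value of a constant annual cost over `years` at `rate`."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

def life_cycle_cost(investment, annual_cost, rate, years, incentives=0.0):
    return investment - incentives + present_value(annual_cost, rate, years)

# Hypothetical alternatives over a 30-year study period at a 3% rate:
alternatives = {
    "existing system":          life_cycle_cost(0,     12000, 0.03, 30),
    "premium-efficiency motor": life_cycle_cost(25000, 10500, 0.03, 30,
                                                incentives=3000),
    "energy recovery wheel":    life_cycle_cost(60000, 10000, 0.03, 30,
                                                incentives=5000),
}
best = min(alternatives, key=alternatives.get)
print(f"Lowest life-cycle cost: {best}")
```

With these illustrative numbers, the motor replacement edges out both the existing system and the costlier wheel retrofit; the actual recommendation in the thesis rests on its own estimated costs and rebates.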
To address two of these existing inefficiencies, two alternatives were considered: replacing the standard-efficiency air handling unit motors with premium-efficiency motors, and renovating the ventilation system with an energy recovery wheel. The investment costs, the available rebates, the net annual energy savings, and the energy and other operational costs were estimated, over a 30-year study period, for each of these alternatives, and compared to the costs of the existing system. The BLCC evaluations were performed across a range of discount rates in the present-value calculations. Based on the lowest present-value life-cycle cost reports, only the premium-efficiency motor replacement project is recommended.

Item E³ : energy-efficient EDGE architectures (2010-08) Govindan, Madhu Sarava; Keckler, Stephen W.; Burger, Douglas C.; McKinley, Kathryn S.; Chiou, Derek; Hunt, Jr., Warren A.; Brooks, David
Increasing power dissipation is one of the most serious challenges facing designers in the microprocessor industry. Power dissipation, increasing wire delays, and increasing design complexity have forced industry to embrace multi-core architectures or chip multiprocessors (CMPs). While CMPs mitigate wire delays and design complexity, they do not directly address single-threaded performance. Additionally, programs must be parallelized, either manually or automatically, to fully exploit the performance of CMPs. Researchers have recently proposed an architecture called Explicit Data Graph Execution (EDGE) as an alternative to conventional CMPs. EDGE architectures are designed to be technology-scalable and to provide good single-threaded performance as well as exploit other types of parallelism, including data-level and thread-level parallelism. In this dissertation, we examine the energy efficiency of a specific EDGE instruction set architecture (ISA), the TRIPS ISA, and two microarchitectures, TRIPS and TFlex, that implement it.
The TRIPS microarchitecture is a first-generation design that proves the feasibility of the TRIPS ISA and distributed tiled microarchitectures. The second-generation TFlex microarchitecture addresses key inefficiencies of the TRIPS microarchitecture by matching the resource needs of applications to a composable hardware substrate. First, we perform a thorough power analysis of the TRIPS microarchitecture. We describe how we develop architectural power models for TRIPS. We then improve power-modeling accuracy using hardware power measurements on the TRIPS prototype combined with detailed Register Transfer Level (RTL) power models from the TRIPS design. Using these refined architectural power models and normalized power modeling methodologies, we perform a detailed performance and power comparison of the TRIPS microarchitecture with two different processors: 1) a low-end processor designed for power efficiency (ARM/XScale) and 2) a high-end superscalar processor designed for high performance (a variant of Power4). This detailed power analysis provides key insights into the advantages and disadvantages of the TRIPS ISA and microarchitecture compared to processors on either end of the performance-power spectrum. Our results indicate that the TRIPS microarchitecture achieves 11.7 times better energy efficiency than ARM, and approximately 12% better energy efficiency than Power4, in terms of the Energy-Delay-Squared (ED²) metric. Second, we evaluate the energy efficiency of the TFlex microarchitecture in comparison to TRIPS, ARM, and Power4. TFlex belongs to a class of microarchitectures called Composable Lightweight Processors (CLPs). CLPs are distributed microarchitectures designed with simple cores and are highly configurable at runtime to adapt to the resource needs of applications. We develop power models for the TFlex microarchitecture based on the validated TRIPS power models.
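The ED² comparisons above use a standard efficiency metric: energy multiplied by the square of delay, so lower is better and performance is weighted more heavily than raw energy. A minimal sketch with hypothetical energy and runtime numbers (not measurements from the dissertation):

```python
# Energy-Delay-Squared (ED^2): lower is better. Squaring delay rewards
# performance, so a fast-but-hungry core can beat a frugal-but-slow one.
def ed2(energy_joules, delay_seconds):
    return energy_joules * delay_seconds ** 2

# Hypothetical processors running the same benchmark:
low_power = ed2(energy_joules=2.0,  delay_seconds=10.0)  # slow, frugal
high_perf = ed2(energy_joules=40.0, delay_seconds=1.0)   # fast, hungry
print(f"low-power ED^2: {low_power} J*s^2")
print(f"high-perf ED^2: {high_perf} J*s^2")
```

Here the high-performance core wins on ED² despite using 20 times the energy, which is exactly why the metric is used when performance still matters.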
Our quantitative results indicate that by better matching execution resources to the needs of applications, the composable TFlex system can operate in both regimes of low power (similar to ARM) and high performance (similar to Power4). We also show that the composability feature of TFlex achieves a significant improvement (2 times) in the ED² metric compared to TRIPS. Third, using TFlex as our experimental platform, we examine the efficacy of processor composability as a potential performance-power trade-off mechanism. Most modern processors support a form of dynamic voltage and frequency scaling (DVFS) as a performance-power trade-off mechanism. Since the rate of voltage scaling has slowed significantly in recent process technologies, processor designers are in dire need of alternatives to DVFS. In this dissertation, we explore processor composability as an architectural alternative to DVFS. Through experimental results we show that processor composability achieves almost as good a performance-power trade-off as pure frequency scaling (no changes in supply voltages), and a much better performance-power trade-off than voltage and frequency scaling (both supply voltage and frequency change). Next, we explore the effects of additional performance-improving techniques for the TFlex system on its energy efficiency. Researchers have proposed a variety of techniques for improving the performance of the TFlex system. These include: (1) block mapping techniques to trade off intra-block concurrency with communication across the operand network; (2) predicate prediction; and (3) an operand multicast/broadcast mechanism. We examine each of these mechanisms in terms of its effect on the energy efficiency of TFlex, and our experimental results demonstrate the effects of operand communication and speculation on the energy efficiency of TFlex.
Finally, this dissertation evaluates a set of fine-grained power management (FGPM) policies for TFlex: instruction criticality and controlled speculation. These policies rely on a temporally and spatially fine-grained dynamic voltage and frequency scaling (DVFS) mechanism for improving power efficiency. The instruction criticality policy seeks to improve power efficiency by mapping critical computation in a program to higher performance-power levels, and non-critical computation to lower performance-power levels. The controlled speculation policy, on the other hand, maps blocks that are highly likely to be on the correct execution path in a program to higher performance levels, and the other blocks to lower performance levels. Our experimental results indicate that idealized instruction criticality and controlled speculation policies improve the operating range and flexibility of the TFlex system. However, when the actual overheads of fine-grained DVFS, especially the energy conversion losses of voltage regulator modules (VRMs), are considered, the power efficiency advantages of these idealized policies quickly diminish. Our results also indicate that the current conversion efficiencies of on-chip VRMs need to improve to as high as 95% for the realistic policies to be feasible.

Item Fate of the Houston skyline : strategies adopted for rehabilitating mid-century modern high-rises (2014) Srinivasan, Urmila; Holleran, Michael
A recent report by Terrapin Bright Green, “Mid-century (Un)Modern,” discusses the desperate condition of mid-century modern high-rises in Manhattan. The article argues that it would be beneficial both economically and environmentally to demolish these buildings and build new ones with an assumed increase in FAR. To re-build, repair, or re-skin are the questions Mid-century Modern High-rises (MMH) face today. This study focuses on Houston, Texas, which is very different from New York City both climatically and from a planning standpoint.
It is dreaded for its hot and humid climate and notorious for its consistent refusal to adopt any zoning. These high-rises in Houston represent the economic success of the city immediately after WWII. These buildings were constructed as the city transformed from the Bayou City to the Space City. In this study I have mapped the status of these high-rises and the strategies that were used to renovate them. I further ask how preservation and energy efficiency are addressed when these buildings are renovated. Even preservationists might agree that not all buildings are equal and that a new look would benefit some. The real challenge lies in resolving the grey areas, where one is not talking about a Seagram or a Lever House, but a well-designed, environmentally sensitive building.

Item Low temperature heat and water recovery from supercritical coal plant flue gas (2015-08) Reimers, Andrew Samuel; Webber, Michael E., 1971-; Buckingham, Fred P
For this work, I constructed an original thermodynamic model to estimate waste heat and water recovery from the flue gas of a supercritical coal plant burning lignite, subbituminous, or bituminous coal. This model was written in MATLAB as a system of linear equations based on first and second law analyses of the power plant components. This research is relevant because coal accounted for the largest increase in primary energy consumption worldwide as recently as 2013. Coal-fired electricity generation is particularly water intensive. As populations increase, especially in the developing world, much of the increased demand for electricity will be provided by new coal-fired power plants. One way to improve the efficiency of a coal-fired power plant is to recover the low temperature waste heat from the flue gas and use it to preheat combustion air or boiler feedwater. A low temperature economizer or flue gas cooler can be used for this purpose to achieve overall efficiency improvements as high as 0.4%.
However, a side effect of the efficiency improvements is an increase in the water consumption factor of nearly 10%. The water consumption factor can be reduced with the addition of a flue gas dryer after the flue gas cooler. The flue gas dryer is a condensing heat exchanger between the flue gas and ambient air. As the flue gas cools, its water content condenses and can be recovered and treated for use within the plant. In general, the results indicate that low temperature waste heat and water recovery from boiler flue gas would be more feasible and beneficial for coal plants burning lignite as opposed to higher quality coal. Because these plants already have a lower efficiency, the relative increase in efficiency is somewhat higher. Similarly, the relative increase in water consumption factor is somewhat lower for a lignite plant. The high moisture content and dew point of the flue gas produced from lignite combustion make it easier to recover water with a flue gas dryer. The higher water recovery factor along with the lower water consumption factor means that a greater percentage of the water evaporated in the cooling tower can be recovered in the flue gas dryer of a lignite plant than for a plant burning higher quality coal.

Item Modeling and optimization for energy efficient large scale cooling operation (2013-12) Kapoor, Kriti; Edgar, Thomas F.
Optimal chiller loading (OCL) is described as a means to improve the energy efficiency of chiller plant operation. It is formulated as a multi-period constrained mixed-integer nonlinear optimization problem to optimize the total cooling load distribution through accurate chiller models. OCL is solved as a set of quadratic programs using a sequential quadratic programming (SQP) algorithm in MATLAB. Based on application of the methodology to chiller systems at UT Austin and a semiconductor manufacturing facility, OCL can result in annual energy savings of about 8%.
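At its core, optimal chiller loading distributes a total cooling demand across chillers so total power is minimized. A brute-force sketch with two hypothetical quadratic chiller power curves (the dissertation solves the full multi-period mixed-integer problem via SQP in MATLAB; the coefficients below are invented for illustration):

```python
# Optimal chiller loading (OCL), brute force over integer load splits.
# Each chiller's power draw is a quadratic function of its load (tons);
# minimize total power subject to the loads summing to the demand.
def chiller_a_power(load):   # hypothetical fitted model, kW vs tons
    return 0.012 * load**2 + 0.5 * load + 40

def chiller_b_power(load):
    return 0.008 * load**2 + 0.7 * load + 55

def optimal_split(total_load, step=1):
    best = None
    for load_a in range(0, total_load + 1, step):
        load_b = total_load - load_a
        power = chiller_a_power(load_a) + chiller_b_power(load_b)
        if best is None or power < best[0]:
            best = (power, load_a, load_b)
    return best

power, load_a, load_b = optimal_split(300)
print(f"Split {load_a}/{load_b} tons -> {power:.1f} kW total")
```

For these convex curves the optimum puts more load on the chiller with the flatter quadratic term; the real formulation adds time periods, on/off integer decisions, and plant-level constraints.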
However, the savings may shrink considerably when additional physical constraints apply to overall plant operation. With the addition of thermal energy storage (TES) to the system, OCL can reduce daily cooling costs under time-varying electricity prices by 13.45% on average. The energy efficiency of a chiller plant as a function of its chiller arrangement is studied using fitted chiller models. If all other variables are kept the same, chillers operating in parallel consume up to 9.62% less power than when they are operated in series. Otherwise, chillers may operate up to 12.26% more efficiently in series depending on their chilled water outlet temperature values. Determining the optimal chiller arrangement can be straightforward in some cases or a complex optimization problem in others.

Item Performance and energy efficiency via an adaptive MorphCore architecture (2014-05) Khubaib; Patt, Yale N.
The level of Thread-Level Parallelism (TLP), Instruction-Level Parallelism (ILP), and Memory-Level Parallelism (MLP) varies across programs and across program phases. Hence, every program requires different underlying core microarchitecture resources for high performance and/or energy efficiency. Current core microarchitectures are inefficient because they are fixed at design time and do not adapt to variable TLP, ILP, or MLP. I show that if a core microarchitecture can adapt to the variation in TLP, ILP, and MLP, significantly higher performance and/or energy efficiency can be achieved. I propose MorphCore, a low-overhead adaptive microarchitecture built from a traditional OOO core with small changes. MorphCore adapts to TLP by operating in two modes: (a) as a wide-width large-OOO-window core when TLP is low and ILP is high, and (b) as a high-performance low-energy highly-threaded in-order SMT core when TLP is high.
MorphCore adapts to ILP and MLP by varying the superscalar width and the out-of-order (OOO) window size, operating in four modes: (1) as a wide-width large-OOO-window core, (2) as a wide-width medium-OOO-window core, (3) as a medium-width large-OOO-window core, and (4) as a medium-width medium-OOO-window core. My evaluation with single-thread and multi-thread benchmarks shows that when the highest single-thread performance is desired, MorphCore achieves performance similar to a traditional out-of-order core. When energy efficiency is desired on single-thread programs, MorphCore reduces energy by up to 15% (on average 8%) over an out-of-order core. When high multi-thread performance is desired, MorphCore increases performance by 21% and reduces energy consumption by 20% over an out-of-order core. Thus, for multi-thread programs, MorphCore's energy efficiency is similar to highly-threaded throughput-optimized small and medium core architectures, and its performance is two-thirds of their potential.

Item Principled control of approximate programs (2015-12) Sui, Xin, Ph.D.; Pingali, Keshav; Chiou, Derek; Dhillon, Inderjit; Fussell, Donald S.; Ramachandran, Vijaya
In conventional computing, most programs are treated as implementations of mathematical functions for which there is an exact output that must be computed from a given input. However, in many problem domains, it is sufficient to produce some approximation of this output. For example, when rendering a scene in graphics, it is acceptable to take computational shortcuts if human beings cannot tell the difference in the rendered scene. In other problem domains like machine learning, programs are often implementations of heuristic approaches to solving problems and therefore already compute approximate solutions to the original problem.
This is the key insight behind the new research area of approximate computing, which attempts to trade off such approximations against the cost of computational resources such as program execution time, energy consumption, and memory usage. We believe that approximate computing is an important step towards a more fundamental and comprehensive goal that we call information-efficiency. Current applications compute more information (bits) than is needed to produce their outputs, and since producing and transporting bits of information inside a computer requires energy, computation time, and memory, information-inefficient computing leads directly to resource inefficiency. Although there is now a fairly large literature on approximate computing, system researchers have focused mostly on what we can call the forward problem; that is, they have explored different ways in both hardware and software to introduce approximations in a program and have demonstrated that these approximations can enable significant execution speedups and energy savings with some quality degradation of the result. However, these efforts do not provide any guarantee on the amount of the quality degradation. Since the acceptable amount of degradation usually depends on the scenario in which the application is deployed, it is very important to be able to control the degree of approximation. In this dissertation, we refer to this problem as the inverse problem. Relatively little is known about how to solve the inverse problem in a disciplined way. This dissertation makes two contributions towards solving the inverse problem. First, we investigate a large set of approximate algorithms from a variety of domains in order to understand how approximation is used in real-world applications. From this investigation, we determine that many approximate programs are tunable approximate programs.
Tunable approximate programs have one or more parameters, called knobs, that can be changed to vary the quality of the output of the approximate computation as well as the corresponding cost. For example, an iterative linear equation solver can vary the number of iterations to trade solution quality against execution time, and a Monte Carlo path tracer can change the number of sampled light paths to trade the quality of the resulting image against execution time. Tunable approximate programs provide many opportunities for trading accuracy against cost. By carefully analyzing these algorithms, we have found a set of patterns for how approximation is applied in tunable programs. Our classification can be used to identify new approximation opportunities in programs. A second contribution of this dissertation is an approach to solving the inverse problem for tunable approximate programs. Concretely, the problem is to determine knob settings that minimize cost while keeping the quality degradation within a given bound. There are four challenges: (i) for real-world applications, quality and cost are usually complex non-linear functions of the knobs that are hard to express analytically; (ii) the quality and cost of an application vary greatly across inputs; (iii) when an acceptable quality degradation bound is presented, determining the knob setting must be very efficient, so that the overhead of the search does not exceed the cost saved by the approximation; and (iv) the approach should be general, so that it can be applied to many applications. To meet these requirements, we formulate the inverse problem as a constrained optimization problem and solve it using a machine-learning-based approach. We build a system that uses machine learning techniques to learn cost and quality models for the program by profiling it on a set of representative inputs.
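The iteration knob mentioned above can be illustrated with a toy example (this Jacobi solver and test system are our own sketch, not code from the dissertation):

```python
import numpy as np

def jacobi(A, b, iterations):
    """Jacobi iteration: the `iterations` knob trades solution
    quality (residual) against execution cost."""
    x = np.zeros_like(b)
    D = np.diag(A)              # diagonal entries of A
    R = A - np.diagflat(D)      # off-diagonal remainder
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

# A diagonally dominant system, so Jacobi converges.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])

# Turning the knob up shrinks the residual but costs more iterations.
for knob in (5, 20, 80):
    residual = np.linalg.norm(A @ jacobi(A, b, knob) - b)
    print(f"iterations={knob:3d}  residual={residual:.2e}")
```

Each knob setting corresponds to one point on the program's quality/cost trade-off curve.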
Then, when a quality degradation bound is presented, the system searches these error and cost models to identify the knob settings that achieve the best cost savings while statistically guaranteeing the quality degradation bound. We evaluate the system on a set of real-world applications, including a social network graph partitioner, an image search engine, a 2-D graph layout engine, a 3-D game physics engine, an SVM solver, and a radar signal processing engine. The experiments showed significant savings in execution time and energy across a variety of quality bounds.

Item Quantifying the economic and environmental tradeoffs of electricity mixes in Texas, including energy efficiency potential using the Rosenfeld effect as a basis for evaluation (2010-12) Lott, Melissa Christenberry; Webber, Michael E., 1971-; Schmidt, Philip

Electricity is a complex and interesting topic for research and investigation. At the systems level, electricity involves many steps, from generation (power plants) through transmission and distribution to delivery and final use. Within each of these steps is a set of tradeoffs that are region-specific, depending heavily on the types of generation technologies and input fuels used to generate the electricity. These tradeoffs are complex and often not positively correlated with one another, producing a web of information that makes conclusions about the net benefit of changes to the electricity generation mix difficult to determine using general rules of thumb. As individuals look to change the mix of technologies and fuels used to generate electricity for environmental or economic reasons, this complex web results in a lack of clarity about the consequences of particular choices. Quantitative tools could provide individuals with clear information and an improved understanding of the tradeoffs associated with changes to the electricity mix.
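The inverse-problem search described in the approximate-computing abstract above (minimize predicted cost subject to a quality bound) might look like the following sketch; the stand-in cost and error models and the exhaustive search are hypothetical illustrations, not the system's learned models:

```python
def predicted_cost(knob):
    """Stand-in learned cost model: cost grows with the knob."""
    return 2.0 * knob

def predicted_error(knob):
    """Stand-in learned error model: error shrinks as the knob grows."""
    return 1.0 / knob

def tune(knob_values, error_bound):
    """Return the cheapest knob setting whose predicted quality
    degradation stays within the bound (None if none qualifies)."""
    feasible = [k for k in knob_values if predicted_error(k) <= error_bound]
    return min(feasible, key=predicted_cost) if feasible else None

print(tune(range(1, 101), error_bound=0.05))  # -> 20: cheapest feasible setting
```

In the real system, the profiled models replace these stand-ins, and the guarantee on the bound is statistical rather than exact.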
Unfortunately, prior to this research, no such tools existed that provided a clear, rigorous, and unbiased quantitative comparison of the region-specific environmental and economic tradeoffs associated with changes to the electricity mix. This research filled that gap by developing a methodology for calculating the environmental and economic impacts of changes to the electricity generation mix for individual regions. The methodology was applied specifically to Texas to develop the Texas Interactive Power Simulator (TIPS), an interactive tool accessible online. This tool is currently used for direct instruction in undergraduate courses at The University of Texas at Austin. Preliminary data were collected to determine its usefulness as a classroom aid. These data revealed that a majority of students enjoyed using TIPS, felt that they learned about the tradeoffs of electricity generation methods by using it, and wished that more learning tools like it were available to them. This research also investigated the potential for energy efficiency to satisfy a portion of the electricity demand that would otherwise be supplied by a generation technology. The methodology and series of decision criteria developed in this investigation were used to determine the amount of generation that could reasonably be satisfied with energy efficiency technologies and supportive policies for a particular region of interest, in this case Texas. This methodology used the Rosenfeld Effect as a basis for evaluating the energy efficiency potential of a specific region, providing a more realistic maximum energy efficiency value than theoretical maximum gains based on current best available technology. The resulting estimates were then compared to efficiency potential estimates from the American Council for an Energy-Efficient Economy (ACEEE) and the Public Utility Commission of Texas (PUCT).
In this research, I found that Texas is unlikely to realize annual savings of more than 11%, or about 1.5 megawatt-hours per capita, compared to 2007 use levels based on nominal energy efficiency approaches. When this potential savings was applied to offset future demand increases in Texas, it was found that new generation capacity would still be needed over the next few decades to meet increasing total electricity demand. I used the economic and environmental tradeoff analysis and energy efficiency limitations methodologies established in my research to calculate the economic and environmental tradeoffs of changes to the electricity mix under several scenarios, including federal energy and climate legislation, a nuclear renaissance, high wind power growth, and maximized energy efficiency. The outputs from these scenarios yielded the following observations:

1. Energy efficiency is unlikely to replace more than 11% of total per capita electricity demand in Texas. This level of energy efficiency might reduce total demand in the state, but population growth and its corresponding impacts on state electricity use might outpace the savings from energy efficiency in the long term. This population growth could result in an overall increase in total annual state electricity use, despite energy efficiency gains.

2. While nuclear power might be environmentally advantageous in terms of total greenhouse gas emissions compared to fossil fuel-fired power plants, it has very high up-front capital costs and is very water-intensive.

3. A federal combined energy efficiency and renewable portfolio standard might require states to install new renewable power generation capacity.
In some states, including Texas, the amount of required new generation capacity may be small because of existing state initiatives encouraging the installation of renewable generation capacity and the potential to offset some generation requirements with energy efficiency.

Item Refining building energy modeling through aggregate analysis and probabilistic methods associated with occupant presence (2013-08) Stoppel, Christopher Michael; Leite, Fernanda

The building sector is the largest energy consumer among the United States' end-use sectors. As a result, the public and private sectors will continue to place great emphasis on designing energy-efficient buildings that minimize operating costs while maintaining a healthy environment for their occupants. Creating design-phase building energy models can facilitate the selection of life-cycle-appropriate design strategies aimed at maximizing building energy efficiency. The primary objective of this research is to gain greater insight into the likely causes of variation between the energy predictions of building energy models and post-occupancy building energy performance. Identifying sources of error can improve future modeling efforts, potentially leading to greater accuracy and better decisions during the building's design phase. My research approach is to develop a method for conducting retrospective analysis of building energy models in the areas that affect the building's predicted and actual energy consumption. This entails collecting pre-construction and post-occupancy data from the various entities that influence the building's energy performance. The method is then applied to recently constructed military dormitory buildings that used building energy modeling during design and now have actual, metered energy consumption data. The study also examines how building occupancy affects energy performance.
This work provides additional insight for future building energy modeling efforts.

Item Sustainable energy roadmap for Austin : how Austin Energy can optimize its energy efficiency (2010-12) Johnston, Andrew Hayden, 1979-; Oden, Michael; Spelman, William

This report asks how Austin Energy can optimally operate residential energy efficiency and demand side management programs, including demand response measures. Efficient energy use is the act of using less energy to provide the same level of service. Demand side management encompasses utility initiatives that modify the level and pattern of electrical use by customers without adjusting consumer behavior; it is required when a utility must respond to increasing energy needs, or demand, from its customers. In order to achieve the 20% carbon emissions reduction and 800 MW peak demand reduction mandated by the Generation, Resource and Climate Plan, Austin Energy must aggressively pursue increased customer participation by expanding education and technical services, enlist the full functionality of a smart grid, and thereby reduce energy consumption, peak demand, and greenhouse gas emissions. Energy efficiency is in fact the cheapest source of energy at Austin Energy's disposal between 2010 and 2020, but this service threatens the utility's revenues. With the rise of onsite renewable energy generation and advanced demand side management, utilities must rethink the ways they generate revenue. As greenhouse gas emissions regulations loom on the horizon, the century-old business model of "spinning meters" will be fundamentally challenged nationally in the coming years. Austin Energy can develop robust analytical methods to determine its most cost-effective energy efficiency options while setting a clear policy direction of promoting energy efficiency and addressing the three-fold challenges of peak demand, greenhouse gas emissions, and total energy savings.
This report concludes by providing market-transforming recommendations for Austin Energy.