Browsing by Subject "Testing"
Now showing 1 - 20 of 22
Item Austin Logistics Inc: assessing defect density (2010-12)
Nanchari, Nithin Krishna; Perry, Dewayne E.; Krasner, Herbert
Austin Logistics Inc. Solutions provides tools that help centralize resource management and optimize and maintain compliance of calling schedules for consumer financial service organizations (banks and other financial institutions). With an increasing number of customers, the amount of rework had been growing while the availability of resources notably decreased over time, negatively affecting the overall cost and quality of the software being delivered. The improvement objectives of the company and its departments were broadly stated but lacked a goal-driven nature. The Goal-Question-Metric (GQM) software measurement approach was chosen for this research initiative to better support business-driven quality improvement. Software defect density data were collected and analyzed to identify significant deviations in the software development life cycle. The initial analysis of the transformed defect-tracking data identified the negatively affected areas within the life cycle: the data showed significant variations in the requirements, design, and implementation phases of the product life cycle, helping identify various process improvement opportunities. Quantifying the change in defect density also demonstrated the effectiveness of GQM and provided valuable insights for process improvement. Based on these results, we were able to identify some of the weaknesses and shortcomings in our application development process.

Item Automated bench setup for testing H-bridges (2007-12)
Shankarasubrahmanyam, Vivek; Parten, Michael E.; Nutter, Brian; Gale, Richard O.
Characterization is an important step before integrated circuits are produced for sale, but it is expensive and time consuming. This paper analyzes methods of characterization and proposes an alternative that is not as expensive as testing on automated test equipment (ATE) and is very fast compared to characterizing on a manual bench setup. The solution automates the testing procedure on a bench setup, with the instrument-control programming accomplished in LabVIEW. The data from the tests are analyzed for repeatability, and a cost estimate is developed to aid in determining the ideal testing method for different requirements.

Item Automated bench setup to characterize and test integrated circuits efficiently (Texas Tech University, 2006-09)
Ledbetter, Christopher M.; Parten, Michael E.; Cox, Ronald H.
Discusses the importance of testing integrated circuits and different testing techniques. The creation of the automated bench test setup is shown and the process used to create one is discussed. A specific part is picked to demonstrate an automated bench tester. The test setup, the generation of the test signals, the programs required to automate the equipment, and the results are presented. LabVIEW, which excels at collecting, processing, and sending data, is used to control and automate the test setup.
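The instrument automation these bench-setup reports build in LabVIEW can also be sketched in script form. The following minimal Python sketch uses the PyVISA library (an assumption; the reports themselves use LabVIEW), with hypothetical instrument addresses and generic SCPI commands, to show the shape of an automated, repeatable sweep:

```python
# Minimal sketch of automated bench characterization in Python with PyVISA
# (an assumption; the reports use LabVIEW). The addresses and SCPI commands
# are hypothetical placeholders for real instruments.
import pyvisa

rm = pyvisa.ResourceManager()
supply = rm.open_resource("GPIB0::5::INSTR")   # hypothetical power supply
meter = rm.open_resource("GPIB0::7::INSTR")    # hypothetical multimeter

def sweep():
    """Run one supply-voltage sweep and return (vdd, reading) pairs."""
    readings = []
    for vdd in (2.7, 3.3, 3.6):
        supply.write(f"VOLT {vdd}")            # set output voltage
        supply.write("OUTP ON")
        readings.append((vdd, float(meter.query("MEAS:VOLT:DC?"))))
    supply.write("OUTP OFF")
    return readings

# Repeatability check: run the same sweep twice and compare run-to-run.
print(sweep())
print(sweep())
```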
Item Automated bench setup to characterize and test integrated circuits efficiently (Texas Tech University, 2005-12)
Ledbetter, Christopher Michael; Parten, Michael E.; Cox, Ronald H.
Discusses the importance of testing integrated circuits and different testing techniques. The creation of the automated bench test setup is shown and the process used to create one is discussed. A specific part is picked to demonstrate an automated bench tester. The test setup, the generation of the test signals, the programs required to automate the equipment, and the results are presented. LabVIEW, which excels at collecting, processing, and sending data, is used to control and automate the test setup.

Item Considering the disparate impact of test-based retention policy on low-income, minority, and English language learner children in Texas (2011-12)
Patrick, Ertha Smith; Vasquez Heilig, Julian; Butler, Shari; Reddick, Richard; Rhodes, Lodis; Reyes, Pedro
This dissertation evaluates the disparate impact of test-based retention (TBR) policy on historically disadvantaged student groups in the State of Texas and identifies school characteristics that statistically predict retention and may contribute to disparate impact. The research literature on TBR is limited, as most grade retention research precedes the increased use of TBR policy across the United States. Descriptive analysis showed considerable increases in retention rates for low-income, African American, Latino, and English Language Learner (ELL) children, compared to their less-disadvantaged counterparts, after TBR was implemented. Multiple regression analysis found that schools with higher percentages of low-income students, ELL students, and beginning teachers, and with higher percentages of low-income students in their school district, had higher retention rates, while schools with higher percentages of White students, White teachers, and Latino teachers had lower retention rates. Additionally, school retention rates were found to vary according to accountability rating.

Item Design and fabrication of an instrument to test the mechanical behavior of aluminum alloy sheets during high-temperature gas-pressure blow-forming (2008-05)
Vanegas Moller, Ricardo; Taleff, Eric M.
Hydraulic bulge forming has been used as a method to determine the properties of sheet metal alloys in biaxial stretching at room temperature. Gas-pressure bulge forming alleviates the problems of using hydraulic fluids when tests are conducted at high temperatures (above 200°C). Testing a sheet metal alloy by gas-pressure blow-forming (GPBF) under controlled temperature and pressure conditions requires an accurate and reliable mechanism that delivers repeatable results; the purpose of this work was to design and implement such an instrument. The instrument delivers real-time data on material displacement during forming, which can be used to better understand material plastic response and formability. Four subsystems within the mechanism must interact while retaining enough independence for analysis and assembly. The combined subsystems produced a GPBF apparatus capable of forming sheet aluminum alloy AA5182, 1.5 mm thick, into a dome with a height nearly equal to its radius under a constant gas pressure as low as 40 psi at 450°C. This apparatus produced, for the first time, in-situ data for dome peak displacement during gas-pressure bulge forming of AA5182 sheet at 450°C.
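As an illustration of what the in-situ dome-height data from such a GPBF instrument supports, here is a minimal sketch using the standard thin-shell bulge-test relations. The die radius is a hypothetical value, the formulas are textbook approximations rather than the report's analysis, and sheet thinning during forming is ignored:

```python
# Minimal sketch, not the report's analysis: estimating dome curvature and
# membrane stress from recorded dome height via thin-shell bulge relations.
def dome_radius(a, h):
    """Radius of curvature of a spherical cap of height h over a die of
    radius a: rho = (a^2 + h^2) / (2h)."""
    return (a**2 + h**2) / (2.0 * h)

def membrane_stress(p, rho, t):
    """Equibiaxial membrane stress in a thin spherical shell: p*rho/(2t)."""
    return p * rho / (2.0 * t)

a = 0.050            # die radius, m (hypothetical)
t = 0.0015           # AA5182 sheet thickness, 1.5 mm (from the abstract)
p = 40 * 6894.76     # 40 psi (from the abstract) converted to Pa
for h in (0.010, 0.025, 0.050):          # dome heights up to h ~ a
    rho = dome_radius(a, h)
    sigma = membrane_stress(p, rho, t)
    print(f"h = {h*1e3:4.0f} mm: rho = {rho*1e3:6.1f} mm, "
          f"sigma ~ {sigma/1e6:5.1f} MPa")
```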
Item A deterministic, nonintrusive utility for efficiently testing transient state restoration on Android applications (2015-08)
Delgado, Inaqui Raynaud; Khurshid, Sarfraz; Aziz, Adnan
When a user interacts with an Android application, non-persistent data, or transient state, is created. Transient state may capture user input, data from resources on a device, or data retrieved from a network. Typically, Android applications retain their transient state throughout their entire life cycle; in some cases, however, Android may destroy application components, or the application itself, resulting in transient state loss. If the transient state of an application is not properly saved and restored, the result may be unexpected behavior or application crashes. Existing methods for testing transient state restoration on Android require Android developer tools, advanced developer options, or manual procedures on the device; there are no simple, efficient on-device options. This report describes the Android events that trigger transient state loss and how improper transient state restoration can result in unexpected behavior or crashes. It provides an overview of existing transient state testing options and their limitations, describes the design and implementation of the Transient State Restoration Testing Utility (TSRTU), highlighting its advantages over existing options, and illustrates how TSRTU is used to test an application. Lastly, it shows how TSRTU, a deterministic, nonintrusive utility, can efficiently test transient state restoration on any Android application in seconds with a single touchscreen event.

Item Development of a bridge fault extractor tool (Texas A&M University, 2005-02-17)
Bhat, Nandan D.
Bridge fault extractors are tools that analyze chip layouts and produce a realistic list of bridging faults within a chip. FedEx, previously developed at Texas A&M University, extracts all two-node intralayer bridges of any given chip layout and can optionally extract all two-node interlayer bridges. The goal of this thesis was to further develop this tool. The primary goal was to speed it up so that it can handle large industrial designs in a reasonable amount of time. A second goal was to develop a graphical user interface (GUI) that aids in visualizing the bridge faults across the chip. The final aim was to analyze FedEx output to understand the nature of the defects, such as the variation of critical area (the area where the presence of a defect can cause a fault) as a function of layer and of defect size.
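The core geometric test in bridge-fault extraction can be illustrated compactly. The sketch below, with hypothetical net names and rectangle coordinates, flags two same-layer wires as a candidate bridge when the gap between them is smaller than a defect diameter; FedEx's actual extraction is considerably more sophisticated:

```python
# Hedged sketch of the geometric core of bridge-fault extraction: two
# same-layer wires are a candidate bridge if a defect of the given diameter
# can span the gap between them. Net names and coordinates are hypothetical.
def gap(r1, r2):
    """Minimum separation between two axis-aligned rectangles given as
    (x1, y1, x2, y2); 0 if they touch or overlap."""
    dx = max(r1[0] - r2[2], r2[0] - r1[2], 0.0)
    dy = max(r1[1] - r2[3], r2[1] - r1[3], 0.0)
    return (dx**2 + dy**2) ** 0.5

wires = {"netA": (0.0, 0.0, 10.0, 1.0),    # layout rectangles, same layer
         "netB": (0.0, 1.4, 10.0, 2.4),
         "netC": (0.0, 9.0, 10.0, 10.0)}

defect_diameter = 0.6
bridges = [(a, b) for a in wires for b in wires
           if a < b and gap(wires[a], wires[b]) < defect_diameter]
print(bridges)    # only the netA/netB pair is close enough to bridge
```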
Item Does team-based testing promote individual learning? (2011-05)
Walker, Joshua David; Robinson, Daniel H.; Schallert, Diane; Svinicki, Marilla; Borich, Gary; Muir-Broaddus, Jacqueline
Team-based testing gives students a chance to earn additional points on individual unit tests by immediately re-taking the test as a team competing against other teams. This instructional approach has enjoyed widening implementation and impressive anecdotal support, but there remains a dearth of empirical studies evaluating its prescribed processes and promoted outcomes. Although the posited effectiveness and appeal of team-based testing seem consistent with the benefits of test-enhanced learning and collaborative learning in general, several limitations are readily apparent. Namely, the current format of the individual and team readiness assurance tests is expressly multiple-choice. Though this question type has advantages (e.g., ease of administering and grading), its long-term cognitive disadvantage relative to short-answer questions is well documented. Furthermore, it is not clear whether the proposed gain in learning through this format is attributable to the group effect, be it social or cognitive, or simply to repeated exposure to the test items. This study therefore measured the effects of initial test question Format (short-answer vs. multiple-choice), Mode (individual vs. group), and Exposure (once vs. twice) on four delayed measures of learning: Old multiple-choice items (ones students had initially been tested over), Old short-answer items, New multiple-choice items, and New short-answer items. Two weeks after watching a video-recorded lecture, 208 college students took a thirty-item test comprising both the old and new items in multiple-choice and short-answer formats. Results revealed that (1) taking an initial test twice is better than once when the delayed test has old short-answer items or new multiple-choice items, (2) taking an initial short-answer test is better than multiple-choice when the delayed test has old multiple-choice, old short-answer, or new multiple-choice items, and (3) taking an initial team test is no different from taking an individual test when it comes to long-term learning. Particularly noteworthy from these results is that (a) the effects of short-answer tests and of taking tests twice are not present within the Team conditions, and (b) taking a multiple-choice test twice is as effective as taking a short-answer test once. Implications are discussed in light of learning theory and instructional practice.

Item Dynamic Pressure Improvements to Closed-Circuit Wind Tunnels with Flow Quality Analysis (2015-03-31)
Herring, Alexander
Testing aerodynamic loads on a sub-scale model has been the most accurate way to predict full-scale loads for many years. Even with modern advances in computing technology and computational fluid dynamics (CFD), each computational model must be calibrated against a known standard, usually established through wind tunnel testing. Because wind tunnel testing is usually performed on sub-scale models, flow speeds that span the flight envelope are commonly tested. The Texas A&M Engineering Experiment Station Low-Speed Wind Tunnel (LSWT) was traditionally limited by available power to a dynamic pressure of 120 psf. The addition of a higher-power motor, the construction of a new, smaller test section, diffuser liners to prevent flow separation, and added structure to withstand higher static pressures allow dynamic pressures up to 240 psf, nominally Mach 0.4. With proper design and construction, flow quality can be maintained to less than 1% deviation from the mean flow velocity. Additionally, flow speed for a given test section geometry and power draw can be accurately predicted.
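The quoted figures (120 and 240 psf, "nominally Mach 0.4") can be checked with the compressible dynamic-pressure relation q = 0.5 * gamma * p * M^2. A minimal sketch, assuming sea-level static conditions:

```python
# Checking the abstract's figures with the compressible dynamic-pressure
# relation q = 0.5 * gamma * p * M**2, assuming sea-level static pressure.
import math

PSF_TO_PA = 47.8803                      # pounds per square foot -> pascal
gamma, p_static = 1.4, 101325.0          # air; sea-level static pressure, Pa

def mach_from_q(q_psf):
    return math.sqrt(2.0 * q_psf * PSF_TO_PA / (gamma * p_static))

for q in (120, 240):                     # old and upgraded LSWT limits
    print(f"q = {q} psf -> Mach {mach_from_q(q):.2f}")
# prints Mach 0.28 and Mach 0.40, consistent with "nominally Mach 0.4"
```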
Item An evaluation of item difficulty and person ability estimation using the multilevel measurement model with short tests and small sample sizes (2011-05)
Brune, Kelly Diane; Beretvas, Susan Natasha; Dodd, Barbara G.; Pituch, Keenan A.; Powers, Daniel A.; Zimmaro, Dawn M.
Recently, researchers have reformulated Item Response Theory (IRT) models as multilevel models to evaluate clustered data appropriately. Using a multilevel model to obtain item difficulty and person ability parameter estimates that correspond directly to IRT model parameters is often referred to as multilevel measurement modeling. Unlike conventional IRT models, multilevel measurement models (MMMs) can accommodate predictor variables, model clustered data appropriately, and be estimated using non-specialized computer software, including SAS. For example, a three-level model can represent repeated measures (level one) of individuals (level two) who are clustered within schools (level three). The minimum sample size and number of test items that permit reasonable recovery of one-parameter logistic (1-PL) IRT model parameters have not been examined for either the two- or three-level MMM. Researchers (Wright and Stone, 1979; Lord, 1983; Hambleton and Cook, 1983) have found that sample sizes under 200 and fewer than 20 items per test result in poor model fit and poor parameter recovery for dichotomous 1-PL IRT models, even with data that meet model assumptions. This simulation study tested the performance of the two-level and three-level MMM under conditions that crossed three sample sizes (100, 200, and 400), three test lengths (5, 10, and 20 items), three level-3 cluster sizes (10, 20, and 50), and two generated intraclass correlations (.05 and .15). The study demonstrated that the two- and three-level MMMs lead to somewhat divergent results for item difficulty and person-level ability estimates: mean relative item difficulty bias was lower for the three-level model than the two-level model, while mean relative parameter bias for person-level ability estimates was smaller for the two-level model. There was no difference between the two- and three-level MMMs in the school-level ability estimates. Modeling clustered data appropriately, using a minimum total sample size of 100 to accurately estimate level-2 residuals and of 400 to accurately estimate level-3 residuals, and using at least 20 items will help ensure valid statistical test results.
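The data-generating step of such a 1-PL simulation is compact enough to sketch. The following Python/NumPy fragment (sample sizes and seed are illustrative, not the study's design) draws Rasch-model responses with P(X=1) = logistic(theta - b):

```python
# Hedged sketch of the data-generating step in a 1-PL (Rasch) simulation:
# P(X=1) = logistic(theta - b). Sizes and seed are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_items = 400, 20
theta = rng.normal(0.0, 1.0, size=n_persons)   # person abilities
b = rng.normal(0.0, 1.0, size=n_items)         # item difficulties

p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
responses = rng.binomial(1, p)                 # n_persons x n_items 0/1 data

# Sanity check: easier items (lower b) should show higher proportion correct.
print(np.round(b, 2))
print(responses.mean(axis=0))
```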
Item Experimental Assessment of Water Based Drilling Fluids in High Pressure and High Temperature Conditions (2012-10-19)
Ravi, Ashwin
Proper selection of drilling fluids plays a major role in the efficient completion of any drilling operation. With the increasing number of ultra-deep offshore wells being drilled and ever more stringent environmental and safety regulations coming into effect, it becomes necessary to examine and understand the behavior of water-based drilling fluids, which are cheaper and less polluting than their oil-based counterparts, under extreme temperature and pressure conditions. In most of the existing literature, the testing procedure is simple: increase the temperature of the fluid in steps and record rheological properties at each step. A major drawback of this procedure is that it does not represent the continuous temperature change that occurs in a drilling fluid as it is circulated through the well bore. To better understand fluid behavior under such temperature variation, a continuous test procedure was devised in which the temperature of the drilling fluid was continuously increased to a pre-determined maximum value while one rheological parameter was monitored; the results of such tests may then be used to plan fluid treatment schedules. The experiments, conducted on a Chandler 7600 XHPHT viscometer, indicate specific temperature ranges above which the properties of the drilling fluid deteriorate. Different fluid compositions and drilling fluids in field use were tested, and the results are discussed in detail.

Item Fast error detection with coverage guarantees for concurrent software (2013-05)
Coons, Katherine Elizabeth; McKinley, Kathryn S.
Concurrency errors are notoriously difficult to debug because they may occur only under unexpected thread interleavings that are difficult to identify and reproduce. These errors are increasingly important as recent hardware trends compel developers to write more concurrent software and to provide more concurrent abstractions. This thesis presents algorithms that dynamically and systematically explore a program's thread interleavings to manifest concurrency bugs quickly and reproducibly, and to provide precise incremental coverage guarantees. Dynamic concurrency testing tools should provide (1) fast response: bugs should manifest quickly if they exist; (2) reproducibility: bugs should be easy to reproduce; and (3) coverage: precise correctness guarantees when no bugs manifest. In practice, most tools provide either fast response or coverage, but not both. These goals conflict because a program's thread interleavings exhibit exponential state-space explosion, which inhibits fast response. Two approaches from prior work alleviate state-space explosion: partial-order reduction provides full coverage by exploring only one interleaving of independent transitions, while bounded search provides bounded coverage by enumerating only interleavings that do not exceed a bound. Bounded search can additionally provide guarantees for cyclic state spaces, for which dynamic partial-order reduction provides none; without partial-order reduction, however, bounded search wastes most of its time exploring executions that reorder only independent transitions. Fast response with coverage guarantees requires both approaches, but prior work failed to combine them soundly. We combine bounded search with partial-order reduction and extensively analyze the space of dynamic, bounded partial-order reduction strategies. First, we prioritize with a best-first search and show that heuristics that combine these approaches find bugs quickly. Second, we restrict partial-order reduction to combine the approaches while maintaining bounded coverage; we specialize this approach for several bound functions, prove that the resulting algorithms guarantee bounded coverage, and leverage dynamic information to further reduce the state space. Finally, we bound the partial order on a program's transitions, rather than the total order on those transitions, to combine the approaches without sacrificing partial-order reduction. This algorithm provides fast response, incremental coverage guarantees, and reproducibility. We manifest bugs an order of magnitude more quickly than previous approaches and guarantee incremental coverage in minutes or hours rather than weeks, helping developers find and reproduce concurrency errors. This thesis makes bounded stateless model checking for concurrent programs substantially more efficient and practical.
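One ingredient of the thesis, bounded search, is easy to illustrate in isolation. The sketch below enumerates all interleavings of two three-step threads and keeps only the schedules within a preemption bound; it does not attempt the thesis's combination with partial-order reduction:

```python
# Illustration of bounded search alone (not the thesis's combined
# algorithm): enumerate thread interleavings, keep those within a
# preemption bound.
def interleavings(a, b):
    """Yield every merge of step sequences a and b that preserves each
    thread's internal order."""
    if not a or not b:
        yield a + b
        return
    for rest in interleavings(a[1:], b):
        yield a[:1] + rest
    for rest in interleavings(a, b[1:]):
        yield b[:1] + rest

def preemptions(schedule, a, b):
    """Count switches away from a thread that still had steps to run."""
    count = 0
    remaining = {"A": len(a), "B": len(b)}
    for prev, cur in zip(schedule, schedule[1:]):
        remaining[prev[0]] -= 1
        if prev[0] != cur[0] and remaining[prev[0]] > 0:
            count += 1
    return count

A = [("A", i) for i in range(3)]         # thread A's steps
B = [("B", i) for i in range(3)]         # thread B's steps
bound = 1
total = list(interleavings(A, B))
kept = [s for s in total if preemptions(s, A, B) <= bound]
print(f"{len(total)} interleavings total, {len(kept)} within bound {bound}")
```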
Item A fracture mechanics approach to accelerated life testing for cathodic delamination at polymer/metal interfaces (2013-05)
Mauchien, Thomas Kevin; Liechti, K. M.
This work presents a fracture mechanics analysis of the cathodic delamination problem for polyurethane/titanium and polyurea/steel interfaces. The nonlinear behavior of both polymers was investigated. The recent Marlow model was used to define the strain energy function of the polymers, and viscoelastic effects in the polyurea were also studied by associating the Marlow model with a nine-term Prony series. This model represented experimental data relatively well over a wide range of strain rates in both tension and compression. The driving force for delamination, the strain energy release rate G, is presented for both interfaces, and cathodic delamination data at several temperatures are presented as crack growth rate as a function of crack driving force. The approach recognizes that both temperature and stress can be used as accelerated life testing parameters.

Item Framework for testing Java concurrency (2010-12)
Heidt, David Patrick; Garg, Vijay K. (Vijay Kumar), 1963-; Krasner, Herb
Concurrent programming has become ubiquitous in application development, requiring most production-quality systems to deal with at least some degree of multi-threaded execution, and an increasing level of maturity is developing around the impact of concurrency on design and testing processes. Much of this knowledge focuses on the functional aspects of design and execution, with success typically measured by program correctness. However, a gap exists in the research to date around the process for concurrent performance testing. While many companies acknowledge that performance is a major source of complaints in production environments, performance testing historically receives low priority and is often little more than an extension of functional testing. Possibly the most widely discussed and best understood implementation language today, in terms of multi-threaded programming, is Java. This report outlines a standard framework for concurrent performance testing targeted toward Java-based applications. To vet the framework, we execute a series of practical concurrency tests that address some of the most common aspects of concurrent programming in Java, with a particular focus on the Java Concurrency package. As a result, this report presents a portable, extensible framework that designers can use to evaluate the range of concurrency options available in Java within their particular environment. Additionally, it provides specific insight into the performance of these options in a typical run-time environment, including particular attention to the comparison of traditional lock-based approaches to non-blocking algorithms.
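The shape of such a performance-testing harness, timing one workload against interchangeable implementations, can be sketched as follows. The report's framework targets Java and the java.util.concurrent package; this Python analogue is only a structural illustration and says nothing about Java lock-based versus non-blocking costs:

```python
# Structural sketch of a concurrent performance-testing harness: run the
# same multi-threaded workload against interchangeable implementations
# and time it. (A Python stand-in; the report's framework is for Java.)
import threading, time
from queue import Queue
from collections import deque

def run_workload(make_struct, push, n_threads=4, ops=20_000):
    struct = make_struct()
    def worker():
        for i in range(ops):
            push(struct, i)
    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

candidates = {
    "queue.Queue (lock-based)": (Queue, lambda q, x: q.put(x)),
    "collections.deque (atomic append)": (deque, lambda d, x: d.append(x)),
}
for name, (make, push) in candidates.items():
    print(f"{name}: {run_workload(make, push):.3f} s")
```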
Item MARKOV source based test length optimized scan built-in-self-test architecture (Texas Tech University, 2008-08)
Farooqi, Aftab A.
This dissertation presents several algorithmic and hardware design improvements to recently proposed Markov-source scan built-in-self-test architectures. The first improvement is the use of the total probability rule and on-chip quantized probabilities to compute the sampling probability of the deterministic test cubes; test cubes with low sampling probability are excluded from the final test set used to compute the transition probabilities. The second improvement is a new technique called dynamic transition selection, which combines transition inversion and transition fixing to produce test sequences. The third improvement is a new hardware design of the Markov source. An Automatic Test Pattern Generator (ATPG) and the academic fault simulator HOPE are used for generating deterministic test cubes and for fault simulation, respectively. Espresso is used for logic minimization, and the Sequential Circuit Synthesis tool (SIS) maps the synthesized design into a generic NAND-NOR library. Hardware cost is measured with the Gate Equivalent (GE) count method [18], which reflects a static Complementary Metal Oxide Semiconductor (CMOS) technology: 0.5 GE for an inverter or a transmission multiplexer, 0.5n GEs for an n-input NAND or NOR, and 2.5(n-1) GEs for an n-input exclusive-or (XOR). The five larger ISCAS89 (International Symposium on Circuits and Systems 1989) benchmark circuits are tested using the new test pattern generator, which achieves complete coverage of the stuck-at faults at significantly reduced test length, with a modest increase in the gate count.

Item Modest: Modeling, Debugging, and Testing distributed programs (2016-12)
Rosales, David Andrew; Garg, Vijay K. (Vijay Kumar), 1963-
Modest (Modeling, Debugging, and Testing) is a graphical modeling and testing environment for simulating the execution of distributed systems. Its objective is to serve as a learning tool and, more importantly, to aid in the design and implementation of distributed algorithms. Because Modest builds the simulation environment itself, only the algorithm is required from the user to perform testing. Logging and message animations help users understand what events have occurred. Modest can replicate real-life scenarios by injecting network latency, network failures, and server failures. With quickly customizable environment configurations and options, custom algorithm simulations can be started in minimal time. Distributed computing can be complicated, and Modest helps simplify it with a modern user interface design.
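At the heart of any such simulator is a discrete-event loop that delivers messages with injected latency and failures. The following toy sketch shows that core idea; the names and API are hypothetical and are not Modest's actual interface:

```python
# Toy sketch of a discrete-event core for simulating distributed message
# delivery with injected latency and random drops. Hypothetical API, not
# Modest's actual interface.
import heapq, random

random.seed(1)
events = []                               # heap of (deliver_time, dst, msg)
now = 0.0

def send(dst, msg, latency=(0.01, 0.25), drop_prob=0.1):
    """Queue a message for delivery with random latency; occasionally drop
    it entirely to mimic an injected network failure."""
    if random.random() < drop_prob:
        print(f"        [net] dropped {msg!r} -> {dst}")
        return
    heapq.heappush(events, (now + random.uniform(*latency), dst, msg))

def server(name, msg):                    # stand-in for the user's algorithm
    print(f"{now:6.3f}s  {name} received {msg!r}")

send("s1", "ping"); send("s2", "ping"); send("s1", "ping")
while events:
    now, dst, msg = heapq.heappop(events)
    server(dst, msg)
```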
Item Multi-generational test plan generation and execution in advanced mixed signal controllers (2011-05)
Eravelli, Shruti; Gale, Richard O.; Bayne, Stephen B.
Most integrated circuits are evolutionary. This is especially true in the realm of system-on-a-chip (SoC) devices that combine multiple functions monolithically. Electronic systems that begin life as an entire printed circuit board often see smaller and smaller chip counts as designs mature. In some cases, functions are combined into multichip modules that co-locate separate integrated circuits in a single package to provide additional signal integrity and achieve cost reductions; this process continues through stages that culminate in the monolithic integration of these separate chips. The requirement to differentiate similar functions for different customers and applications results in families of SoCs with similar but not identical capabilities. As parametric and functional testing become ever larger contributors to total cost, avoiding duplication of effort is a key factor in maintaining competitive position and market share, and the strategies for achieving economies of scale by recognizing the similarities between family members, while still providing differentiation where required, are currently a subject of great interest. This work traces the development of test capability in such a family through several generations. It describes, through several design iterations, an approach that uses a motherboard to take advantage of the similarities between family members, combined with specialized hardware realized in a series of daughter boards as well as differentiated software. Debugging of both hardware and software, together with ways to streamline testing and further reduce test time and cost, is detailed. The result is a cost-effective approach to advanced device testing that does not compromise performance and provides acceptable levels of fault coverage.

Item Ram pressure correlations for aspirated cylinders (Texas Tech University, 2004-05)
Scholz, Zachary James
Design of automobile cooling systems involves trade-offs between sizing grille openings to provide adequate cooling airflow and the tendency to reduce grille opening size to decrease vehicle cooling drag and produce aesthetically pleasing designs. Air that enters the cooling system of an automobile is driven by two major sources: the freestream dynamic pressure resulting from the forward motion of the vehicle and the internal vacuum created by the underhood fan. The flow fields associated with both sources must be considered when assessing the cooling performance of a new automobile design. The current investigation focuses on characterizing the external, dynamic-pressure-induced flow through a parameter known as the ram coefficient. The investigation utilized an aspirated cylinder in cross-flow as an idealized representation of an automobile front end with grille openings. The pressure distribution on the upstream side of the cylinder model includes a stagnation point and a significant surface pressure gradient similar to those of an actual automobile front-end fascia. Openings of various sizes machined into the side of the cylinder model simulated the grille openings, and a flexible hose connecting one end of the cylinder to a shop vacuum simulated the cooling airflow induced by a radiator fan. The primary advantage of the cylinder model is a dramatic reduction in the number of experimental influences on the ram coefficient: eliminating the various underhood components reduces the investigation to its most basic elements, yielding accurate, repeatable results. The primary result is that the cylinder does provide a useful representation of an automobile front end, verifying the general trends seen in previous full-scale model tests. Additionally, it was found that ram coefficients for single openings are determined by opening size and location relative to the external surface pressure distribution, and that ram coefficients for combinations of openings can be predicted from knowledge of the performance characteristics of the individual openings.
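The finding that ram coefficients are governed by opening location in the surface-pressure field can be connected to the classical ideal-flow pressure distribution on a cylinder, Cp(theta) = 1 - 4*sin(theta)^2, with theta measured from the stagnation point. A minimal sketch, assuming ideal unseparated flow (the report's measured coefficients will differ):

```python
# Classical potential-flow pressure coefficient on a circular cylinder,
# Cp = 1 - 4*sin(theta)**2, theta from the stagnation point. An idealized
# illustration of why opening location matters; not the report's data.
import math

def cp_cylinder(theta_deg):
    return 1.0 - 4.0 * math.sin(math.radians(theta_deg)) ** 2

for theta in (0, 15, 30, 45, 60):
    print(f"theta = {theta:2d} deg: Cp = {cp_cylinder(theta):+.2f}")
# Openings near theta = 0 (stagnation) see the largest positive pressure;
# beyond about 30 degrees the ideal-flow Cp turns negative.
```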
Item Systematic techniques for efficiently checking Software Product Lines (2013-12)
Kim, Chang Hwan Peter; Batory, Don S., 1953-; Khurshid, Sarfraz
A Software Product Line (SPL) is a family of related programs, each of which is defined by a combination of features. By developing related programs together, an SPL simultaneously reduces programming effort and satisfies multiple sets of requirements. Testing an SPL efficiently is challenging because a property must be checked for all the programs in the SPL, the number of which can be exponential in the number of features. This dissertation presents a suite of complementary static and dynamic techniques for efficient testing and runtime monitoring of SPLs, which can be divided into two categories. The first prunes programs, termed configurations, that are irrelevant to the property being tested. More specifically, for a given test, a static analysis identifies the features that can influence the test outcome, so that the test needs to be run only on programs that include these features. A dynamic-analysis counterpart also eliminates configurations that do not have to be tested, but does so by checking a simpler property, and can be faster and more scalable. In addition, for runtime monitoring, a static analysis identifies the configurations that can violate a safety property, so that only those configurations need to be monitored. When no configurations can be pruned, either by design of the test or due to ineffectiveness of the program analyses, the second category exploits runtime similarity between configurations, which arises from design similarity between configurations of a product line. In particular, shared execution runs all the configurations together, executing bytecode instructions common to the configurations just once. Deferred execution improves on shared execution by allowing multiple memory locations to be treated as a single memory location, which can increase the amount of sharing for object-oriented programs and for programs using arrays. The techniques have been evaluated, and the results demonstrate that they can be effective, advancing the idea that despite the feature combinatorics of an SPL, its structure can be exploited by automated analyses to make testing more efficient.
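The configuration-pruning idea lends itself to a small sketch: if analysis determines that a test depends only on a subset of the features, it suffices to run the test once per combination of those features rather than once per full configuration. The feature names below are hypothetical:

```python
# Hedged sketch of configuration pruning: run a test once per combination
# of the features the analysis found relevant, instead of once per full
# configuration. Feature names are hypothetical.
from itertools import product

features = ["CACHE", "LOGGING", "THEME", "TELEMETRY", "BETA_UI"]
relevant = {"CACHE", "LOGGING"}              # e.g. from the static analysis
relevant_order = [f for f in features if f in relevant]

all_configs = list(product([False, True], repeat=len(features)))
pruned = sorted({tuple(on for f, on in zip(features, cfg) if f in relevant)
                 for cfg in all_configs})

print(f"{len(all_configs)} full configurations -> {len(pruned)} test runs")
for combo in pruned:
    print(dict(zip(relevant_order, combo)))
```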