Browsing by Subject "Databases"
Now showing 1 - 5 of 5
Item: Application of relational database principles for rating bituminous coarse aggregates with respect to frictional performance (Texas Tech University, 2000-08). Rachakatla, Prasanna.

The design approach commonly employed to ensure satisfactory skid resistance of bituminous pavement surfaces is to control the quality of the coarse aggregates used in pavement construction. Traditionally, state and federal highway agencies have relied on the results of laboratory tests for this purpose. The laboratory tests most commonly used are the Polish Value Test, the Acid Insoluble Residue Test, and Petrographic Analysis. The findings from many research studies indicate that the reliability achievable with any single laboratory test is poor. In the current research study at Texas Tech, a comprehensive laboratory and field test program was undertaken with the objective of developing an improved procedure for predicting the field skid resistance performance of bituminous aggregates. The field test program included monitoring 55 pavement test sections, located in various climatic zones within the state of Texas, over a 3-year study period. As part of this monitoring program, skid resistance of the pavement at 64 km/h, British pendulum number, and pavement macrotexture were measured. The laboratory test program consisted of complete characterization of the pavement coarse aggregates using the following test methods: Polish Value Test, Magnesium Sulfate Test, LA Abrasion Test, Acid Insoluble Residue Test, and Petrographic Analysis. The skid resistance data collected over the 3-year study period were then used to develop a "Skid Performance Rating" for each pavement section. Subsequently, appropriate statistical analyses were conducted to develop regression models relating the skid performance rating to various laboratory test parameters.
The findings revealed that a better correlation is obtained when aggregates are categorized into sub-groups of similar mineralogical makeup. Accordingly, aggregates were categorized based on percent carbonate minerals and Acid Insoluble Residue, and statistical regression models were then developed for each aggregate category. As an alternative, historical data on the skid resistance of pavements constructed with a given aggregate can be used to evaluate that aggregate. TxDOT uses this alternative procedure to overcome the shortcomings of relying on laboratory test data alone. Highway agencies may use either of these procedures to evaluate the performance of an aggregate source for use in constructing pavement surfaces. A combination of the two approaches may predict field skid resistance on pavement surface courses with a greater degree of reliability. However, an approach that uses both methods involves a large amount of laboratory and field test data, which a user may find extremely difficult and cumbersome to maintain and apply reliably. To achieve faster, reliable prediction of aggregate field skid performance, an application tool was developed. This application, 'SKIDRATE', was specifically designed to address the problem of predicting skid resistance on Hot Mix Asphalt Concrete (HMAC) pavement surfaces. SKIDRATE combines Relational Database Management System (RDBMS) principles and statistical regression techniques to evaluate aggregate sources to be used in the construction of HMAC pavement surfaces. An Entity-Relationship data model was used to analyze and design the RDBMS. The important entities and the associations among them were identified, along with the cardinalities of those associations. Primary and foreign keys were determined for the relations in the RDBMS.
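A relational design of this kind, with one-to-many associations keyed by primary and foreign keys, might be sketched as follows. This is a minimal illustration only; all table and column names are hypothetical, since the thesis's actual SKIDRATE schema is not given here.

```python
import sqlite3

# In-memory database: a hypothetical sketch of an aggregate-rating schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE aggregate_source (
    source_id   INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    mineralogy  TEXT            -- e.g. a percent-carbonate category
);

-- One source has many laboratory test results (1:N association).
CREATE TABLE lab_test_result (
    test_id     INTEGER PRIMARY KEY,
    source_id   INTEGER NOT NULL REFERENCES aggregate_source(source_id),
    test_name   TEXT NOT NULL,  -- e.g. 'Polish Value', 'LA Abrasion'
    value       REAL NOT NULL
);

-- Field skid measurements for sections built from a source.
CREATE TABLE field_skid_test (
    section_id  INTEGER,
    test_date   TEXT,
    source_id   INTEGER NOT NULL REFERENCES aggregate_source(source_id),
    skid_number REAL,
    PRIMARY KEY (section_id, test_date)
);
""")

# Joining laboratory and field data for one source.
conn.execute("INSERT INTO aggregate_source VALUES (1, 'Quarry A', 'carbonate')")
conn.execute("INSERT INTO lab_test_result VALUES (1, 1, 'Polish Value', 32.0)")
conn.execute("INSERT INTO field_skid_test VALUES (101, '2000-06-01', 1, 45.2)")
row = conn.execute("""
    SELECT s.name, l.value, f.skid_number
    FROM aggregate_source s
    JOIN lab_test_result l ON l.source_id = s.source_id
    JOIN field_skid_test f ON f.source_id = s.source_id
""").fetchone()
print(row)  # ('Quarry A', 32.0, 45.2)
```

The foreign keys tie each test result back to its aggregate source, so laboratory and field data for a source can be retrieved with a single join.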
The relations were normalized to 3NF in most situations. The application enables the storage of data about the aggregate source, the results of laboratory tests, and the details of field skid testing. Users of the application can retrieve the required information on any given aggregate source and process the data using the results of a comprehensive statistical regression analysis that is integrated within the application. This integration of database technology and statistical regression analysis facilitates fast, easy, and reliable interpretation of the field and laboratory test results. The application can be used as a convenient tool by engineers in transportation departments to evaluate the suitability of an aggregate for use in pavement surface courses.

Item: Integrating programming languages and databases via program analysis and language design (2009-12). Wiedermann, Benjamin Alan; Cook, William Randall; Batory, Don; Blackburn, Stephen M.; Lin, Calvin; McKinley, Kathryn S.

Researchers and practitioners alike have long sought to integrate programming languages and databases. Today's integration solutions focus on the data types of the two domains, but today's programs lack transparency. A transparently persistent program operates over all objects in a uniform manner, regardless of whether those objects reside in memory or in a database. Transparency increases modularity and lowers the barrier to adoption in industry. Unfortunately, fully transparent programs perform so poorly that no one writes them. The goal of this dissertation is to increase the performance of these programs enough to make transparent persistence a viable programming paradigm. This dissertation contributes two novel techniques that integrate programming languages and databases. Our first contribution, called query extraction, is based purely on program analysis. Query extraction analyzes a transparent, object-oriented program that retrieves and filters collections of objects.
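The flavor of such an implicit query, and the explicit query it could be rewritten into, can be sketched with a toy example. The names and data below are invented for illustration; the dissertation's analysis targets object-oriented programs, not this Python snippet.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (name TEXT, salary REAL)")
conn.executemany("INSERT INTO employee VALUES (?, ?)",
                 [("Ada", 120.0), ("Bob", 80.0), ("Cid", 95.0)])

def high_earners_transparent(rows):
    # Transparent style: iterate and filter in application code,
    # oblivious to whether the rows live in memory or in a database.
    return [name for (name, salary) in rows if salary > 90.0]

def high_earners_extracted(conn):
    # The same logic after the filter has been "extracted" into an
    # explicit query, evaluated by the database instead of the client.
    return [name for (name,) in conn.execute(
        "SELECT name FROM employee WHERE salary > 90.0")]

all_rows = list(conn.execute("SELECT name, salary FROM employee"))
assert high_earners_transparent(all_rows) == high_earners_extracted(conn)
```

The transparent version fetches every row and filters on the client; the extracted version ships the predicate to the database, which is the performance gap such a transformation aims to close.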
Some of these objects may be persistent, in which case the program contains implicit queries of persistent data. Our interprocedural program analysis extracts these queries from the program, translates them into explicit queries, and transforms the transparent program into an equivalent one that contains the explicit queries. Query extraction enables programmers to write programs in a familiar, modular style and to rely on the compiler to transform their program into one that performs well. Our second contribution, called RBI-DB+, is an extension of a new programming language construct called a batch block. A batch block provides a syntactic barrier around transparent code. It also provides a latency guarantee: if the batch block compiles, then the code that appears in it requires only one client-server communication round trip. Researchers have previously proposed batch blocks for databases; however, those batch blocks cannot be modularized or composed, and they do not permit programmers to modify persistent data. We extend database batch blocks to address these concerns and formalize the results. Today's technologies integrate the data types of programming languages and databases, but they discourage programmers from using procedural abstraction. Our contributions restore the use of procedural abstraction in enterprise applications without sacrificing performance. We argue that industry should combine our contributions with data-type integration; the result would be a robust, practical integration of programming languages and databases.

Item: No relation: the mixed blessings of non-relational databases (2009-08). Varley, Ian Thomas; Aziz, Adnan; Miranker, Daniel P.

This paper investigates a new class of database systems loosely referred to as "non-relational databases," which offer a subset of traditional relational database functionality in exchange for improved scalability, performance, and/or simplicity.
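The trade-off in conceptual modeling can be shown in miniature: the same record stored relationally as normalized rows linked by keys, versus as a single denormalized document entry of the kind many non-relational stores favor. The data and key names here are invented for illustration.

```python
import json

# Relational style: normalized rows linked by keys.
authors = {1: {"name": "Varley"}}
papers = [{"id": 10, "author_id": 1, "title": "No relation"}]

# Non-relational (document) style: one denormalized entry per paper,
# trading redundancy and ad-hoc joins for simple key lookups.
doc_store = {
    "paper:10": json.dumps({
        "title": "No relation",
        "author": {"name": "Varley"},
    })
}

# A relational read needs a join; a document read is a single lookup.
paper = papers[0]
relational_view = {"title": paper["title"],
                   "author": authors[paper["author_id"]]}
document_view = json.loads(doc_store["paper:10"])
assert relational_view == document_view
```

The document store answers the common access pattern in one lookup, but updating the author's name now means rewriting every document that embeds it, which is the kind of limitation the paper weighs against the scalability gains.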
We explore the differences in conceptual modeling techniques and examine both the advantages and the limitations of several classes of currently available systems, using running examples of real-world problems implemented both in a traditional relational database model and in several non-relational models.

Item: Relational database for Ecuadorian mammals deposited in museums around the world (Texas Tech University, 2007-08). Estupiñán, Juan Pablo Carrera; Baker, Robert J.; Edson, Gary F.; Ladkin, Nicola

Natural history collections play an essential role in the conservation and study of the biodiversity of our planet. Since the 19th century, growing collections of Ecuadorian fauna have been deposited in numerous institutions around the world. These collections have allowed a better understanding of the distribution and systematics of Neotropical mammals. During 2006, an extensive survey based on scientific literature, natural history museum databases, and personal communications with museum staff was carried out to update our knowledge of collections of Ecuadorian mammals. The main goal of this project was to create a central database, hosted at the Museum of Texas Tech University, listing the institutions that hold these specimens, the dates of collection, the taxa represented, and the regions of Ecuador surveyed. A total of 42 institutions from South America, North America, and Europe were identified. Effective collaboration with 28 of these 42 institutions made it possible to compile more than 20,000 records, allowing the creation of a centralized database. The system has the advantage of being simple and easily accessible via the Internet. The information is organized by geographic and taxonomic criteria, allowing unrestricted queries.
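A specimen database queryable by both geography and taxonomy might look like the following miniature. The schema, field names, and records are hypothetical; the project's actual database design is not reproduced here.

```python
import sqlite3

# Hypothetical miniature of a specimen database organized by
# geographic and taxonomic criteria.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE specimen (
    catalog_no  TEXT PRIMARY KEY,
    institution TEXT,
    genus       TEXT,
    species     TEXT,
    province    TEXT
)""")
conn.executemany("INSERT INTO specimen VALUES (?, ?, ?, ?, ?)", [
    ("TTU-1",  "TTU",  "Artibeus", "fraterculus", "Loja"),
    ("TTU-2",  "TTU",  "Sturnira", "lilium",      "Pichincha"),
    ("AMNH-3", "AMNH", "Artibeus", "fraterculus", "Guayas"),
])

# Query by taxonomy (all Artibeus specimens) ...
artibeus = conn.execute(
    "SELECT catalog_no FROM specimen WHERE genus = 'Artibeus'").fetchall()
# ... and by geography (all specimens from Loja province).
loja = conn.execute(
    "SELECT catalog_no FROM specimen WHERE province = 'Loja'").fetchall()
print([r[0] for r in artibeus], [r[0] for r in loja])
```

Because both criteria are ordinary columns, any combination of taxonomic and geographic filters can be expressed as a single query, which is what makes such unrestricted querying possible.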
The scope of this project demonstrates effective collaboration among natural history museums in the 21st century.

Item: The use of modern digital technology to store and serve biodiversity data for research and educational purposes (2015-12). Brenskelle, Laura Marie; Rowe, Timothy, 1953-; Bell, Christopher J.; Brown, Matthew; Karadkar, Unmil

Herein I describe two projects I completed during the course of my Master's at The University of Texas at Austin. Both projects broadly focused on the application of technology to maintain scientific data for research and education. The first chapter is a case study of a website I developed as part of a group project in a graduate database management course. Our group took a module from proprietary instructional software developed in the 1990s and moved it into an online format backed by a MySQL database. In chapter one, I provide the documentation needed for this project to be expanded in the future. The second chapter describes a project in which I interviewed collection managers of natural history collections about their database practices. These practices have implications for the downstream use of collection data in research, education, and conservation. As technology inevitably advances, this thesis will serve as a historical snapshot of current practices, and it can provide a starting point for furthering the emerging discipline of biodiversity informatics.