Browsing by Subject "Database management"
Now showing 1 - 20 of 22
Item: A homogeneous temporal extension of QUEL (1984-12). Vaishnav, Jay H.
The role of time in databases is being studied by researchers with increasing interest. Temporal query languages have been proposed for a number of historical database models. This thesis contains a survey of some important temporal relational database models and their query languages, and proposes a new temporal query language for a temporal database model.

Item: A tool for the conceptual design of a database (Texas Tech University, 1983-08). Srinath, Balakrishnan
The present study addresses the problem of logical database design in the framework of the relational data model, with semantic information given in the form of functional dependencies (FDs) and multivalued dependencies (MVDs). An algorithm is presented that takes as input the attribute names together with the FDs and MVDs, and (i) generates a minimal cover of the given set of FDs and MVDs, and (ii) generates a logical database schema. The schema so designed (i) preserves all the original information, (ii) minimizes redundancy, and (iii) represents "independent relationships" by a set of independent relation schemes.
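Note: as background for the kind of algorithm the preceding entry describes, the following is a minimal sketch of the standard attribute-closure test used when computing a minimal cover of FDs. The data layout and function names are illustrative assumptions; this is not the algorithm presented in the thesis, which also handles MVDs.

    # Attribute closure over functional dependencies (FDs), each an
    # (lhs, rhs) pair of attribute sets. Illustrative sketch only.

    def closure(attrs, fds):
        """Return every attribute determined by `attrs` under `fds`."""
        result = set(attrs)
        changed = True
        while changed:
            changed = False
            for lhs, rhs in fds:
                # If the left side is already derivable, the right side is too.
                if lhs <= result and not rhs <= result:
                    result |= rhs
                    changed = True
        return result

    def redundant(fd, fds):
        """An FD X -> Y is redundant if Y still follows without it."""
        lhs, rhs = fd
        return rhs <= closure(lhs, [f for f in fds if f is not fd])

    # Example: given A -> B and B -> C, the FD A -> C is redundant,
    # so a minimal cover would drop it.
    fds = [({"A"}, {"B"}), ({"B"}, {"C"}), ({"A"}, {"C"})]
    assert redundant(fds[2], fds)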
Item: Active learning and compilation of higher order schema integration queries (2005). Barbanson, François Gérard; Miranker, Daniel P.
After nearly 30 years, database integration remains the province of engineers and application developers. In an informal proof, Krishnamurthy, Litwin and Kent [KLK91] demonstrated that only higher order relational languages such as SchemaSQL and SchemaLog [LSS96] are general enough to concisely describe the merging of data from multiple heterogeneous sources. However, those languages are incrementally harder to program than SQL. A modern solution is to provide a GUI language with the same capabilities. However, a Query-by-Example (QBE) [Zloof77] style interface does not allow the unambiguous specification of higher order data integrating queries. We propose an architecture comprising three layers. In the middle layer, the user expresses the desired federated view through a QBE-inspired user interface. Further learning proceeds via a sample selection method that asks the user to validate examples of federated records; this interaction ends when the system is satisfied that it has converged to the exact view definition the user intends. The bottom layer provides the execution mechanism for higher order data manipulations by compiling higher order relational definitions into first-order SQL programs. The top layer assists the learning algorithm by collecting meta-data and catalog statistics. Our primary contributions comprise a taxonomy examining trade-offs between complexity and completeness and identifying various classes of higher order relational data manipulations. The architecture delimits three separate challenges that must be overcome in order to propose a solution. Our compilation for SchemaSQL offers novel theoretical complexity guarantees. Type-based vertical partitioning of the meta-data ensures that the result can be properly optimized by existing SQL engines. Sample selection constraints specific to databases require the introduction of a third kind of instance label in the training set of our learner. We derive a new algorithm by modifying Mitchell's version spaces [Mitchell82] to handle this new kind of label, and we prove that the modified algorithm preserves the original properties of version spaces and avoids the possibility of deadlock. We introduce a sample selection heuristic that converts catalog statistics into a classic inductive bias. Finally, we develop the Sphinx prototype, carry out experiments, and demonstrate the system on an application.

Item: ADM: Architectural database management (2002-12). Kelso, Shana N.
To complete a Post-Professional degree in Architecture, with a concentration in Historic Preservation, a prototype architectural reference database will be created for the thesis. Using Microsoft Visual FoxPro 5, an architectural reference database will operate as an independent software program. To illustrate the database's functionality and usability, the input data will compare Dutch/Flemish architectural styles and construction methods in Amsterdam and New England between 1600 and 1800. The thesis will include a written portion, an illustrated glossary of terms, and photographs of selected examples of styles. This type of project is necessary because the existing architectural data are not standardized and contradictions often occur. The primary problem with the architectural record is that it is a built record: the normal passage of time exposes a structure to degradation from environmental effects, while understanding and knowledge about the structure are left open to viewer interpretation. The written portion of the architectural record (i.e., histories, analyses, and descriptions) is a varied, widely dispersed, and dynamic body of knowledge, making data compilation difficult and ad hoc queries next to impossible. Few of the written records are comprehensive, even fewer include in-depth studies, and still fewer are electronic or computer oriented. To regulate this architectural evidence, a relational architectural reference database needs to be created. An Architectural Reference Database will be useful to anyone involved with or interested in architecture. Students studying architectural history will find it helpful in understanding how various architectural styles are related. Architectural historians and professors will use it as a more comprehensive architectural textbook. Preservationists will be able to use the database in identifying and documenting structures. An Architectural Reference Database is a tool that needs to be created, and building it demonstrates the research skills required for a Post-Professional degree in Architecture, with a concentration in Historic Preservation.

Item: Algorithms for improving query processing programs (Texas Tech University, 1987-05). Desai, Rajan S.
Abstract not available.

Item: An experimental authoring system for CAD applications (Texas Tech University, 1985-08). Yakoob, Nisar Ahmed
Writing software for interactive CAD systems is costly, inflexible, and difficult. It requires many man-years to develop a foolproof interactive system because the dialog between the user and the system is very complex. Authoring systems reduce the investment needed to produce applications by providing a set of tools suited to the domain of those applications. This thesis investigates how an engineering workstation, the Unix environment, and an engineering database system can be combined to create an authoring system for CAD applications. The term "authoring system for CAD applications" is used in this thesis to mean a system for the development of CAD applications. The investigation was performed by implementing a simple integrated circuit layout application. Engineering workstations are powerful single-user computers. A typical station will have its own mass storage device, a high-speed interface to other stations, and a medium-resolution graphics screen. Unix is an operating system which allows programs that were written independently to communicate using pipes, filters, and mailboxes. An engineering database system provides a set of tools for creating, maintaining, and mutating the data in a CAD application. A system which combines these tools should considerably reduce the software investment needed to produce a CAD application.

Item: Assignment of global information system coordinates to classical museum localities for relational database analyses (Texas Tech University, 1999-12). Knyazhnitskiy, Oleksiy
Many decisions are made based on information concerning the flora and fauna of the world. With the development of technological breakthroughs such as computers, DNA sequencers, satellite imagery, and image analyzers, the volume of knowledge available concerning plants and animals is rapidly expanding and has grown beyond our ability to examine each study and data set in the classically employed "hands on" manner. To more effectively share and interrogate data sets, a new field of science has evolved called bioinformatics. At the heart of bioinformatics is the ability to use computers to examine massive data files in a critical synthesis. These syntheses employ relational databases to examine geographical and temporal relationships across data sets. The Museum of Texas Tech University has been archiving biological specimens as a source of information on biocomplexity, disease, effects of agriculture, and related subjects. These collections of biological voucher specimens are a valuable source of information that may be explored in a relational format. A new Relational Database Management System was designed to perform operations and increase the usefulness of the electronic database (Monk, 1998). The Natural Science Research Lab's (NSRL) current collection was constructed to meet the needs of scientists and biologists, and to increase the potential of the collection using ongoing developments in computer software and hardware (Baker et al., 1996). A problem for such use is that the data must be in a format compatible with computer analysis. For example, a locality such as 10 MI S LUBBOCK cannot be recognized in a geographical context without assistance and extra computer time. Two types of locality data, Universal Transverse Mercator (UTM) coordinates and longitude/latitude, can be utilized easily by computer software. UTM coordinates are numerical data that specify exact geographical locations on a map. The world map is divided into 60 zones; to assign UTM coordinates for a specific location, the position within the defined zones is first established. For instance, the state of Texas is situated in zones thirteen, fourteen, and fifteen. UTM coordinates are expressed in meters, so a geographical location can be specified to within one meter.
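Note: to make the UTM discussion in the preceding entry concrete, here is a minimal sketch of the standard longitude-zone computation (zones are six degrees of longitude wide, numbered 1 through 60 starting from 180 degrees west). This is illustrative background, not code from the thesis.

    # Standard UTM longitude-zone computation. Illustrative sketch only.

    def utm_zone(longitude_deg: float) -> int:
        """Return the UTM longitude zone (1..60) for a longitude in
        degrees, with west longitudes negative."""
        return int((longitude_deg + 180.0) // 6.0) % 60 + 1

    # El Paso, Texas (about 106.5 degrees west) falls in zone 13,
    # consistent with Texas spanning zones 13 through 15.
    assert utm_zone(-106.5) == 13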
Item: Design and development of a mini-computer interactive data base management system (Texas Tech University, 1979-08). Omer, Mohammed Hasan Abdulrahim
Abstract not available.

Item: Design and implementation of an extended relational database model (Texas Tech University, 1983-12). Mohamed, Ziauddin
An experimental implementation of an extended relational database model is described. Observations about the performance of the model are made, and some possible modifications are suggested. The extended model has been designed to make the conventional relational model more suitable for engineering applications. Its principal feature is that it allows the static properties of graphical entities to be modeled more easily and directly.
Item: Effective data access in software IO frameworks (2003). Page, David Scott; Vin, Harrick M.
Advances in computer communications technologies have led to new classes of applications; these applications exchange data across networks using diverse data representations and encodings. While a programming language implementation intrinsically supports efficient access to data elements defined in that language, accessing (producing or consuming) IO data in its external exchange format, called a transfer syntax, requires specialized binding code to accommodate mismatches between the local and external data representations. Providing effective input-output (IO) data access means transparently accommodating these intra- and inter-layer syntactic complexities in order to extend to the programmer of IO data processing layers the convenience, efficiency, and accuracy of access automatically provided for programming-language-defined data. This dissertation presents the theory, design, and implementation of a syntax-directed binding facility that achieves effective IO data access. An improved programming practice introduces an abstraction boundary between the mechanisms of IO data syntax navigation and access, and the layer semantics or policies associated with the IO data values. The domain-specific language Xyn and its bit-level lexicon Blex succinctly specify the IO data syntax that defines the abstraction boundary. The Xyn compiler, Xync, codifies the implicit mapping between the labeled elements of a declarative Xyn/Blex specification and their presentation as identically labeled native programming-language structures; i.e., Xync produces the binding code necessary to navigate and access the IO data syntax elements. Finally, an inter-layer optimization framework called MetaXyn exploits the intra-layer syntactic attributes exposed by Xyn to optimize the inter-layer composition. The Xyn language has been used to specify the Internet Protocol version 4, and the Xync-generated binding code was evaluated in a modern software network router framework. We present and evaluate our results and discuss our experiences.

Item: Enhanced classification through exploitation of hierarchical structures (2007). Punera, Kunal Vinod Kumar; Ghosh, Joydeep

Item: Implementation of a data manager for computer graphics (Texas Tech University, 1985-05). Lahiri, Subhendu
The Generalization/Aggregation (G/A) data model is an enhancement of the relational data model. It shows considerable promise for engineering applications because it allows abstract items to be stored in relational databases. However, the G/A model is only a data model; it cannot be used in database applications unless a data language exists to manipulate data items within that model. A database algebra for the G/A model is being developed at Texas Tech. This thesis describes the implementation of a data management system containing this algebra.

Item: Implementation of an extended relational model on a micro-computer system (Texas Tech University, 1984-08). Kan, Ling
The design of an implementation of an extended relational data model for a micro-computer system is described. The extended data model is an enhancement of the relational model for design applications. Its principal feature is that it allows the attributes of a relation to contain complex, composite entities. An algebra to create and manipulate these attributes has been invented. We analyze various techniques for implementing this algebra and describe a particular implementation in detail.
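Note: as illustrative background for the two preceding entries, the following minimal sketch shows a relation in which one attribute holds a complex, composite value (a nested set of tuples), together with a simple unnesting operation. The table layout and the unnest helper are assumptions made for illustration; they are not the algebra developed in these theses.

    # A relation whose "pins" attribute holds a nested, composite value.
    # Illustrative sketch only; not the algebra from the theses.

    circuits = [
        {"name": "nand2", "area": 12.5,
         "pins": [{"pin": "a", "dir": "in"}, {"pin": "y", "dir": "out"}]},
        {"name": "inv", "area": 4.0,
         "pins": [{"pin": "a", "dir": "in"}, {"pin": "y", "dir": "out"}]},
    ]

    def unnest(rel, attr):
        """Flatten a nested attribute: one output tuple per inner tuple."""
        for row in rel:
            for inner in row[attr]:
                flat = {k: v for k, v in row.items() if k != attr}
                flat.update(inner)
                yield flat

    # Produces flat tuples such as
    # {"name": "nand2", "area": 12.5, "pin": "a", "dir": "in"}
    for row in unnest(circuits, "pins"):
        print(row)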
Item: Incorporation of barcode capabilities to existing museum databases (Texas Tech University, 2000-12). Fishman-Armstrong, Susan E.
An effective data management program saves time, money, and effort by increasing the accuracy, speed, and usefulness of the database. A bar code system is part of an effective data management system. Bar code capabilities were added at the Archeology Department of the Panhandle-Plains Historical Museum (P-PHM) in Canyon, Texas (a PC environment), and at the Paleontology Division of the Museum of Texas Tech University (MoTTU) in Lubbock, Texas (a Macintosh environment). A bar code generation utility was installed in the current databases and then used to print specimen labels. Before labels are printed, however, the collection's data must be proofread for erroneous data. The project consists of five phases: (1) Database Correlation, (2) Upgrading the Database Management System, (3) Installing the Bar Code Utility, (4) Designing Views and Reports, and (5) Printing. The lasting effects of the project are increased control of collection management operations, expanded research capabilities, updated labels on archival paper, standardized labels and data, and automated generation of information tags.

Item: On indexing large databases for advanced data models (2001-08). Samoladas, Vasilis; Miranker, Daniel P.

Item: Query languages for a heterogeneous temporal database (Texas Tech University, 1986-08). Yeung, Chung-sing
Time is an important dimension in databases. In a conventional database, out-of-date information is usually replaced by current information to keep the database up-to-date. In many applications, however, discarding old information is inappropriate. A temporal database incorporates the notion of time: objects in a temporal database are not deleted; on the contrary, they are retained and time-stamped to indicate their periods of existence in the real world. Over the last few years, a number of temporal relational database models have been proposed. This thesis reviews these models and proposes a heterogeneous model. We also develop a relational algebra and a tuple calculus for this model and prove their equivalence. The model and query languages capture the concepts of "always" and "sometime" in natural language. In addition, they provide powerful operations with respect to the temporal properties of information contained in tuples. Compared to existing approaches, fewer operations are needed to express complex queries; as a result, fewer intermediate scratch pads are used during query execution, and the space and time complexity introduced by the temporal dimension is reduced.
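Note: the following minimal sketch illustrates the flavor of "always" and "sometime" selections over time-stamped tuples, as discussed in the preceding entry. The representation (one validity interval per tuple) and the predicates are illustrative assumptions, not the algebra defined in the thesis; in particular, a full algebra would also coalesce adjacent intervals before testing "always".

    # "sometime" and "always" selections over tuples carrying a validity
    # interval [start, end). Illustrative sketch only.

    salaries = [
        {"name": "Smith", "salary": 30000, "start": 1980, "end": 1984},
        {"name": "Smith", "salary": 35000, "start": 1984, "end": 1987},
        {"name": "Jones", "salary": 32000, "start": 1982, "end": 1987},
    ]

    def sometime(rel, pred, start, end):
        """Tuples satisfying pred at some instant within [start, end)."""
        return [t for t in rel
                if pred(t) and t["start"] < end and start < t["end"]]

    def always(rel, pred, start, end):
        """Tuples satisfying pred throughout all of [start, end)."""
        return [t for t in rel
                if pred(t) and t["start"] <= start and end <= t["end"]]

    high = lambda t: t["salary"] > 31000
    print(sometime(salaries, high, 1983, 1986))  # Smith 35000, Jones 32000
    print(always(salaries, high, 1983, 1986))    # Jones 32000 only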
Item: Relational database applications' optimization and performance study (Texas Tech University, 1998-08). Thiruvaipati, Prashanth
The objective of the thesis is to develop efficient query processing techniques for large relational database applications, since performance becomes a key issue when such applications must process more than a million records. Some techniques rely upon massive hardware architectures and new database software to improve the efficiency of large database systems. One of the objectives of this thesis, however, is to develop optimization techniques using existing hardware and software. Performance improvement may be achieved by the use of parallel application processes that work on different fragments of a database at the same time. Further performance improvement is achieved by using dynamic SQL and by simulating an SQL outer join in the C programming language. Simulating the SQL function MAX and using a proper locking mechanism resulted in marginal performance improvement. A database design to support the use of parallel application processes and the other techniques is presented. Applications were built to test the techniques, and the performance results are presented and discussed. Multiple test cases were run for each technique to ensure that the timing results are consistent across runs. For each technique, the scenarios of maximum performance improvement, the underlying mechanism, and possible limitations are discussed. (A sketch of such an outer-join simulation appears at the end of this listing.)

Item: The development of inventory models for evaluating information systems (Texas Tech University, 1971-05). Boche, Raymond Eli
Abstract not available.
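Note: referring back to the relational database optimization entry above, when a database engine lacks (or poorly optimizes) OUTER JOIN, the operation can be simulated in application code. The following is a minimal sketch of a hash-based left outer join; the table layout and NULL convention are illustrative assumptions, not the C implementation described in the thesis.

    # Simulating a left outer join in application code when the SQL
    # engine's OUTER JOIN is unavailable or slow. Illustrative sketch only.

    employees = [
        {"emp_id": 1, "name": "Lee", "dept_id": 10},
        {"emp_id": 2, "name": "Park", "dept_id": None},
    ]
    departments = [{"dept_id": 10, "dept": "Sales"}]

    def left_outer_join(left, right, key):
        """Join on `key`, keeping unmatched left rows padded with NULLs."""
        index = {}
        for r in right:                      # build a hash index on the key
            index.setdefault(r[key], []).append(r)
        null_right = {k: None for r in right for k in r if k != key}
        for l in left:
            matches = index.get(l[key], [])
            if matches:
                for r in matches:
                    yield {**l, **{k: v for k, v in r.items() if k != key}}
            else:
                yield {**l, **null_right}    # no match: pad with NULLs

    for row in left_outer_join(employees, departments, "dept_id"):
        print(row)
    # {'emp_id': 1, 'name': 'Lee', 'dept_id': 10, 'dept': 'Sales'}
    # {'emp_id': 2, 'name': 'Park', 'dept_id': None, 'dept': None}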