Browsing by Subject "Programming languages"
Now showing 1 - 6 of 6
Item: Ad hoc : overloading and language design (2010-08)
Kilpatrick, Scott Lasater, 1984-; Allen, Eric, 1974-; Cook, William Randall

The intricate concepts of ad-hoc polymorphism and overloading permeate the field of programming languages despite their somewhat nebulous definitions. With the perspective afforded by the state-of-the-art, object-oriented Fortress programming language, this thesis presents a contemporary account of ad-hoc polymorphism and overloading in theory and in practice. Common language constructs are reinterpreted with a new emphasis on overloading as a key facility. Furthermore, concrete problems with overloading in Fortress, encountered during the author's experience in the development of the language, are presented with an emphasis on the ad hoc nature of their solutions.

Item: Compiler issues related to iconic visual languages (Texas Tech University, 1997-12)
Zhang, Fang

This research studies the visualization of traditional programming languages. The major problem is how to represent a traditional one-dimensional language in a two-dimensional paradigm. The focus of this research is compiler issues: traditional compiler theories and algorithms, together with new concepts applied in the research. A multi-paradigm programming-language environment was created to experiment with the theoretical solution. The current target language is C++, but the environment configuration can be applied to other traditional programming languages. In this particular environment, specific attention has been given to the user interface, data structures, parsing, and error handling. This research uses a new approach and provides some possible solutions in the visual programming language field.

Item: Dynamic software updates : a VM-centric approach (2010-05)
Subramanian, Suriya; McKinley, Kathryn S.; Blackburn, Steve; Hicks, Michael; Lin, Calvin; Pingali, Keshav

Because software systems are imperfect, developers are forced to fix bugs and add new features.
The common way of applying changes to a running system is to stop the application or machine and restart with the new version. Stopping and restarting causes a disruption in service that is at best inconvenient and at worst causes revenue loss and compromises safety. Dynamic software updating (DSU) addresses these problems by updating programs while they execute. Prior DSU systems for managed languages like Java and C# lack necessary functionality: they are inefficient and do not support updates that occur commonly in practice.

This dissertation presents the design and implementation of Jvolve, a DSU system for Java. Jvolve's combination of flexibility, safety, and efficiency is a significant advance over prior approaches. Our key contribution is the extension and integration of existing virtual machine services with safe, flexible, and efficient dynamic updating functionality. Our approach is flexible enough to support a large class of updates, guarantees type safety, and imposes no space or time overheads on steady-state execution.

Jvolve supports many common updates. Users can add, delete, and change existing classes. Changes may add or remove fields and methods, replace existing ones, and change type signatures. Changes may occur at any level of the class hierarchy. To initialize new fields and update existing ones, Jvolve applies class and object transformer functions, the former for static fields and the latter for object instance fields. These features cover many updates seen in practice. Jvolve supports 20 of 22 updates to three open-source programs (the Jetty web server, JavaEmailServer, and the CrossFTP server) based on actual releases occurring over a one- to two-year period. This support is substantially more flexible than that of prior systems.

Jvolve is safe. It relies on bytecode verification to statically type-check updated classes.
To avoid dynamic type errors due to the timing of an update, Jvolve stops the executing threads at a DSU safe point and then applies the update. DSU safe points are a subset of VM safe points, where it is safe to perform garbage collection and thread scheduling. DSU safe points further restrict the methods that may be on each thread's stack, depending on the update. Restricted methods include updated methods, for code consistency and safety, and user-specified methods, for semantic safety. Jvolve installs return barriers and uses on-stack replacement to speed up reaching a safe point when necessary. While Jvolve does not guarantee that it will reach a DSU safe point, in our multithreaded benchmarks it almost always does.

Jvolve includes a tool that automatically generates default object transformers, which initialize new and changed fields to default values and retain the values of unchanged fields in heap objects. If needed, programmers may customize the default transformers. Jvolve is the first dynamic updating system to extend the garbage collector to identify and transform all object instances of updated types. This dissertation introduces the concept of object-specific state transformers to repair application heap state for certain classes of bugs that corrupt part of the heap, and a novel methodology that employs dynamic analysis to automatically generate these transformers. Jvolve's eager object transformation design and implementation support the widest class of updates to date.

Finally, Jvolve is efficient. It imposes no overhead during steady-state execution. During an update, it imposes overheads on class loading and garbage collection. After an update, the adaptive compilation system incrementally optimizes the updated code in its usual fashion. Jvolve is the first full-featured dynamic updating system that imposes no steady-state overhead.
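The class- and object-transformer mechanism described above can be sketched as follows. This is a minimal illustration with invented class and method names; Jvolve generates and applies such transformers inside the VM (during garbage collection) rather than in application code:

```java
// Sketch of a Jvolve-style default object transformer (names invented).
// Version 1 of a class:
class ConnectionV1 {
    int requestCount;
    ConnectionV1(int requestCount) { this.requestCount = requestCount; }
}

// Version 2 adds a field and keeps the old one:
class ConnectionV2 {
    int requestCount;   // unchanged field: its value must be retained
    int errorCount;     // new field: initialized to a default value
    ConnectionV2(int requestCount, int errorCount) {
        this.requestCount = requestCount;
        this.errorCount = errorCount;
    }
}

public class TransformerSketch {
    // Default object transformer: copy unchanged fields, default new ones.
    // Jvolve's collector would locate every ConnectionV1 instance on the
    // heap and replace it with the transformed ConnectionV2 instance.
    static ConnectionV2 transform(ConnectionV1 old) {
        return new ConnectionV2(old.requestCount, 0);
    }

    public static void main(String[] args) {
        ConnectionV2 updated = transform(new ConnectionV1(42));
        System.out.println(updated.requestCount + " " + updated.errorCount);
        // prints "42 0"
    }
}
```

A class transformer has the same shape but operates on static fields; programmers customize either one only when the defaults (retain old values, zero-initialize new fields) are not the right update semantics.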
In summary, Jvolve is the most fully featured, most flexible, safest, and best-performing dynamic updating system for Java, and it marks a significant step towards practical support for dynamic updates in managed-language virtual machines.

Item: Elixir: synthesis of parallel irregular algorithms (2015-05)
Prountzos, Dimitrios; Pingali, Keshav; Misra, Jayadev; Batory, Don; Cook, William; Sagiv, Mooly; Gulwani, Sumit

Algorithms in new application areas like machine learning and data analytics usually operate on unstructured sparse graphs. Writing efficient parallel code to implement these algorithms is very challenging for a number of reasons. First, there may be many algorithms to solve a problem, and each algorithm may have many implementations. Second, synchronization, which is necessary for correct parallel execution, introduces potential problems such as data races and deadlocks. These issues interact in subtle ways, making the best solution dependent both on the parallel platform and on properties of the input graph. Consequently, implementing and selecting the best parallel solution can be a daunting task for non-experts, since we have few performance models for predicting the performance of parallel sparse-graph programs on parallel hardware.

This dissertation presents a synthesis methodology and a system, Elixir, that addresses these problems by (i) allowing programmers to specify solutions at a high level of abstraction, and (ii) generating many parallel implementations automatically and using search to find the best one. An Elixir specification consists of a set of operators capturing the main algorithm logic and a schedule specifying how to efficiently apply the operators. Elixir employs sophisticated automated reasoning to merge these two components, and uses techniques based on automated planning to insert synchronization and synthesize efficient parallel code.
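The operator/schedule split at the heart of an Elixir specification can be illustrated with a sequential Java sketch (this is not Elixir's specification syntax; names and structure are invented for illustration). For single-source shortest paths, the operator relaxes one edge, and the schedule decides which pending node to process next, here a plain FIFO worklist. Elixir searches over many such schedules and inserts the synchronization a parallel version would need:

```java
import java.util.*;

// Illustrative operator/schedule separation for single-source shortest
// paths on a sparse graph (sequential sketch, invented structure).
public class SsspSketch {
    // edges.get(u) holds {v, w} pairs for each edge u -> v of weight w.
    static int[] sssp(int n, List<List<int[]>> edges, int src) {
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[src] = 0;
        Deque<Integer> worklist = new ArrayDeque<>(); // the schedule: FIFO
        worklist.add(src);
        while (!worklist.isEmpty()) {
            int u = worklist.poll();
            for (int[] e : edges.get(u)) {            // the operator: relax u -> v
                int v = e[0], w = e[1];
                if (dist[u] + w < dist[v]) {
                    dist[v] = dist[u] + w;
                    worklist.add(v);                  // reschedule the changed node
                }
            }
        }
        return dist;
    }

    public static void main(String[] args) {
        List<List<int[]>> edges = List.of(
            List.of(new int[]{1, 1}, new int[]{2, 5}), // node 0
            List.of(new int[]{2, 2}),                  // node 1
            List.<int[]>of());                         // node 2
        System.out.println(Arrays.toString(sssp(3, edges, 0))); // prints [0, 1, 3]
    }
}
```

Swapping the FIFO deque for a priority queue ordered by distance turns the same operator into Dijkstra's algorithm, which is exactly the kind of scheduling choice Elixir explores automatically.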
Experimental evaluation of our approach demonstrates that the performance of the Elixir-generated code is competitive with, and can even outperform, hand-optimized code written by expert programmers for many interesting graph benchmarks.

Item: Integrating programming languages and databases via program analysis and language design (2009-12)
Wiedermann, Benjamin Alan; Cook, William Randall; Batory, Don; Blackburn, Stephen M.; Lin, Calvin; McKinley, Kathryn S.

Researchers and practitioners alike have long sought to integrate programming languages and databases. Today's integration solutions focus on the data types of the two domains, but today's programs lack transparency. A transparently persistent program operates over all objects in a uniform manner, regardless of whether those objects reside in memory or in a database. Transparency increases modularity and lowers the barrier to adoption in industry. Unfortunately, fully transparent programs perform so poorly that no one writes them. The goal of this dissertation is to increase the performance of these programs to make transparent persistence a viable programming paradigm.

This dissertation contributes two novel techniques that integrate programming languages and databases. Our first contribution, called query extraction, is based purely on program analysis. Query extraction analyzes a transparent, object-oriented program that retrieves and filters collections of objects. Some of these objects may be persistent, in which case the program contains implicit queries of persistent data. Our interprocedural program analysis extracts these queries from the program, translates them to explicit queries, and transforms the transparent program into an equivalent one that contains the explicit queries. Query extraction enables programmers to write programs in a familiar, modular style and to rely on the compiler to transform their program into one that performs well.
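The query-extraction idea can be sketched concretely (class, field, and query text are invented for illustration; the actual analysis operates interprocedurally on richer programs). A transparent program filters objects the same way whether they live in memory or in a database; the analysis recognizes the traversal-and-filter pattern and replaces it with one explicit query:

```java
import java.util.*;

// Hypothetical illustration of query extraction (names invented).
public class QueryExtractionSketch {
    static class Employee {
        final String name;
        final int salary;
        Employee(String name, int salary) { this.name = name; this.salary = salary; }
    }

    // Transparent style: an implicit query, written as an ordinary loop
    // over a collection of (possibly persistent) objects.
    static List<String> highEarners(List<Employee> all) {
        List<String> out = new ArrayList<>();
        for (Employee e : all)
            if (e.salary > 100_000) out.add(e.name);
        return out;
    }

    // What the analysis would extract, conceptually: an equivalent
    // explicit query evaluated by the database instead of the loop above.
    static final String EXTRACTED_QUERY =
        "SELECT name FROM Employee WHERE salary > 100000";

    public static void main(String[] args) {
        List<Employee> persistent = List.of(
            new Employee("Ada", 120_000), new Employee("Bob", 90_000));
        System.out.println(highEarners(persistent)); // prints [Ada]
    }
}
```

The transformed program keeps the loop's modular appearance but ships the filter to the database, so only matching objects cross the client-server boundary.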
Our second contribution, called RBI-DB+, is an extension of a new programming-language construct called a batch block. A batch block provides a syntactic barrier around transparent code. It also provides a latency guarantee: if the batch block compiles, then the code that appears in it requires only one client-server communication trip. Researchers have previously proposed batch blocks for databases. However, those batch blocks cannot be modularized or composed, and they do not permit programmers to modify persistent data. We extend database batch blocks to address these concerns and formalize the results.

Today's technologies integrate the data types of programming languages and databases, but they discourage programmers from using procedural abstraction. Our contributions restore procedural abstraction's use in enterprise applications without sacrificing performance. We argue that industry should combine our contributions with data-type integration. The result would be a robust, practical integration of programming languages and databases.

Item: The effects of elaboration and placement of analogies on student learning and attitude toward basic programming using computer-assisted instruction (Texas Tech University, 1993-12)
Lin, Shu-Ling

The major purposes of this study were: (a) to determine if analogies, elaboration of the analogies, and placement of the analogies help novices learn a computer programming language and affect their attitude toward learning a programming language in a computer-based learning environment; (b) to determine if the students' mathematics ability influences programming learning and their attitude toward a programming language; (c) to determine if students with a relatively average ability level in mathematics can benefit from analogies, elaboration of the analogies, or placement of the analogies; and (d) to determine if students with relatively high ability levels in terms of SAT or ACT quantitative scores are affected through
the use of analogies, elaboration of the analogies, or placement of the analogies.

Subjects were 156 college students from two summer courses in computing and information technology in the College of Education at a state university in northwest Texas. Students received their respective computer-assisted instruction lessons, which differed according to their treatment condition.

Results indicated that learning with analogies does significantly improve concept recall. No significant results were obtained for elaboration and placement of the analogy. Relatively high-ability students, in terms of their mathematics scores, did achieve better than relatively average-ability students. Students with different ability levels did not significantly benefit from different treatments. However, the effect of analogies and elaboration approached significance for relatively average-ability students.