Browsing by Subject "Computer software--Verification"
Now showing 1 - 8 of 8
Item: Deductive mechanical verification of concurrent systems (2005). Sumners, Robert W.; Abraham, Jacob A.
Society depends critically on the correct and efficient execution of computer programs. The requirements of correctness and efficiency are not independent or complementary. The goal of efficiency often engenders complexity in the design, definition, and execution of computer programs, which complicates the analysis of correctness. Modern computing systems support increased performance by providing multiple execution units operating in parallel, and efficient programs utilize this support by decomposing computation sequences into multiple concurrent, interacting threads of execution. The concurrency introduced in these programs exacerbates the difficulty of verifying their correct operation, since it greatly increases the number of possible states the threaded execution may reach. In this dissertation, we propose the use of theorem proving for establishing the correctness of concurrent programs. Specifically, we present specification methodologies and tool support for proving the correctness of concurrent programs using the theorem prover ACL2. We present the use of stuttering refinement as a means of specifying the correctness of a concurrent program implementation by relating the execution traces of the implementation with the behaviors of a much simpler specification program. Stuttering refinement allows one to hide spatial and temporal details of the implementation in the relation with the specification while still ensuring that the implementation makes progress toward matching steps in the specification. We also present reductions of the proof requirements for stuttering refinement to facilitate the proof of refinements with the theorem prover ACL2. ACL2 is a general-purpose theorem prover, and the theorems which arise in stuttering refinement proofs tend to stress its ability to efficiently discover proofs. We extend ACL2 with an efficient and elegant simplifier which significantly increases the performance of ACL2 in handling the theorems which arise in refinement proofs. This simplifier is named KAS, and we present its definition along with an argument for its soundness and limited completeness. KAS has been designed to be extended by the user and has applications beyond the stuttering refinement proofs that are the focus of this dissertation. Even with the adoption of KAS, the user is burdened with the definition of inductive invariants, which may require significant interaction from an experienced user to develop. We present a separate tool which uses user-guided term rewriting (extended by theorems proven in ACL2) to generate an abstract model suitable for proofs of invariants. We present the details of this procedure along with its application to several example programs. We conclude the dissertation with a proposal for the development of an integrated compiler and prover intended to avoid the specification and proof burden for concurrent programs by automating the construction of concurrent programs which are correct by construction.
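To make the stuttering-refinement obligation above concrete, here is a minimal Python sketch (the dissertation's actual development is carried out in ACL2, not Python). The toy transition systems, the refinement map rep, and the rank function rank are illustrative assumptions: each implementation step must either map to a matching specification step, or stutter on the same specification state while a well-founded rank decreases.

```python
# Minimal sketch of the stuttering-refinement proof obligation on finite systems.
# Assumed inputs (illustrative, not from the dissertation):
#   impl_next(s) -> implementation successor states
#   spec_next(a) -> specification successor states
#   rep(s)       -> refinement map from implementation to specification states
#   rank(s)      -> well-founded measure that must decrease on stuttering steps

def check_stuttering_refinement(impl_states, impl_next, spec_next, rep, rank):
    """Return True if every implementation step either matches a spec step
    or stutters (same spec image) while strictly decreasing the rank."""
    for s in impl_states:
        for t in impl_next(s):
            if rep(t) in set(spec_next(rep(s))):
                continue                      # matching step in the specification
            if rep(t) == rep(s) and rank(t) < rank(s):
                continue                      # stuttering step, progress ensured by the rank
            return False                      # obligation violated on edge (s, t)
    return True

# Toy example: a counter 0,1,2,3 refines a spec that makes a single step 0 -> 1.
impl = {0: [1], 1: [2], 2: [3], 3: [3]}
spec = {0: [1], 1: [1]}
print(check_stuttering_refinement(
    impl_states=impl,
    impl_next=lambda s: impl[s],
    spec_next=lambda a: spec[a],
    rep=lambda s: 0 if s < 2 else 1,          # hide "spatial" detail: collapse 0,1 and 2,3
    rank=lambda s: 3 - s))                    # decreases on the hidden (stuttering) steps
```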
Item: Exploiting replication in automated program verification (2007). Wahl, Thomas, 1973-; Emerson, E. Allen
This dissertation shows how systems of many concurrent components, which naively engender intractably large state spaces, can nevertheless be successfully subjected to exhaustive formal verification, provided the components can be classified into a few types. It therefore addresses an instance of the state explosion problem: a finite-state model of a system can be much larger than a high-level description of this system. Model checking, the technique to which this dissertation is primarily devoted, inherently relies on state space exploration and thus suffers from this problem more than other formal verification methods. The state explosion phenomenon persists even if the system consists of components that are simply replicated instances of a generic behavioral description. Examples of such systems abound; they include processes executing concurrently according to some common protocol, clusters of processors executing a parallel program, and hardware with replicated physical devices in a uniform arrangement. Fortunately, models of such systems often exhibit a regular structure, known as symmetry, which can be exploited in verification, sometimes to the extent of an exponential reduction in model size. The first contribution of this dissertation is to show how reductions based on symmetry can be performed with state-of-the-art system representations. Many of today's computing systems induce astronomically large state spaces whose formal models require a symbolic encoding using boolean formulas. Such succinct representations call for new algorithms that can process entire sets of objects at once. How to detect symmetry quickly during symbolic model checking, so that redundancy in the exploration can be avoided, was an open problem for some time. In this dissertation we provide an efficient and flexible solution to this problem by using symbolic data structures in a somewhat unconventional way. The second contribution is to extend symmetry reduction techniques to more realistic and general scenarios. We establish that the principal ideas still apply if symmetry is violated in parts of the state space. Such scenarios are common in practice, for instance when priorities determine which of several competing processes can access a resource first. In these situations it is beneficial to exploit symmetry where it exists and watch out for the (few) violations, rather than to ignore it altogether. We also demonstrate how symmetry can help us solve a practically significant instance of parameterized verification of system families. This technique traditionally attempts to prove properties about systems independently of the size parameter, but requires models of a special structure. We show that by restricting the parameter to a finite range we can solve this problem efficiently, can do so without any conditions on the models' structure, and can take advantage of symmetry in the individual systems of the family if it exists.
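The symmetry reduction described above can be illustrated on a toy, fully symmetric protocol. The sketch below is explicit-state for readability, whereas the dissertation addresses the harder symbolic (BDD-based) setting; the protocol and process states are invented for the example. Sorting the local states of identical processes picks one representative per symmetry orbit, and exploring only representatives shrinks the reachable set substantially.

```python
# Illustrative sketch of symmetry reduction for fully symmetric systems,
# shown explicit-state for clarity (the dissertation treats the symbolic case).
# The toy mutual-exclusion protocol below is an assumption for the example.

N = 6                                        # number of replicated processes

def successors(state):
    """One process moves at a time: idle -> trying -> critical -> idle."""
    for i, loc in enumerate(state):
        if loc == "idle":
            yield state[:i] + ("trying",) + state[i+1:]
        elif loc == "trying" and "critical" not in state:
            yield state[:i] + ("critical",) + state[i+1:]
        elif loc == "critical":
            yield state[:i] + ("idle",) + state[i+1:]

def canonical(state):
    """Representative of the orbit of `state` under all process permutations."""
    return tuple(sorted(state))

def reachable(init, canonize=lambda s: s):
    seen, frontier = {canonize(init)}, [canonize(init)]
    while frontier:
        s = frontier.pop()
        for t in map(canonize, successors(s)):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

init = ("idle",) * N
full = reachable(init)                        # every interleaved global state
reduced = reachable(init, canonize=canonical) # one state per symmetry orbit
print(len(full), "states without reduction;", len(reduced), "orbit representatives")
```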
Item: Formal specification and verification of a JVM and its bytecode verifier (2006). Liu, Hanbing; Moore, J Strother, 1947-
How do we know that a bytecode-verified Java program will run safely? This dissertation addresses the question by building a precise model of the JVM and its bytecode verifier. We also built a “small” machine and its bytecode verifier to demonstrate an approach to solving this problem. We proved that any program on the small machine that has been vetted by the small bytecode verifier will run safely on the small machine. We created substantial libraries of ACL2 definitions and lemmas towards specifying and proving that the JVM safely executes verified programs. The fundamental problem is to connect the abstract execution of the bytecode verifier with the concrete execution of the JVM. These diverge in two ways: (1) the bytecode verifier executes on more abstract states, and (2) its execution of INVOKE-family and BRANCH-family instructions differs from their execution by the JVM. Our contribution was the identification of a critical “on-track” property that, despite these divergences between the bytecode verifier and the JVM, enables one to use the success of bytecode verification to predict the safety of concrete execution. The second difficulty is that the official specification describes many “procedural” aspects of the bytecode verification process. These aspects obscure the checks conducted by the bytecode verifier. We introduce an alternative bytecode verifier without such “procedural” aspects. We use the new bytecode verifier as a stepping stone for proving that the official bytecode verifier is effective. Following this methodology allowed us to prove, on our “small” machine, that executions of bytecode-verified programs never overflow the operand stack. We note that significant effort is required to extend this result from our “small” machine to the full JVM. We have formulated appropriately stronger notions of “safe” execution for programs on the full JVM. We introduced an alternative bytecode verifier. We proved the “reduction theorem” that relates the official bytecode verifier with the alternative bytecode verifier. We completed proofs of several thousand lemmas towards proving the “safe” execution of bytecode-verified programs on the full JVM. Our results are organized into supporting lemma libraries.
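The core idea, that an abstract verification-time execution predicts the safety of the concrete run, can be illustrated with a toy stack machine. The instruction set and both “machines” below are assumptions made for the example; they are not the dissertation's small machine, its ACL2 models, or the JVM.

```python
# Toy illustration of "abstract execution predicts concrete safety": a verifier
# tracks only the operand-stack height, and acceptance implies the concrete run
# never overflows or underflows the stack. PUSH/ADD/POP are invented here.

MAX_STACK = 4
EFFECT = {"PUSH": +1, "ADD": -1, "POP": -1}    # net change in stack height

def verify(program):
    """Abstract execution: track stack height only; reject under/overflow."""
    height = 0
    for op, *_ in program:
        if op == "ADD" and height < 2:
            return False                        # ADD needs two operands
        height += EFFECT[op]
        if height < 0 or height > MAX_STACK:
            return False
    return True

def run(program):
    """Concrete execution on actual values; assumes the program was verified."""
    stack = []
    for op, *arg in program:
        if op == "PUSH":
            stack.append(arg[0])
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "POP":
            stack.pop()
        assert len(stack) <= MAX_STACK          # the safety predicted by verify()
    return stack

prog = [("PUSH", 1), ("PUSH", 2), ("ADD",), ("PUSH", 3)]
print(verify(prog), run(prog))                  # True [3, 3]
```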
Item: Integration of model checking into software development processes (2004). Xie, Fei; Browne, James C.
Testing has been the dominant method for validation of software systems. As software systems become complex, conventional testing methods have become inadequate. Model checking is a powerful formal verification method. It supports systematic exploration of all states or execution paths of the system being verified. There are two major challenges in practical and scalable application of model checking to software systems: (1) the applicability of model checking to software systems and (2) the intrinsic complexity of model checking. In this dissertation, we have developed a comprehensive approach to integrating model checking into two emerging software development processes, Model-Driven Development (MDD) and Component-Based Development (CBD), as well as their combination. This approach addresses the two major challenges under the following framework: (1) bridging applicability gaps through automatic translation of software representations to directly model-checkable formal representations, (2) seamless integration of state space reduction algorithms in the translation through static analysis, and (3) scaling model checking capability and achieving state space reduction by systematically exploring compositional structures of software systems. We have integrated model checking into MDD by applying mature model checking techniques to industrial design-level software representations through automatic translation of these representations to the input formal representations of model checkers. We have developed a translation-based approach to compositional reasoning of software systems, which simplifies the proof, implementation, and application of compositional reasoning rules at the software system level by reusing the proof and implementation of existing compositional reasoning rules for directly model-checkable formal representations. We have developed an integrated state space reduction framework which systematically conducts a top-down decomposition of a large and complex software system into directly model-checkable components by exploiting domain-specific knowledge. We have designed, implemented, and applied a bottom-up approach to model checking of component-based software systems, which composes verified systems from verified components and integrates model checking into CBD. We have further scaled model checking of component-based systems by exploiting the synergy between MDD and CBD, i.e., specifying components in executable design languages, and realizing the bottom-up approach based on model checking of software designs through translation.
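A minimal sketch of the "translate, then model-check" pipeline described above: a design-level component written as guarded update rules is mechanically translated into an initial state and a successor function, which a plain reachability checker then explores. The rule format, the property, and the checker are illustrative assumptions; the dissertation targets industrial design languages and existing model checkers.

```python
# Sketch: translate a guarded-rule component description into a flat transition
# relation, then check a safety property by explicit-state reachability.
# The representation and the toy request/grant component are assumptions.

design = {
    "vars": {"req": 0, "gnt": 0},
    "rules": [
        (lambda s: s["req"] == 0,                   {"req": 1}),              # issue request
        (lambda s: s["req"] == 1 and s["gnt"] == 0, {"gnt": 1}),              # grant it
        (lambda s: s["gnt"] == 1,                   {"req": 0, "gnt": 0}),    # release
    ],
}

def translate(design):
    """Translation step: produce an initial state and a successor function."""
    init = tuple(sorted(design["vars"].items()))
    def post(state):
        s = dict(state)
        for guard, update in design["rules"]:
            if guard(s):
                yield tuple(sorted({**s, **update}.items()))
    return init, post

def check_safety(init, post, bad):
    """Return a reachable bad state if one exists, else None."""
    seen, frontier = {init}, [init]
    while frontier:
        state = frontier.pop()
        if bad(dict(state)):
            return state
        for nxt in post(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None

init, post = translate(design)
# Safety property: a grant is never outstanding without a pending request.
cex = check_safety(init, post, bad=lambda s: s["gnt"] == 1 and s["req"] == 0)
print("property holds" if cex is None else f"violated in state {dict(cex)}")
```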
Item: Sequential redundancy identification using transformation-based verification (2008-05). Mony, Hari, 1977-; Aziz, Adnan
The design of complex digital hardware is challenging and error-prone. With short design cycles and increasing complexity of designs, functional verification has become the most expensive and time-consuming aspect of the digital design process. Sequential equivalence checking (SEC) has been proposed as a verification framework to perform a true sequential check of input/output equivalence between two designs. SEC provides several benefits that can enable a faster and more efficient way to design and verify large and complex digital hardware. It can be used to prove that micro-architectural optimizations needed for design closure preserve design functionality, and thus avoid the costly and incomplete functional verification regression traditionally used for such purposes. Moreover, SEC can be used to validate sequential synthesis transformations and thereby enable design and verification at a higher level of abstraction. Use of sequential synthesis leads to shorter design cycles and can result in a significant improvement in design quality. In this dissertation, we study the problem of sequential redundancy identification to enable robust sequential equivalence checking solutions. In particular, we focus on the use of a transformation-based verification framework to synergistically leverage various transformations to simplify and decompose the large problems which arise during sequential redundancy identification, enabling an efficient and highly scalable SEC solution. We make five main contributions in this dissertation. First, we introduce a novel sequential redundancy identification framework that dramatically increases the scalability of SEC. Second, we propose the use of a flexible and synergistic set of transformation and verification algorithms for sequential redundancy identification. This more general approach enables greater speed and scalability and identifies a significantly greater degree of redundancy than previous approaches. Third, we introduce the theory and practice of transformation-based verification in the presence of constraints. Constraints are pervasively used in verification testbenches to specify environmental assumptions and to prevent illegal input scenarios. Fourth, we develop a theoretical framework, with a corresponding efficient implementation, for optimal sequential redundancy identification in the presence of constraints. Fifth, we address the scalability of transformation-based verification by proposing two new structural abstraction techniques. We also study the synergies between various transformation algorithms and propose new strategies for using these transformations to enable scalable sequential redundancy identification.

Item: Testing concurrent software systems (2006). Kilgore, Richard Brian; Chase, Craig M.
Two approaches to testing concurrent software are presented. In the first, a system is assumed to contain a deterministic computation when correct, and I describe two testing algorithms that optimally achieve coverage of a testing metric involving racing pairs of messages. In the second approach, the system model is extended to allow additional nondeterministic behavior when it is either commutative in nature or localized in its effect. I present two sets of algorithms. The first detects whether or not a system is deterministic (i.e., no race conditions affect the computation's outcome). The second identifies localized non-determinism and can be used to determine whether or not a system converges to a deterministic end.
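The notion of racing pairs of messages in the Kilgore entry can be made concrete with vector clocks: two messages delivered to the same process race if neither send happens-before the other, so a different schedule could flip their delivery order. The trace format and the detection code below are illustrative assumptions, not the dissertation's testing algorithms.

```python
# Sketch: flag racing message pairs in a message-passing trace using vector
# clocks. The trace encoding and toy example are assumptions for illustration.
from itertools import combinations

# Observed global order of events: ("send"|"recv", process, message_id)
trace = [
    ("send", "A", "m1"),
    ("send", "B", "m2"),
    ("recv", "C", "m1"),
    ("recv", "C", "m2"),
    ("send", "C", "m3"),
    ("recv", "A", "m3"),
]

def vector_clocks(trace, procs):
    """Return the vector clock at each send and the receiver of each message."""
    clocks = {p: {q: 0 for q in procs} for p in procs}
    send_clock, recv_proc = {}, {}
    for kind, p, m in trace:
        clocks[p][p] += 1
        if kind == "send":
            send_clock[m] = dict(clocks[p])
        else:  # receive: merge the clock the sender attached to the message
            clocks[p] = {q: max(clocks[p][q], send_clock[m][q]) for q in procs}
            recv_proc[m] = p
    return send_clock, recv_proc

def happens_before(c1, c2):
    return all(c1[q] <= c2[q] for q in c1) and c1 != c2

procs = {p for _, p, _ in trace}
send_clock, recv_proc = vector_clocks(trace, procs)
races = [(m1, m2) for m1, m2 in combinations(recv_proc, 2)
         if recv_proc[m1] == recv_proc[m2]
         and not happens_before(send_clock[m1], send_clock[m2])
         and not happens_before(send_clock[m2], send_clock[m1])]
print("racing message pairs:", races)    # m1 and m2 race at process C
```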
Item: Three essays on the interface of computer science, economics and information systems (2007). Hidvégi, Zoltán Tibor, 1970-; Whinston, Andrew B.
This thesis looks at three aspects related to the design of e-commerce systems, online auctions, and distributed grid computing systems. We show how formal verification techniques from computer science can be applied to ensure correctness of system design and implementation at the code level. Through an e-ticket sales example, we demonstrate that model checking can locate subtle but critical flaws that traditional control and auditing methods (e.g., penetration testing, analytical procedures) would most likely miss. Auditors should understand formal verification methods, encourage engineers to use them to create designs with a lower chance of failure, and even practice formal verification themselves in order to offer credible control and assistance for critical e-systems. Next, we study why many online auctions offer fixed buy prices, in order to understand why sellers and auctioneers voluntarily limit the surplus they can get from an auction. We show that when either the seller or the bidders are risk-averse, a properly chosen fixed permanent buy price can increase the social surplus without decreasing the expected utility of the sellers and bidders, and we characterize the unique equilibrium strategies of uniformly risk-averse buyers in a buy-price auction. In the final chapter we look at the design of a distributed grid-computing system. We show how code instrumentation can be used to generate a witness of program execution, and how this witness can be used to audit the work of self-interested grid agents. Using a trusted intermediary between grid providers and customers, the audit allows payment to be contingent on successful audit results, and it creates a verified reputation history of grid providers. We show that enabling the free trade of reputations gives agents economic incentives to perform the computations assigned, and it induces increasing effort levels as the agents' reputation increases. We show that in such a reputation market only high-type agents would have an incentive to purchase a high reputation and only low-type agents would use low reputations, and thus the market works as a natural signaling mechanism for the agents' type. (A minimal sketch of the execution-witness idea follows the final entry below.)

Item: Using theorem proving and algorithmic decision procedures for large-scale system verification (2005). Ray, Sandip; Moore, J Strother, 1947-
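Returning to the execution-witness idea in the "Three essays" entry above: a minimal sketch, assuming a simple hash-chain witness that an auditor can spot-check by re-executing selected steps. The instrumentation, the audit procedure, and all names here are illustrative assumptions, not the thesis's actual protocol.

```python
# Sketch of an execution witness for auditing a grid computation: the agent
# records a hash chain over intermediate results; the auditor re-executes a
# random sample of steps and rechecks the chain. Everything here is an
# illustrative assumption, not the thesis's instrumentation or audit design.
import hashlib
import random

def step(x):
    """The unit of work a grid agent is paid to perform (toy example)."""
    return (x * x + 1) % 1_000_003

def instrumented_run(x0, n):
    """Run n steps, recording a hash chain over the intermediate results."""
    witness, digest, x = [], b"", x0
    for _ in range(n):
        x = step(x)
        digest = hashlib.sha256(digest + str(x).encode()).digest()
        witness.append((x, digest.hex()))
    return x, witness

def audit(x0, witness, samples=5):
    """Spot-check: re-execute randomly chosen steps and recheck the chain."""
    for i in random.sample(range(len(witness)), min(samples, len(witness))):
        prev_x = x0 if i == 0 else witness[i - 1][0]
        prev_d = b"" if i == 0 else bytes.fromhex(witness[i - 1][1])
        x, d = witness[i]
        if step(prev_x) != x:
            return False                    # reported result is wrong
        if hashlib.sha256(prev_d + str(x).encode()).hexdigest() != d:
            return False                    # hash chain was tampered with
    return True

result, witness = instrumented_run(x0=7, n=100)
print(result, audit(7, witness))            # payment could be contingent on audit()
```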