Browsing by Subject "Formal methods"
Now showing 1 - 6 of 6
Item: Efficient verification of packet networks (2015-12)
Yang, Hongkun; Lam, Simon S., 1947-; Emerson, Ernest A.; Garg, Vijay K.; Gouda, Mohamed G.; Mok, Aloysius K.

Network management will benefit from automated tools based upon formal methods. In these tools, the algorithm for computing reachability is the core algorithm for verifying network properties in the data plane. This dissertation presents efficient algorithms for computing reachability and verifying network properties, both for a single network with packet filters and transformers and for interconnected networks. For computing port-to-port reachability in a network, we present a new formal method and a new tool, Atomic Predicates (AP) Verifier, which is much more time and space efficient than existing tools. Given a set of predicates representing packet filters, AP Verifier computes a set of atomic predicates, which is minimum and unique. The use of atomic predicates dramatically speeds up computation of network reachability. AP Verifier also includes algorithms to process network update events and to check compliance with network policies and properties in real time.

Packet transformers are widely used in Internet service provider networks, datacenter infrastructures, and layer-2 networks. Existing network verification tools do not scale to such networks with large numbers of different transformers. We present a new tool, AP+ Verifier, based upon a new algorithm for computing atomic predicates for networks with both packet filters and transformers. For performance evaluation, we use network datasets with different types of transformers (MPLS tunnels, IP-in-IP tunnels, and NATs). We found that AP+ Verifier is more time and space efficient than prior tools by orders of magnitude.

The Internet consists of a large collection of networks. To debug reachability problems, a network operator often asks operators of other networks for help by telephone or email.
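The atomic-predicates idea above can be illustrated with a toy sketch. This is not AP Verifier's implementation (which represents predicates as BDDs over real packet header spaces); here packets are small integers, filters are explicit sets, and each atom is a group of packets that agree on membership in every filter, so the partition is the coarsest one in which every filter is a union of whole atoms:

```python
from collections import defaultdict

def atomic_predicates(universe, predicates):
    """Group packets by their membership signature across all predicates;
    each nonempty group is one atomic predicate.  Every predicate is then
    exactly a union of atoms, so it can be stored as a small set of atom ids."""
    groups = defaultdict(set)
    for pkt in universe:
        sig = tuple(pkt in p for p in predicates)
        groups[sig].add(pkt)
    atoms = list(groups.values())
    atom_ids = [{i for i, a in enumerate(atoms) if a <= p} for p in predicates]
    return atoms, atom_ids

# Toy 4-bit "packet" space with two overlapping filters (hypothetical data).
universe = range(16)
f1 = frozenset(range(0, 8))    # e.g. one prefix match
f2 = frozenset(range(4, 12))   # an overlapping prefix match
atoms, ids = atomic_predicates(universe, [f1, f2])
```

Once filters are sets of atom ids, intersections and unions along a path become cheap operations on small integer sets, which is the source of the speedup the abstract describes.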
We present a new protocol, COVE, and an efficient data structure for automating the exchange of data-plane reachability information between networks in a business relationship. COVE is designed to improve a network's views of forward and reverse reachability, with partial deployment in mind, and it scales to very large networks in the Internet. We illustrate applications of COVE to useful network management tasks that cannot be done effectively with existing methods and tools.

Item: Efficient, mechanically-verified validation of satisfiability solvers (2015-05)
Wetzler, Nathan David; Hunt, Warren A., 1958-; Heule, Marijn J. H.; Biere, Armin; Lifschitz, Vladimir; Moore, J Strother; Ramachandran, Vijaya

Satisfiability (SAT) solvers are commonly used for a variety of applications, including hardware verification, software verification, theorem proving, debugging, and hard combinatorial problems. These applications rely on the efficiency and correctness of SAT solvers. When a problem is determined to be unsatisfiable, how can one be confident that a SAT solver has fully exhausted the search space? Traditionally, unsatisfiability results have been expressed using resolution or clausal proof systems. Resolution-based proofs contain perfect reconstruction information, but they are extremely large and difficult to emit from a solver. Clausal proofs rely on rediscovery of inferences using a limited number of techniques, which typically takes several orders of magnitude longer than the solving time. Moreover, neither of these proof systems has been able to express contemporary solving techniques such as bounded variable addition. This combination of issues has left SAT solver authors unmotivated to produce proofs of unsatisfiability. The work in this dissertation focuses on validating satisfiability solver output in the unsatisfiability case.
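The core check behind clausal proof systems is reverse unit propagation (RUP): a learned clause is accepted if propagating its negation through the accumulated clauses yields a conflict. A minimal sketch, assuming the usual integer-literal encoding of CNF (this is only the RUP fragment; the DRAT format described below additionally supports the RAT check and clause deletion, and DRAT-trim is far more optimized):

```python
def unit_propagate(clauses, assignment):
    """Apply the unit rule to a fixpoint.  `assignment` maps literals to
    truth values.  Returns None on conflict, else the extended assignment."""
    assignment = dict(assignment)
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned, satisfied = [], False
            for lit in clause:
                if assignment.get(lit):          # literal already true
                    satisfied = True
                    break
                if not assignment.get(-lit):     # not yet falsified
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:                   # clause fully falsified
                return None
            if len(unassigned) == 1:             # unit: force the literal
                lit = unassigned[0]
                assignment[lit], assignment[-lit] = True, False
                changed = True
    return assignment

def has_rup(clauses, clause):
    """A clause has RUP if propagating its negation yields a conflict."""
    assumed = {}
    for lit in clause:
        assumed[-lit], assumed[lit] = True, False
    return unit_propagate(clauses, assumed) is None

def check_rup_proof(formula, proof):
    """Each proof clause must have RUP w.r.t. the clauses accepted so far;
    a refutation must end with the empty clause."""
    clauses = list(formula)
    for clause in proof:
        if not has_rup(clauses, clause):
            return False
        clauses.append(clause)
    return bool(proof) and proof[-1] == []
```

For example, the unsatisfiable formula (a∨b)(¬a∨b)(a∨¬b)(¬a∨¬b) is refuted by the two-clause proof [b], [] under this check.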
We developed a new clausal proof format called DRAT that facilitates compact proofs which are easier to emit and capable of expressing all contemporary solving and preprocessing techniques. Furthermore, we implemented a validation utility called DRAT-trim that can validate proofs in time comparable to the discovery time. The DRAT format has seen widespread adoption in the SAT community, and the DRAT-trim utility was used to validate the results of the 2014 SAT Competition. DRAT-trim uses many advanced techniques to realize its performance gains, so why should its results be trusted? Mechanical verification enables users to model programs and algorithms and then prove their correctness with a proof assistant such as ACL2. We designed a new modeling technique for ACL2 that combines efficient model execution with an agile and convenient theory. Finally, we used this new technique to construct a fast, mechanically-verified validation tool for proofs of unsatisfiability. This research allows SAT solver authors and users to have greater confidence in their results and applications by ensuring the validity of unsatisfiability results.

Item: Theory and techniques for synthesizing efficient breadth-first search algorithms (2012-08)
Nedunuri, Srinivas; Cook, William Randall; Batory, Don; Baxter, Ira; Pingali, Keshav; Smith, Douglas R.

The development of efficient algorithms to solve a wide variety of combinatorial and planning problems is a significant achievement in computer science. Traditionally, each algorithm is developed individually, based on flashes of insight or experience, and then (optionally) verified for correctness. While computer science has formalized the analysis and verification of algorithms, the process of algorithm development remains largely ad hoc. This ad hoc nature is especially limiting when developing algorithms for a family of related problems.
Guided program synthesis is an existing methodology for the systematic development of algorithms. Specific algorithms are viewed as instances of very general algorithm schemas. For example, the Global Search schema generalizes traditional branch-and-bound search and includes both depth-first and breadth-first strategies. Algorithm development proceeds by systematically specializing an algorithm schema with problem-specific constraints, creating efficient algorithms that are correct by construction and obviating the need for a separate verification step. Guided program synthesis has been applied to a wide range of algorithms, but there is still no systematic process for the synthesis of large search programs such as AI planners. Our first contribution is the specialization of Global Search to a class we call Efficient Breadth-First Search (EBFS), which incorporates dominance relations to constrain the size of the search frontier to be polynomially bounded. Dominance relations allow two search spaces to be compared to determine whether one dominates the other, allowing the dominated space to be eliminated from the search. We further show that EBFS, with the breadth bound set to one, is an effective characterization of greedy algorithms. Surprisingly, the resulting characterization is more general than the well-known characterization of greedy algorithms, namely the Greedy Algorithm parametrized over the algebraic structures called greedoids. Our second contribution is a methodology for systematically deriving dominance relations, not just for individual problems but for families of related problems. The techniques are illustrated on numerous well-known problems. Combining this methodology with the program schema for EBFS yields efficient greedy algorithms. Our third contribution is the application of this theory and methodology to the practical problem of synthesizing fast planners.
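A hand-written toy instance of the idea (not the dissertation's synthesis framework, which derives such code from schemas): breadth-first search over 0/1 knapsack decisions, where a state (weight, value) is dominated if another state is no heavier and no less valuable, and an optional breadth bound caps the frontier. With the bound set to one the search degenerates to a greedy algorithm, which here is suboptimal; the item data and the value-based frontier selection are illustrative choices of mine:

```python
def bounded_bfs_knapsack(items, capacity, bound=None):
    """Level-by-level search: one level per item decision.
    Dominated states are pruned; `bound` caps the frontier size."""
    frontier = [(0, 0)]                       # states are (weight, value)
    for w, v in items:
        # expand one level: either skip or take the current item
        level = set(frontier)
        for cw, cv in frontier:
            if cw + w <= capacity:
                level.add((cw + w, cv + v))
        # dominance pruning: keep only states not beaten on both coordinates
        pruned = []
        for s in sorted(level, key=lambda s: (s[0], -s[1])):
            if not pruned or s[1] > pruned[-1][1]:
                pruned.append(s)
        if bound is not None:                 # breadth bound (greedy if 1)
            pruned.sort(key=lambda s: s[1], reverse=True)
            pruned = pruned[:bound]
        frontier = pruned
    return max(v for _, v in frontier)
```

With items [(2,3), (3,4), (4,5), (5,8)] and capacity 5, the unbounded search finds the optimum 8, while the greedy variant (bound=1) commits early and reaches only 7, illustrating why the breadth bound must be justified by a dominance argument rather than assumed.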
Nearly all the state-of-the-art planners in the planning literature are heuristic domain-independent planners. They generally do not scale well, and their space requirements can be prohibitive. Planners such as TLPlan that incorporate domain-specific information in the form of control rules are orders of magnitude faster. However, devising the control rules is a labor-intensive task that requires domain expertise and insight, and the correctness of the rules is not guaranteed. We introduce a method by which domain-specific dominance relations can be systematically derived and then turned into control rules, and we demonstrate the method on a planning problem (Logistics).

Item: Tools for semi-formal proofs (2008-08)
Nachiappan, Nachiappan V.; Rushton, J. Nelson; Sinzinger, Eric D.

We propose a semi-formal proof structure in which the overall arrangement of a proof is a formal data structure, while individual propositions within the structure are expressed either formally or informally. We hypothesize that such proofs are easier to check accurately than standard informal prose proofs. Our framework is conceptually simple, and proofs written in it look similar to informal proofs. We also give the definitions and the main theorem on the correctness of our structure-checking algorithm; the theorem states, essentially, that the proofs accepted by our algorithm are indeed correct. The thesis also includes an empirical study comparing our proposed semi-formal proof structure with traditionally written prose proofs.
If our hypothesis is correct, the system should be easier for students to learn and should yield intuitions that can be more effectively leveraged toward reading actual proofs in the computer science literature.

Item: A verified framework for symbolic execution in the ACL2 theorem prover (2010-12)
Swords, Sol Otis; Hunt, Warren A., 1958-; Baumgartner, Jason R.; Boyer, Robert S.; Cook, William; Moore, J S.

Mechanized theorem proving is a promising means of formally establishing facts about complex systems. However, in applying theorem proving methodologies to industrial-scale hardware and software systems, a large amount of user interaction is required to prove useful properties. In practice, the human user tasked with such a verification must gain a deep understanding of the system to be verified and prove numerous lemmas in order to allow the theorem proving program to approach a proof of the desired fact. Furthermore, proofs that fail during this process are a source of confusion: a proof may fail either because the conjecture was false or because the prover required more help from the user to reach the desired conclusion. We have implemented a symbolic execution framework inside the ACL2 theorem prover to help address these issues on certain problem domains. Our framework introduces a proof strategy that applies bit-level symbolic execution using BDDs to finite-domain problems. This proof strategy is a fully verified decision procedure for such problems, and on many useful problem domains its capacity vastly exceeds that of exhaustive testing. Our framework also produces counterexamples for conjectures that it determines to be false. The framework seeks to reduce the amount of user interaction needed to prove theorems about industrial-scale hardware and software systems. By increasing the automation available in the prover, we allow the user to complete useful proofs while understanding less of the system's detailed implementation.
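The flavor of a finite-domain decision procedure that also yields counterexamples can be sketched in miniature. This is emphatically not the ACL2 framework: boolean functions are represented here as explicit sets of satisfying assignments (a crude stand-in for BDDs), and the names `Sym`, `var`, and `prove` are hypothetical:

```python
class Sym:
    """A boolean function over n variables, represented as its set of
    satisfying assignments (minterm indices)."""
    def __init__(self, n, sat):
        self.n, self.sat = n, frozenset(sat)
    def __and__(self, o): return Sym(self.n, self.sat & o.sat)
    def __or__(self, o):  return Sym(self.n, self.sat | o.sat)
    def __invert__(self): return Sym(self.n, set(range(2 ** self.n)) - self.sat)
    def __xor__(self, o): return (self & ~o) | (~self & o)
    def iff(self, o):     return ~(self ^ o)

def var(i, n):
    """The i-th input variable as a symbolic boolean value."""
    return Sym(n, {m for m in range(2 ** n) if (m >> i) & 1})

def prove(conjecture):
    """Decide validity over the finite domain; on failure, return a
    counterexample assignment (one boolean per variable)."""
    missing = set(range(2 ** conjecture.n)) - conjecture.sat
    if not missing:
        return True, None
    m = min(missing)
    return False, [bool((m >> i) & 1) for i in range(conjecture.n)]

# "Spec" and a buggy "implementation" of 3-input majority.
a, b, c = (var(i, 3) for i in range(3))
spec = (a & b) | (b & c) | (a & c)
buggy = (a & b) | (a & c)            # drops the b & c term
ok, cex = prove(spec.iff(buggy))     # fails, with a concrete witness
```

The failed conjecture comes back with a concrete input assignment on which spec and implementation disagree, which is the debugging aid the paragraph above describes.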
Furthermore, by producing counterexamples for falsified conjectures, our framework reduces the time the user spends trying to determine why a proof failed.

Item: Weak-memory local reasoning (2012-12)
Wehrman, Ian Anthony; Hunt, Warren A., 1958-; Moore, J Strother, 1947-; Berdine, Josh; Hoare, C. A. R.; Emerson, E. Allen; Fussell, Donald S

Program logics are formal logics designed to facilitate specification and correctness reasoning for software programs. Separation logic, a recent program logic for C-like programs, has found great success in automated verification, due in large part to its embodiment of the principle of local reasoning, in which specifications and proofs are restricted to just those resources (variables, shared memory addresses, locks, and so on) used by the program during execution. Existing program logics make the strong assumption that all threads agree on the values of shared memory at all times. But on modern computer architectures, this assumption is unsound for certain shared-memory concurrent programs: namely, those with races. Races are typically considered errors, but some programs, such as lock-free concurrent data structures, are necessarily racy. Verification of these difficult programs must take into account the weaker memory models provided by the architectures on which they execute. This dissertation project seeks to explicate a local reasoning principle for x86-like architectures. The principle is demonstrated with a new program logic for concurrent C-like programs that incorporates ideas from separation logic. The goal of the logic is to allow verification of racy programs, such as concurrent data structures, for which no general-purpose high-level verification techniques exist.