Accepted Papers

Session 109A: Software Verification

Jean-Christophe Filliatre, Léon Gondelman and Andrei Paskevich
The Spirit of Ghost Code

In the context of deductive program verification, ghost code is part of the program that is added for the purpose of specification. Ghost code must not interfere with regular code, in the sense that it can be erased without any observable difference in the program outcome. In particular, ghost data cannot participate in regular computations, and ghost code cannot mutate regular data or diverge. The idea has existed in the folklore since the early notion of auxiliary variables and is implemented in many state-of-the-art program verification tools. However, a rigorous definition and treatment of ghost code is surprisingly subtle, and few formalizations exist.

In this article, we describe a simple ML-style programming language with mutable state and ghost code. Non-interference is ensured by a type system with effects, which allows, notably, the same data types and functions to be used in both regular and ghost code. We define the procedure of ghost code erasure and we prove its safety using bisimulation. A similar type system, with numerous extensions which we briefly discuss, is implemented in the program verification environment Why3.
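
As a minimal illustration of this non-interference discipline (Python is used here for concreteness; the paper's language is ML-style, and the function and variable names are invented), a ghost variable may feed the specification but never the regular computation:

    def sum_list(xs):
        calls = 0                       # ghost: exists only to state the spec
        total = 0                       # regular state
        for x in xs:
            calls += 1                  # ghost code mutates only ghost data
            total += x                  # regular code never reads `calls`
            assert calls <= len(xs)     # specification over ghost data
        return total                    # erasing the ghost lines leaves this unchanged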

video
Anvesh Komuravelli, Arie Gurfinkel and Sagar Chaki
SMT-based Model Checking for Recursive Programs

We present an SMT-based symbolic model checking algorithm for safety verification of recursive programs. The algorithm is modular and analyzes procedures individually. Unlike other SMT-based approaches, it maintains both “over-” and “under-approximations” of procedure summaries. Under-approximations are used to propagate information over procedure calls. Over-approximations are used to block infeasible counterexamples and detect convergence. We show that for programs and properties over a decidable theory, the algorithm is guaranteed to find a counterexample, if one exists. However, efficiency depends on an oracle for quantifier elimination (QE). For Boolean Programs, the algorithm is a polynomial decision procedure, matching the worst-case bounds of the best BDD-based algorithms. For Linear Arithmetic (integers and rationals), we give an efficient instantiation of the algorithm by applying QE “lazily”. We use existing interpolation techniques to over-approximate QE and introduce “Model-Based Projection” to under-approximate QE. Empirical evaluation on SV-COMP benchmarks shows that our algorithm improves significantly on the state of the art.
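
To illustrate the under-approximation side (a sketch using Z3's Python API, not the paper's implementation): the simplest model-based projection of Exists x. phi substitutes a model's value for x, yielding a quantifier-free formula that implies the existential:

    from z3 import Ints, Solver, substitute, IntVal, sat

    x, y = Ints('x y')
    phi = x + y > 3                      # we want to project x out of: Exists x. phi
    s = Solver()
    s.add(phi)
    assert s.check() == sat
    m = s.model()
    v = m.eval(x, model_completion=True).as_long()
    mbp = substitute(phi, (x, IntVal(v)))   # quantifier-free, implies Exists x. phi
    print(mbp)                              # e.g. 4 + y > 3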

video
Shachar Itzhaky, Nikolaj Bjorner, Thomas Reps, Mooly Sagiv and Aditya Thakur
Property-Directed Shape Analysis

This paper addresses the problem of automatically generating quantified invariants for programs that manipulate singly and doubly linked-list data structures. Our algorithm is property-directed — i.e., its choices are driven by the properties to be proven. The algorithm is able to establish that a correct program has no memory-safety violations — i.e., there are no null-pointer dereferences, no memory leaks, etc. — and that data-structure invariants are preserved. For programs with errors, the algorithm produces concrete counterexamples.

More broadly, the paper describes how to integrate IC3/PDR with full predicate abstraction. The analysis method is complete in the following sense: if an inductive invariant that proves that the program satisfies a given property is expressible as a Boolean combination of a given set of predicates, then the analysis will find such an invariant. To the best of our knowledge, this method represents the first shape-analysis algorithm that is capable of (i) reporting concrete counterexamples, or alternatively (ii) establishing that the predicates in use are not capable of proving the property in question.

video
Quang Loc Le, Cristian Gherghina, Shengchao Qin and Wei-Ngan Chin
Shape Analysis via Second-Order Bi-Abduction

We present a new modular shape analysis that can synthesize heap memory specifications on a per-method basis. We rely on a second-order bi-abduction mechanism that can give interpretations to unknown shape predicates. There are several novel features in our shape analysis. Firstly, it is grounded on second-order bi-abduction. Secondly, we distinguish unknown pre-predicates in preconditions from unknown post-predicates in postconditions, since the former may be strengthened while the latter may be weakened. Thirdly, we provide a new heap guard mechanism to support more precise preconditions for heap specification. Lastly, we formalise a set of derivation and normalization rules to give concise definitions for unknown predicates. Our approach has been proven sound and is implemented on top of an existing automated verification system. We show its versatility in synthesizing a wide range of intricate shape specifications.

video
Pranav Garg, Christof Loding, P. Madhusudan and Daniel Neider
ICE: A Robust Learning Framework for Synthesizing Invariants

We introduce ICE learning, a new paradigm for synthesizing invariants using black-box learning from examples, counterexamples, and implications, and show that it allows building honest teachers and convergent mechanisms for invariant synthesis. We observe that existing algorithms for black-box abstract interpretation can be interpreted as monotonic ICE-learning algorithms. We then develop two classes of new non-monotonic ICE-learning domains, one for learning numerical invariants for scalar variables and one for quantified invariants for arrays and dynamic lists, and establish convergence results for them. We implement these ICE-learning algorithms in a prototype verifier and show that the robustness they bring is practical and effective.
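
A toy rendering of the ICE interaction (a sketch under invented assumptions: the loop is x := 0; while * do x := x + 2 with bad state x = 1, the candidate pool is fixed, and Z3 plays the teacher; the paper's learners are far more general):

    from z3 import Int, Solver, Not, sat

    x, xp = Int('x'), Int('xp')
    candidates = [lambda v: v >= 10, lambda v: v >= 0, lambda v: v % 2 == 0]

    def teacher(inv):
        s = Solver(); s.add(x == 0, Not(inv(x)))                 # initiation
        if s.check() == sat:
            return 'example', s.model()[x].as_long()
        s = Solver(); s.add(inv(x), x == 1)                      # safety
        if s.check() == sat:
            return 'counterexample', s.model()[x].as_long()
        s = Solver(); s.add(inv(x), xp == x + 2, Not(inv(xp)))   # consecution
        if s.check() == sat:
            m = s.model()
            return 'implication', (m[x].as_long(), m[xp].as_long())
        return 'ok', None

    E, C, I = set(), set(), set()       # examples, counterexamples, implications
    def consistent(inv):
        return (all(inv(e) for e in E) and not any(inv(c) for c in C)
                and all(inv(b) for a, b in I if inv(a)))

    while True:
        inv = next(c for c in candidates if consistent(c))   # assumes the pool suffices
        kind, data = teacher(inv)
        if kind == 'ok':
            print('inductive invariant learned')
            break
        {'example': E, 'counterexample': C, 'implication': I}[kind].add(data)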

video
Rahul Sharma and Alex Aiken
From Invariant Checking to Invariant Inference Using Randomized Search

We describe a general framework C2I for generating an invariant inference procedure from an invariant checking procedure. Given a checker and a language of possible invariants, C2I generates an inference procedure that iteratively invokes two phases. The search phase uses randomized search to discover candidate invariants and the validate phase uses the checker to either prove or refute that the candidate is an actual invariant. To demonstrate the applicability of C2I, we use it to generate inference procedures that prove safety properties of numerical programs, prove non-termination of numerical programs, prove functional specifications of array manipulating programs, prove safety properties of string manipulating programs, and prove functional specifications of heap manipulating programs that use linked list data structures. We show that performance is competitive with state-of-the-art invariant inference techniques for more specialized domains. To the best of our knowledge, this work is the first to demonstrate such a generally applicable invariant inference technique.
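
The search/validate split might look as follows on a toy loop (a hedged sketch: the program x := 0; while x < 100 do x := x + 1 with assertion x = 100, interval candidates, and a naive random walk stand in for C2I's templates and randomized search; Z3 serves as the checker):

    import random
    from z3 import Int, Solver, Implies, And, Not, unsat

    x, xp = Int('x'), Int('xp')

    def valid(f):                       # the invariant checker
        s = Solver(); s.add(Not(f))
        return s.check() == unsat

    def checks(a, b):                   # validate phase for candidate a <= x <= b
        inv = And(a <= x, x <= b)
        invp = And(a <= xp, xp <= b)
        return (valid(Implies(x == 0, inv))                              # initiation
                and valid(Implies(And(inv, x < 100, xp == x + 1), invp)) # consecution
                and valid(Implies(And(inv, Not(x < 100)), x == 100)))    # safety

    a, b = random.randint(-5, 5), random.randint(95, 105)
    while not checks(a, b):             # search phase: a random walk over candidates;
        a += random.choice([-1, 0, 1])  # terminates with probability 1 on this toy
        b += random.choice([-1, 0, 1])  # instance, e.g. at a <= 0, b = 100
    print('invariant: %d <= x <= %d' % (a, b))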

video
Zvonimir Rakamaric and Michael Emmi
SMACK: Decoupling Source Language Details from Verifier Implementations

A major obstacle to putting software verification research into practice is the high cost of developing the infrastructure enabling the application of verification algorithms to actual production code, in all of its complexity. Handling an entire programming language is a huge endeavor which few researchers are willing to undertake, and even fewer could invest the effort to implement their verification algorithms for many source languages. To decouple the implementations of verification algorithms from the details of source languages, and enable rapid prototyping on production code, we have developed SMACK. At its core, SMACK is a translator from the LLVM compiler’s popular intermediate representation (IR) into the Boogie intermediate verification language (IVL). Sourcing LLVM IR exploits an increasing number of compiler frontends, optimizations, and analyses. Targeting Boogie exploits a canonical platform which simplifies the implementation of algorithms for verification, model checking, and abstract interpretation. Our initial experience verifying C language programs is encouraging: SMACK is competitive in SV-COMP benchmarks, is able to translate large programs (100 KLOC), and is used in several verification research prototypes.

video

Session 113A: Security

Hassan Eldib and Chao Wang
Synthesis of Masking Countermeasures against Side Channel Attacks

When mathematicians and computer scientists design cryptographic algorithms, they assume that sensitive information can be manipulated in a closed computing environment. Unfortunately, real computers and microchips leak information about the software code they execute, e.g. through power dissipation and electromagnetic radiation. Such side channel leaks have been exploited in many commercial systems in the embedded space. In this paper, we propose a counterexample-guided inductive synthesis method for generating software countermeasures for implementations of cryptographic algorithms. Our new method guarantees that the resulting software code is “perfectly masked”, that is, all intermediate computation results are statistically independent of the secret data. We have implemented our method based on the LLVM compiler and the Yices SMT solver. Our experiments on a set of cryptographic software code show that the new method is both effective in eliminating side channel leaks and scalable for practical use.
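
For a flavour of what “perfectly masked” means (a toy illustration, not the paper's synthesis algorithm): first-order Boolean masking replaces a direct use of a secret bit k by the share k ^ r for a fresh uniform random bit r, and enumerating the joint distribution shows the share is statistically independent of k while an unmasked value is not:

    def distribution(f, k):
        counts = {0: 0, 1: 0}
        for r in (0, 1):                # r: a fresh, uniformly random mask bit
            counts[f(k, r)] += 1
        return counts

    unmasked = lambda k, r: k           # distribution depends on the secret: leaks
    masked = lambda k, r: k ^ r         # uniform for either secret: perfectly masked
    for k in (0, 1):
        print(k, distribution(unmasked, k), distribution(masked, k))
    # 0 {0: 2, 1: 0} {0: 1, 1: 1}
    # 1 {0: 0, 1: 2} {0: 1, 1: 1}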

video
Omar Chowdhury, Limin Jia, Deepak Garg and Anupam Datta
Temporal Mode-Checking for Runtime Monitoring of Privacy Policies

Fragments of first-order temporal logic are useful for representing many practical privacy and security policies. Past work has proposed two strategies for checking event trace (audit log) compliance with policies: online monitoring and offline audit. Although online monitoring is space- and time-efficient, existing techniques insist that satisfying instances of all subformulas of the policy be amenable to caching, which limits expressiveness when some subformulas have infinite support. In contrast, offline audit is brute-force and can handle such policies but is not as efficient. This paper proposes a new online monitoring algorithm that caches satisfying instances when it can, and falls back to the brute-force search when it cannot. Our key technical insight is a new static check, called the temporal mode check, which determines subformulas for which such caching is feasible and those for which it is not and, hence, guides our algorithm. We prove the correctness of our algorithm, and evaluate its performance over synthetic traces and realistic policies.

video
Parosh Aziz Abdulla, Mohamed Faouzi Atig, Yu-Fang Chen, Lukas Holik, Ahmed Rezine, Philipp Reummer and Jari Stenman
String Constraints for Verification

We consider the problem of verifying programs operating on strings, in combination with other datatypes like arithmetic or heap. As a tool to check verification conditions, analyse feasibility of execution paths, or build finite abstractions, we present a new decision procedure for a rich logic including strings. Main applications of our framework include, in particular, detection of vulnerabilities of web applications such as SQL injections and cross-site scripting.

video
Sudeep Kanav, Peter Lammich and Andrei Popescu
A Conference Management System with Verified Document Confidentiality

We present a case study in verified security for realistic systems: the implementation of a conference management system, whose functional kernel is faithfully represented in the Isabelle theorem prover, where we specify and verify confidentiality properties. The various theoretical and practical challenges posed by this development led to a novel security model and verification method generally applicable to systems describable as input-output automata.

video
Anna Lisa Ferrara, P. Madhusudan, Truc L. Nguyen and Gennaro Parlato
VAC – Verifier of Administrative Role-based Access Control Policies

In this paper we present VAC, an automatic tool for verifying security properties of administrative Role-based Access Control (RBAC). RBAC has become an increasingly popular access control model, particularly suitable for large organizations, and it is implemented in many software systems. Automatic security analysis of administrative RBAC systems is recognized as an important problem, as an analysis tool can help designers check whether their policies meet expected security properties. VAC converts administrative RBAC policies to imperative programs that simulate the policies both precisely and abstractly, and supports several automatic verification back-ends to analyze the resulting programs. In this paper, we describe the architecture of VAC and overview the analysis techniques implemented in the tool. We also report on experiments with several benchmarks from the literature.

video

Session 116A: Automata

Javier Esparza and Jan Kretinsky
From LTL to Deterministic Automata: A Safraless Compositional Approach

We present a new algorithm to construct a deterministic Rabin automaton for an LTL formula $\varphi$. The automaton is the product of a master automaton and an array of slave automata, one for each $\G$-subformula of $\varphi$. The slave automaton for $\G\psi$ is in charge of recognizing whether $\F\G\psi$ holds. As opposed to standard determinization procedures, the states of all our automata have a clear logical structure, which allows various optimizations to be applied. Our construction subsumes former algorithms for fragments of LTL. Experimental results show improvement in the sizes of the resulting automata compared to existing methods.

video
Loris D’Antoni and Rajeev Alur
Symbolic Visibly Pushdown Automata

Nested words model data with both linear and hierarchical structure such as XML documents and program traces. A nested word is a sequence of positions together with a matching relation that connects open tags (calls) with the corresponding close tags (returns). Visibly Pushdown Automata are a restricted class of pushdown automata that process nested words, and have many appealing theoretical properties such as closure under Boolean operations and decidable equivalence. However, like other classical automata models, they are limited to finite alphabets. This limitation is restrictive for practical applications to both XML processing and program trace analysis, where values for individual symbols are usually drawn from an unbounded domain. With this motivation, we introduce Symbolic Visibly Pushdown Automata (SVPA) as an executable model for nested words over infinite alphabets. In this model, transitions are labeled with first-order predicates over the input alphabet, analogous to symbolic automata processing strings over infinite alphabets. A key novelty of SVPAs is the use of binary predicates to model relations between open and close tags in a nested word. We show how SVPAs still enjoy the decidability and closure properties of Visibly Pushdown Automata. We use SVPAs to model XML validation policies and program properties that are not naturally expressible with previous formalisms and provide experimental results on the performance of our implementation.
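
A minimal interpreter conveys the flavour (hypothetical encoding: a nested word as a list of (kind, symbol) pairs over strings; the unary call predicate and the binary return predicate, which here requires the close tag to equal the matching open tag, are invented examples):

    def run_svpa(word):
        stack = []
        for kind, sym in word:                  # kind is 'call', 'ret', or 'int'
            if kind == 'call':
                if not sym.isalnum():           # unary predicate on the open tag
                    return False
                stack.append(sym)               # push the concrete symbol
            elif kind == 'ret':
                if not stack:
                    return False
                top = stack.pop()
                if sym != top:                  # binary predicate: close matches open
                    return False
            # 'int' symbols could be checked against a unary predicate here
        return not stack                        # accept on empty stack

    print(run_svpa([('call', 'a'), ('call', 'b'), ('ret', 'b'), ('ret', 'a')]))  # True
    print(run_svpa([('call', 'a'), ('ret', 'b')]))                               # False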

video

Session 127C: Model Checking and Testing

Ethel Bardsley, Adam Betts, Nathan Chong, Peter Collingbourne, Pantazis Deligiannis, Alastair Donaldson, Jeroen Ketema, Dan Liew and Shaz Qadeer
Engineering a Static Verification Tool for GPU Kernels

We report on practical experiences over the last 2.5 years related to the engineering of GPUVerify, a static verification tool for GPU kernels written in OpenCL and CUDA. We have published papers on the top-level research ideas underpinning GPUVerify, but embedding these ideas in a tool that can be applied to real-world examples with a reasonable degree of efficiency and automation has required significant optimisation effort and a number of important engineering decisions. We describe these decisions and optimisations, and demonstrate their effectiveness using experimental data gathered for a set of more than 500 benchmarks, encompassing all the major publicly available benchmark suites and a commercial benchmark suite. Our experiments plot the progress of GPUVerify, starting from a sound concept that did not scale well in practice, through a series of improvements and optimisations, to a fully functional and relatively efficient analysis tool. We also report on our efforts to encourage uptake of the tool by industry through features to reduce the rate of false positives and provide clear error messages when verification fails. Finally, we describe our engagement with industry to promote usage of the tool beyond academia and the fruitful feedback this has brought. Our hope is that this engineering and experience report will serve the verification community by helping to inform future tooling efforts.

video
Kenneth McMillan
Lazy Annotation Revisited

Lazy Annotation is a method of software model checking that performs a backtracking search for a symbolic counterexample. When the search backtracks, the program is annotated with a learned fact that constrains future search. In this sense, the method is closely analogous to conflict-driven clause learning in Boolean satisfiability solvers. In this paper, we develop several improvements to the basic lazy annotation approach. These include a decision heuristic, a form of non-chronological backtracking, and an adaptation to large-block encodings that exploits the power of modern satisfiability modulo theories solvers.

The resulting algorithm is compared both theoretically and experimentally to two approaches that rely on similar principles but use different learning strategies: unfolding-based bounded model checking and property-driven reachability. The lazy annotation approach is seen to be significantly more robust over a collection of benchmarks from the SV-COMP 2013 software model checking competition and the Microsoft Static Driver Verifier regression suite.

video
Yakir Vizel and Arie Gurfinkel
Interpolating Property Directed Reachability

Current SAT-based Model Checking is based on two major approaches: Interpolation-based (IMC) (global, with unrollings) and Property Directed Reachability/IC3 (PDR) (local, without unrollings). IMC generates candidate invariants using interpolation over an unrolling of a system, without putting any restrictions on the SAT-solver’s search. PDR generates candidate invariants by a local search over a single instantiation of the transition relation, effectively guiding the SAT solver’s search. The two techniques are considered to be orthogonal and have different strengths and limitations. In this paper, we present a new technique, called AVY, that effectively combines the key insights of the two approaches. Like IMC, it uses unrollings and interpolants to construct an initial candidate invariant, and, like PDR, it uses local inductive generalization to keep the invariants in compact clausal form. On the one hand, AVY is an incremental IMC extended with a local search for CNF interpolants. On the other, it is PDR extended with a global search for bounded counterexamples. We implemented the technique using ABC and have evaluated it on the HWMCC’12 and HWMCC’13 benchmark suites. Our results show that the prototype significantly outperforms PDR and McMillan’s interpolation algorithm (as implemented in ABC) on a large number of test cases.

video
Jesse Bingham and Joe Leslie-Hurd
Verifying Relative Error Bounds using Symbolic Simulation

In this paper we consider the problem of formally verifying hardware that is specified to compute reciprocal, reciprocal square root, and power-of-two functions on floating point numbers to within a given relative error. Such specifications differ from the common case in which any given input is specified to have exactly one correct output. Our approach is based on symbolic simulation with binary decision diagrams, and involves two distinct steps. First, we prove a lemma that reduces the relative error specification to several inequalities that involve reasoning about natural numbers only. The most complex of these inequalities asserts that the product of several naturals is less-than/greater-than another natural. Second, we invoke one of several customized algorithms that decides the inequality, without performing the expensive symbolic multiplications directly. We demonstrate the effectiveness of our approach on a next-generation Intel processor design and report encouraging time and space metrics for these proofs.

video
Milos Gligoric, Rupak Majumdar, Rohan Sharma, Lamyaa Eloussi and Darko Marinov
Regression Test Selection for Distributed Software Histories

Regression test selection techniques follow the software development process and pick the subset of only those tests whose behavior may be affected by the latest changes in the code. By focusing on a small subset of all tests, the testing process runs faster and, therefore, can be more tightly integrated into the development process. Existing algorithms for regression test selection assume a development model in which changes to the code happen in a linear sequence. These algorithms consider two versions of the software, related by a single change, at a time, and compute the set of tests affected by this change.

Unfortunately, modern software development processes using distributed version control systems are more complex. Software version histories are more realistically modeled as directed graphs of versions; in addition to the previous case of a single change, two versions can be related by other commands available in a version control system, e.g., branch, merge, rebase, cherry-pick, etc. We describe the first regression test selection algorithm for software developed using modern distributed version control systems. By modeling different branch or merge operations directly in our algorithm, we can compute safe test sets that are orders of magnitude smaller than naive extensions of previous algorithms (e.g., that take the union of selected tests at every merge point).

We performed an extensive evaluation of test selection on existing software histories of large popular open-source projects. Our results are extremely positive: our framework obtained 5x speedup (in terms of the number of tests executed) over naive extensions of the existing algorithms.

video
Anton Wijs, Joost-Pieter Katoen and Dragan Bosnacki
GPU-Based Graph Decomposition into Strongly Connected and Maximal End Components

This paper presents parallel algorithms for component decomposition of graph structures on General Purpose Graphics Processing Units (GPUs). In particular, we consider the problem of decomposing sparse graphs into strongly connected components, and decomposing stochastic games (such as MDPs) into maximal end components. These problems are key ingredients of many (probabilistic) model-checking algorithms. We explain the main rationales behind our GPU algorithms, and show a significant speed-up over the sequential counterparts in several case studies.
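
The sequential skeleton that such GPU algorithms parallelize is the classical forward-backward (FB) decomposition; the sketch below (invented toy graph, recursion instead of data-parallel kernels) shows how one pivot's forward and backward reachable sets isolate an SCC and split the remainder into independent subproblems:

    def reach(vertices, adj, src):
        # vertices: the current subgraph; adj: successor or predecessor map
        seen, stack = {src}, [src]
        while stack:
            for w in adj[stack.pop()]:
                if w in vertices and w not in seen:
                    seen.add(w); stack.append(w)
        return seen

    def fwbw(vertices, succ, pred, out):
        if not vertices:
            return
        pivot = next(iter(vertices))
        fw = reach(vertices, succ, pivot)       # forward closure of the pivot
        bw = reach(vertices, pred, pivot)       # backward closure of the pivot
        scc = fw & bw                           # their intersection is one SCC
        out.append(scc)
        for part in (fw - scc, bw - scc, vertices - fw - bw):
            fwbw(part, succ, pred, out)         # three independent subproblems

    succ = {1: [2], 2: [3], 3: [1, 4], 4: []}
    pred = {1: [3], 2: [1], 3: [2], 4: [3]}
    out = []
    fwbw({1, 2, 3, 4}, succ, pred, out)
    print(out)                                  # [{1, 2, 3}, {4}]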

video
Dirk Beyer, Georg Dresler and Philipp Wendler
Software Verification in the Google App-Engine Cloud

Software verification often requires a large amount of computing resources. In the last years, cloud services emerged as an inexpensive, flexible, and energy-efficient source of computing power. We have investigated if such cloud resources can be used effectively for verification. We chose the platform-as-a-service offer Google App Engine and ported the open-source verification framework CPAchecker to it. We provide our new verification service as a web front-end for users who want to solve single verification tasks (tutorial usage), and an API for integrating the service into existing verification infrastructures (massively parallel bulk usage). We experimentally evaluate the effectiveness of this service and show that it can be successfully used to offload verification work to the cloud, considerably sparing local verification resources.

video
Roberto Cavada, Alessandro Cimatti, Michele Dorigatti, Alberto Griggio, Alessandro Mariotti, Andrea Micheli, Sergio Mover, Marco Roveri and Stefano Tonetta
The nuXmv Symbolic Model Checker

This paper describes the nuXmv symbolic model checker for finite- and infinite-state synchronous transition systems. nuXmv is the evolution of the NuSMV open source model checker. It builds on and extends NuSMV along two main directions. For finite-state systems it complements the basic verification techniques of NuSMV with state-of-the-art verification algorithms. For infinite-state systems, it extends the NuSMV language with new data types, namely Integers and Reals, and it provides advanced SMT-based model checking techniques.

Besides extended functionalities, nuXmv has been optimized in terms of performance to be competitive with the state of the art. nuXmv has been used in several industrial projects as verification back-end, and it is the basis for several extensions to cope with requirements analysis, contract based design, model checking of hybrid systems, safety assessment, and software model checking.

video

Session 129C: Biology and Hybrid Systems

Hillel Kugler, Nicola Paoletti, Boyan Yordanov, Youssef Hamadi and Christoph M. Wintersteiger
Analyzing and Synthesizing Genomic Logic Functions

Deciphering the developmental program of an embryo is a fundamental question in biology. Landmark papers by Peter et al. [Peter and Davidson, Nature, 2011; Peter, Faure and Davidson, PNAS, 2012] have recently shown how computational models of gene regulatory networks provide system-level causal understanding of the developmental processes of the sea urchin, and enable powerful predictive capabilities. A crucial aspect of their work is empirically deriving plausible models that explain all the known experimental data, a task that becomes infeasible in practice due to the inherent complexity of the biological systems. We present a generic SMT-based approach to analyse and synthesize such models. We apply our approach to the sea urchin embryo, and successfully improve the state of the art by providing, for the first time, biologists with models that perfectly explain all known data. Furthermore, we show how infeasibility results on certain classes of models can be used to drive new biological insights.

video
Byron Cook, Jasmin Fisher, Benjamin Hall, Samin Ishtiaq, Garvit Juniwal and Nir Piterman
Finding instability in biological models

The stability of biological models is an important test for establishing their soundness and accuracy. Stability in biological systems represents the ability of a robust system to always return to homeostasis. In recent work, modular approaches for proving stability have been found to be swift and scalable. If stability is not proved, however, the currently available techniques apply an exhaustive search through the unstable state space to find loops. This search is frequently prohibitively computationally expensive, limiting its usefulness. Here we present a new modular approach that eliminates the need for an exhaustive search for loops. Using models of biological systems, we show that the technique finds loops significantly faster than brute-force approaches. Furthermore, for a subset of stable systems which are resistant to modular proofs, we observe a speedup of up to 3 orders of magnitude as the exhaustive searches for loops which cause instability are avoided. With our new procedure we are able to prove instability and stability in a number of realistic biological models, including adaptation in bacterial chemotaxis, the lambda phage lysogeny/lysis switch, voltage-gated channel opening and cAMP oscillations in the slime mold Dictyostelium discoideum. This new approach will support the development of new clinically relevant tools for industrial biomedicine.

video
Zhenqi Huang, Chuchu Fan, Alexandru Mereacre, Sayan Mitra and Marta Kwiatkowska
Invariant verification of nonlinear hybrid automata networks of cardiac cells

Verification algorithms for networks of nonlinear hybrid automata (HA) can aid our understanding and our ability to control biological processes such as cardiac arrhythmia, memory formation, and genetic regulation. We present an algorithm for over-approximating reach sets of networks of nonlinear HA which can be used for sound and relatively complete invariant checking. It uses automatically computed input-to-state discrepancy functions for the individual automata modules in the network A to construct a low-dimensional model M. Simulations of both A and M are then used to compute the reach tubes for A. These techniques enable us to handle a challenging verification problem involving a network of cardiac cells where each cell has four continuous variables and 29 locations. Our prototype tool can check bounded-time invariants for networks with 5 cells (20 continuous variables, $29^5$ locations), typically in less than 15 minutes, for reasonable time horizons. From the computed reach tubes we can infer biologically relevant properties of the network from a set of initial states.

video
Henri Hansen, Shang-Wei Lin, Yang Liu, Truong Khanh Nguyen and Jun Sun
Diamonds are a Girl’s Best Friend: Partial Order Reduction for Timed Automata With Abstractions

A major obstacle for using partial order reduction in the context of real-time verification is that the presence of clocks and clock constraints breaks the usual \emph{diamond structure} of otherwise independent transitions. This is especially true when information about the relative values of clocks is preserved in the form of diagonal constraints. However, when diagonal constraints are relaxed by a suitable abstraction, some diamond structure is preserved in the zone graph. In this article, we introduce a variant of the stubborn set method for reducing an abstracted zone graph. Our method works with all abstractions, but is especially suited to situations where one abstract execution can simulate several permutations of the corresponding concrete execution, even though it might not be able to simulate the permutations of the abstract execution. We define independence relations with respect to this “hidden” diamond structure, and define stubborn sets using these relations. We provide a reference implementation for verifying timed language inclusion, to demonstrate the effectiveness of our method.

video

Session 130C: Hybrid Systems

Willem Hagemann
Reachability Analysis of Hybrid Systems Using Symbolic Orthogonal Projections

This paper deals with reachability analysis of hybrid systems with continuous dynamics described by linear differential inclusions and arbitrary linear maps for discrete updates. The invariants, guards, and sets of reachable states are given as convex polyhedra. Our reachability algorithm is based on a novel representation class for convex polyhedra, the symbolic orthogonal projections (sops), on which various geometric operations, including convex hulls, Minkowski sums, linear maps, and intersections, can be performed efficiently and exactly. The capability to represent intersections of convex polyhedra exactly is superior to support function-based approaches like the LGG-algorithm of Le Guernic and Girard.

Accompanied by some simple examples, we address the problem of the monotonic growth of the exact representation and propose a combination of our reachability algorithm with the LGG-algorithm. This results in an efficient method of better accuracy than the LGG-algorithm and its productive implementation in SpaceEx.
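
Assuming the paper's encoding of a sop as P = { x | exists z : Ax + Lz <= b }, intersection and Minkowski sum reduce to assembling block matrices, with no explicit projection ever computed; a minimal sketch in Python/NumPy:

    import numpy as np

    def intersect(A1, L1, b1, A2, L2, b2):
        # stack the constraints; each operand keeps its own block of z-variables
        A = np.vstack([A1, A2])
        L = np.block([[L1, np.zeros((L1.shape[0], L2.shape[1]))],
                      [np.zeros((L2.shape[0], L1.shape[1])), L2]])
        return A, L, np.concatenate([b1, b2])

    def minkowski(A1, L1, b1, A2, L2, b2):
        # x = x1 + x2: make x2 =: y existential and substitute x1 = x - y
        A = np.vstack([A1, np.zeros_like(A2)])
        L = np.block([[-A1, L1, np.zeros((A1.shape[0], L2.shape[1]))],
                      [A2, np.zeros((A2.shape[0], L1.shape[1])), L2]])
        return A, L, np.concatenate([b1, b2])

    # the unit square as a trivial sop (with one dummy z-column)
    A1 = np.array([[1., 0], [-1, 0], [0, 1], [0, -1]])
    b1 = np.array([1., 0, 1, 0])
    L1 = np.zeros((4, 1))
    A, L, b = minkowski(A1, L1, b1, A1, L1, b1)   # represents the square [0, 2]^2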

video
Alessandro Cimatti, Alberto Griggio, Sergio Mover and Stefano Tonetta
Verifying LTL properties of hybrid systems with K-liveness

The verification of liveness properties is an important challenge in the design of real-time and hybrid systems.

In contrast to the verification of safety properties, for which several solutions are available, there are few tools that support liveness properties such as general LTL formulas for real-time systems, even in the case of timed automata.

In the context of finite-state model checking, K-Liveness is a recently proposed algorithm that tackles the problem by proving that an accepting condition can be visited at most K times. K-Liveness has been shown to be very efficient, thanks also to its tight integration with IC3, probably the most effective current technique for safety verification. Unfortunately, the approach is neither complete nor effective (even for simple properties) in the case of infinite-state systems with continuous time.

In this paper, we extend K-Liveness to deal with LTL for hybrid systems. On the theoretical side, we show how to extend the reduction from LTL to the reachability of an accepting condition in order to make the algorithm work with continuous time. In particular, we prove that the new reduction is complete for rectangular hybrid automata, in the sense that the LTL property holds if and only if there exists K such that the accepting condition is visited at most K times. On the practical side, we present an efficient integration of K-Liveness in an SMT-version of IC3, and demonstrate its effectiveness on several benchmarks.
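
The finite-state core of K-Liveness can be sketched as follows (an explicit-state toy stand-in for the IC3-based safety check; the example system is invented): augment states with a counter of visits to the accepting condition and check the safety property "counter <= K" for growing K:

    from collections import deque

    def visits_bounded(init, succ, accepting, K):
        # explicit-state stand-in for the IC3 safety check of "count <= K"
        start = [(s, int(accepting(s))) for s in init]
        if any(c > K for _, c in start):
            return False
        seen, q = set(start), deque(start)
        while q:
            s, c = q.popleft()
            for t in succ(s):
                c2 = c + int(accepting(t))
                if c2 > K:
                    return False            # accepting condition visited > K times
                if (t, c2) not in seen:
                    seen.add((t, c2)); q.append((t, c2))
        return True                         # no run visits it more than K times

    nxt = {0: [1], 1: [2], 2: [3], 3: [3]}          # state 2 is visited exactly once
    print(visits_bounded([0], lambda s: nxt[s], lambda s: s == 2, K=1))   # True
    print(visits_bounded([0], lambda s: nxt[s], lambda s: s == 2, K=0))   # False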

video

Session 138C: Games and Synthesis

Rodica Bozianu, Catalin Dima and Emmanuel Filiot
Safraless Synthesis for Epistemic Temporal Specifications

In this paper we address the synthesis problem for specifications given in linear temporal single-agent epistemic logic, KLTL (or KL_1), over single-agent systems having imperfect information of the system state. [Vardi and Van der Meyden, CONCUR’98] have shown that this problem is 2ExpTime-complete. However, their procedure relies on complex automata constructions that are notoriously resistant to efficient implementations, as they use Safra-like determinization.

We propose a “Safraless” synthesis procedure for a large fragment of KLTL. The construction first transforms the synthesis problem into the problem of checking emptiness of universal co-Büchi tree automata using an information-set construction. Then we build a safety game that can be solved using an antichain-based symbolic technique exploiting the structure of the underlying automata. The technique is implemented and applied to a couple of case studies.

video
Tomas Brazdil, David Klaska, Antonin Kucera and Petr Novotný
Minimizing Running Costs in Consumption Systems

A standard approach to optimizing long-run running costs of discrete systems is based on minimizing the mean-payoff, i.e., the long-run average amount of resources (“energy”) per transition. However, this approach inherently assumes that the energy source has an unbounded capacity, which is not always realistic. For example, an autonomous robotic device has a battery of finite capacity that has to be recharged periodically, and the total amount of energy consumed between two successive charging cycles is bounded by the capacity. Hence, a controller minimizing the mean-payoff must obey this restriction. In this paper we study the controller synthesis problem for consumption systems with a finite battery capacity, where the task of the controller is to minimize the mean-payoff while preserving the functionality of the system encoded by a given linear-time property. We show that an optimal controller always exists, and it may either need only finite memory or require infinite memory (it is decidable in polynomial time which of the two cases holds). Further, we show how to compute an effective description of an optimal controller in polynomial time. Finally, we consider the limit values achievable by larger and larger battery capacity, show that these values are computable in polynomial time, and we also analyze the corresponding rate of convergence. To the best of our knowledge, these are the first results about optimizing the long-run running costs in systems with bounded energy stores.

video
Krishnendu Chatterjee, Martin Chmelík and Przemyslaw Daca
CEGAR for Qualitative Analysis of Probabilistic Systems

We consider Markov decision processes (MDPs) which are a standard model for probabilistic systems. We focus on qualitative properties for MDPs that can express that desired behaviors of the system arise almost-surely (with probability 1) or with positive probability. We introduce a new simulation relation to capture the refinement relation of MDPs with respect to qualitative properties, and present discrete graph theoretic algorithms with quadratic complexity to compute the simulation relation. We present an automated technique for assume-guarantee style reasoning for compositional analysis of MDPs with qualitative properties by giving a counter-example guided abstraction-refinement approach to compute our new simulation relation. We have implemented our algorithms and show that the compositional analysis leads to significant improvements.

video
Thomas Dillig, Isil Dillig and Swarat Chaudhuri
Optimal Guard Synthesis for Memory Safety

This paper presents a new synthesis-based approach for helping programmers write low-level memory-safe code. Specifically, given a partial program with missing guards, our algorithm synthesizes concrete predicates to plug in for the missing guards such that all buffer accesses in the program are guaranteed to be memory safe. Furthermore, our algorithm gives a guarantee of optimality in that the synthesized guards are provably the simplest (that is, with the fewest number of variables) and weakest among guards that guarantee memory safety. Our approach is fully automatic and does not require any hints, such as templates, from the user. We have implemented our proposed algorithm in a prototype synthesis tool for C programs. Our experimental evaluation on a set of real-world C programs shows that the proposed approach is able to successfully synthesize guards that closely match hand-written programmer code.

video
Jade Alglave, Daniel Kroening, Vincent Nimal and Daniel Poetzl
Don’t sit on the fence: A static analysis approach to automatic fence insertion

Modern architectures rely on memory fences to prevent undesired weakenings of memory consistency. As the fences’ semantics may be subtle, the automation of their placement is highly desirable. But precise methods restoring consistency do not scale to deployed systems code. We choose to trade some precision for genuine scalability: we present a novel technique suitable for large code bases. We implement this method in our new musketeer tool, and detail experiments on more than 350 executables of packages found in a Debian Linux distribution, e.g. memcached (about 10000 LoC).

video
Petr Čermák, Alessio Lomuscio, Fabio Mogavero and Aniello Murano
MCMAS-SLK: A Model Checker for the Verification of Strategy Logic Specifications

We introduce MCMAS-SLK, a BDD-based model checker for the verification of systems against specifications expressed in a novel, epistemic variant of strategy logic. We give the syntax and semantics of the specification language and introduce a labelling algorithm for epistemic and strategy logic modalities. We provide details of the checker, which can also be used for synthesising the agents’ strategies so that a specification is satisfied by the system. We evaluate the efficiency of the implementation by discussing the results obtained for a scheduling system and the dining cryptographers protocol.

video
Nina Narodytska, Alexander Legg, Fahiem Bacchus, Leonid Ryzhyk and Adam Walker
Solving Games without Controllable Predecessor

Two-player games are a useful formalism for the synthesis of reactive systems. The traditional approach to solving such games iteratively computes the set of winning states for one of the players. This requires keeping track of all discovered winning states and can lead to space explosion even when using efficient symbolic representations. We propose a new method for solving reachability games. Our method works by exploring a subset of the possible concrete runs of the game and proving that these runs can be generalised into a winning strategy on behalf of one of the players. We use counterexample-guided backtracking search to identify a subset of runs that are sufficient to consider to solve the game. We evaluate our algorithm on several families of benchmarks derived from real-world device driver synthesis problems.

video
Chih-Hong Cheng, Chung-Hao Huang, Harald Ruess and Stefan Stattelmann
G4LTL-ST: Automatic Generation of PLC Programs

G4LTL-ST automatically generates IEC 61131-3-compatible control code for industrial field-level devices from behavioral, timed specifications of input-output signals. Behavioral specifications are expressed in a language extending linear temporal logic (LTL) with (1) arithmetic constraints for describing data-intensive control behavior and (2) timers based on the industry standard IEC 61131-3. The code generated by G4LTL-ST is in the format of Structured Text, which can readily be compiled into executable code for a large number of industrial field-level devices. The synthesis algorithm of G4LTL-ST implements a pseudo-Boolean abstraction of data constraints and a compilation of timer specifications into LTL, together with a CEGAR-like abstraction-refinement loop. Since temporal logic specifications are notoriously difficult to use in practice, G4LTL-ST includes a number of heuristics for suggesting additional environment assumptions that make the control problem at hand realizable.

video

Session 140C: Concurrency

Mohsen Lesani, Todd Millstein and Jens Palsberg
Automatic Atomicity Verification for Clients of Concurrent Data Structures

Mainstream programming languages offer libraries of concurrent data structures. Each method call on a concurrent data structure appears to take effect atomically. However, clients of such data structures often require stronger guarantees. For instance, a histogram class that is implemented using a concurrent map may require a method to atomically increment a histogram bar, but its implementation requires multiple calls to the map and hence is not atomic by default. Indeed, prior work has shown that atomicity errors in clients of concurrent data structures occur frequently in production code. We present an automatic and modular verification technique for clients of concurrent data structures. We define a novel sufficient condition for atomicity of clients called condensability. We present a tool called Snowflake that generates proof obligations for condensability of Java client methods and discharges them using an off-the-shelf SMT solver. We applied Snowflake to an existing suite of client methods from several open-source applications. It successfully verified 76.9% of the atomic methods without any change and verified the rest of them with small code refactoring. It rejected all non-atomic methods.
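
The histogram example from the abstract, sketched in Python (illustrative only; the paper's setting is Java, and the replace method is modeled on java.util.concurrent's ConcurrentMap): each map operation is atomic on its own, and the retry loop makes increment atomic as a whole, which is the kind of client such a verifier certifies:

    import threading

    class Histogram:
        def __init__(self):
            self.bars, self.lock = {}, threading.Lock()

        def get(self, key):                     # atomic map read
            with self.lock:
                return self.bars.get(key, 0)

        def replace(self, key, old, new):       # atomic conditional update
            with self.lock:
                if self.bars.get(key, 0) != old:
                    return False
                self.bars[key] = new
                return True

        def increment(self, key):               # client method: atomic as a whole,
            while True:                         # because the retry loop restarts
                old = self.get(key)             # whenever another thread interferes
                if self.replace(key, old, old + 1):
                    return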

video
Pavol Cerny, Thomas Henzinger, Arjun Radhakrishna, Leonid Ryzhyk and Thorsten Tarrach
Regression-free Synthesis for Concurrency

While fixing concurrency bugs, program repair algorithms may introduce new concurrency bugs. We present a new repair algorithm that avoids such regressions. The solution space is given by the program transformations we consider in the repair process. These include reordering of instructions within a thread, inserting atomic sections, and inserting wait-notify mechanisms. The new algorithm, PACES, is an extension of the CEGIS loop. It learns constraints on the space of candidate solutions, from both positive examples (error-free traces) and counterexamples (error traces). From counterexamples, the algorithm learns a constraint necessary to remove them. From positive examples, it learns constraints that are necessary in order to prevent the repair from turning the trace into a counterexample. We implemented the algorithm and evaluated it on simplified Linux device drivers with known bugs. The tool is able to fix the bugs while avoiding regressions.

video
Omar Inverso, Ermenegildo Tomasco, Bernd Fischer, Salvatore La Torre and Gennaro Parlato
Bounded Model Checking of Multi-Threaded C Programs via Lazy Sequentialization

Bounded model checking (BMC) has successfully been used for many practical program verification problems, but concurrency still poses a challenge. Here we describe a new approach to BMC of sequentially consistent C programs using POSIX threads. Our approach first translates a multi-threaded C program into a non-deterministic sequential C program that preserves reachability for all round-robin schedules with a given bound on the number of rounds. It then re-uses existing high-performance BMC tools as back-ends for the sequential verification problem. Our translation is carefully designed to introduce very small memory overheads and very few sources of non-determinism, so that it produces tight SAT/SMT formulae, and is thus very effective in practice: our implementation won the concurrency category of SV-COMP14. It solved all verification tasks successfully and was 30x faster than the best tool with native concurrency handling.

video
Javier Esparza, Ruslán Ledesma-Garza, Rupak Majumdar, Philipp Meyer and Filip Niksic
An SMT-based Approach to Coverability Analysis

Model checkers based on Petri net coverability have been used successfully in recent years to verify safety properties of concurrent shared-memory or asynchronous message-passing software. We revisit a constraint approach to the problem based on classical Petri net analysis techniques. We show how to utilize an SMT solver to implement the constraint approach, and additionally, to generate a minimized inductive invariant from a safety proof. We empirically evaluate our procedure on a large set of existing Petri net benchmarks. Even though our technique is incomplete, it can quickly discharge all safe instances. Additionally, the inductive invariants computed are usually orders of magnitude smaller than those produced by existing solvers.
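
One classical constraint the approach builds on is the Petri net state (marking) equation m = m0 + C·x; if it has no solution with x >= 0 and m covering the target, the target is uncoverable. A toy instance in Z3's Python API (the paper's procedure additionally refines with, e.g., trap constraints):

    from z3 import Ints, Solver, unsat

    x1, x2 = Ints('x1 x2')          # how often transitions t1, t2 fire
    m1, m2 = Ints('m1 m2')          # final marking of places p1, p2
    s = Solver()
    s.add(x1 >= 0, x2 >= 0, m1 >= 0, m2 >= 0)
    # net: t1 moves the single token from p1 to p2, t2 moves it back
    s.add(m1 == 1 - x1 + x2,        # m = m0 + C*x with m0 = (1, 0)
          m2 == 0 + x1 - x2)
    s.add(m1 >= 1, m2 >= 1)         # target: cover the marking (1, 1)
    print('uncoverable' if s.check() == unsat else 'inconclusive')   # uncoverable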

video
Alejandro Sanchez and Cesar Sanchez
LEAP: A Tool for the Parametrized Verification of Concurrent Datatypes

This paper presents Leap, a tool for the verification of concurrent datatypes and of parametrized systems composed of an unbounded number of threads that manipulate infinite data.

Concurrent datatypes are accessed by numerous threads which manipulate shared data in the heap using liberal patterns of synchronization like lock-freedom and fine-grain locking. Formally verifying concurrent datatypes is a challenging task because of the combination of complex dynamic memory manipulation and rather unstructured concurrent execution. The difficulty is increased because one must also deal with concurrent systems composed of the execution of an unbounded number of threads.

Leap receives as input a concurrent program description and a specification. Leap automatically generates a finite collection of verification conditions and discharges them to specialized decision procedures built on top of state-of-the-art SMT solvers. The validity of all verification conditions implies that the program, executed by any number of threads, satisfies the specification. Leap not only includes decision procedures for integers and booleans, but also implements specific theories for heap memory layouts such as linked lists and skiplists.

video

Session 149C: SAT/SMT/QBF

Margus Veanes, Nikolaj Bjorner, Lev Nachmanson and Sergey Bereg
Monadic Decomposition

Monadic predicates play a prominent role in many decidable cases, including decision procedures for symbolic automata. We are here interested in discovering whether a formula can be rewritten into a Boolean combination of monadic predicates. Our setting is quantifier free formulas over a decidable background theory, such as linear arithmetic, and we here develop a semi-decision procedure for extracting a monadic decomposition of a formula when it exists.
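
A concrete instance (a sketch using Z3's Python API; the example formula is ours): over the reals, x·y = 0 is equivalent to the Boolean combination (x = 0) or (y = 0) of monadic predicates, which a solver can confirm:

    from z3 import Reals, Solver, Or, unsat

    x, y = Reals('x y')
    phi = x * y == 0                    # binary as written
    dec = Or(x == 0, y == 0)            # Boolean combination of monadic predicates
    s = Solver()
    s.add(phi != dec)                   # any model would separate the two formulas
    print('equivalent' if s.check() == unsat else 'not equivalent')   # equivalent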

video
Tianyi Liang, Andrew Reynolds, Cesare Tinelli, Clark Barrett and Morgan Deters
A DPLL(T) Theory Solver for a Theory of Strings and Regular Expressions

An increasing number of applications in verification and security rely on, or could benefit from, automatic solvers that can check the satisfiability of constraints over a rich set of data types that includes character strings. Unfortunately, most string solvers today are standalone tools that can reason only about (some fragment of) the theory of strings and regular expressions, sometimes with strong restrictions on the expressiveness of their input language such as, for instance, length bounds on all string variables. These solvers are based on reductions to satisfiability problems over other data types, such as bit vectors, or to decision problems over automata. In this paper, we present a set of algebraic techniques for solving constraints over the theory of (unbounded) strings natively, without reduction to other problems. Our techniques can be used to construct string theory solvers that can be integrated into general, multi-theory SMT solvers based on the DPLL(T) architecture. We have implemented these techniques in our SMT solver CVC4 to expand its set of built-in data types to strings with concatenation, length, and membership in regular languages. This implementation makes CVC4 the first solver able to accept a rich set of mixed constraints over strings, integers, reals, arrays and algebraic data types. Our initial experimental results show that, in addition, over pure string problems CVC4 is highly competitive in a number of ways with specialized string solvers.
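
The flavour of mixed constraints such a solver accepts (a sketch shown with Z3's Python string API rather than CVC4's own interface):

    from z3 import String, Int, Solver, Concat, Length, StringVal, sat

    s1, s2 = String('s1'), String('s2')
    n = Int('n')
    slv = Solver()
    slv.add(Concat(s1, s2) == StringVal('abcde'),   # string constraint
            Length(s1) == n, n > 1, n < 4)          # mixed with integer arithmetic
    if slv.check() == sat:
        print(slv.model())              # e.g. s1 = "ab", s2 = "cde", n = 2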

video
Alexander Nadel
Bit-Vector Rewriting with Automatic Rule Generation

Rewriting is essential for efficient bit-vector SMT solving. The rewriting algorithm commonly used by modern SMT solvers iteratively applies a set of ad hoc rewriting rules hard-coded into the solver to simplify the given formula at the preprocessing stage. This paper proposes a new approach to rewriting. In our approach, the solver starts each invocation with an empty set of rewriting rules. The set is extended by applying at run-time an automatic SAT-based algorithm for new rewriting rule generation. The set of rules differs from instance to instance. We implemented our approach in the framework of an algorithm for equivalence and constant propagation, called 0-saturation, which we extended from purely propositional reasoning to bit-vector reasoning. Our approach results in a substantial performance improvement in a state-of-the-art SMT solver over various SMT-LIB families.
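
The validity query underlying SAT-based rule generation can be sketched as follows (illustrative, using Z3's Python API; the candidate rule is an invented example): a rewrite lhs --> rhs is sound iff lhs != rhs is unsatisfiable over all bit patterns:

    from z3 import BitVec, Solver, unsat

    x, y = BitVec('x', 8), BitVec('y', 8)
    lhs, rhs = (x ^ y) ^ y, x           # candidate rule: (x ^ y) ^ y --> x
    s = Solver()
    s.add(lhs != rhs)                   # a model here would refute the rule
    print('valid rewrite' if s.check() == unsat else 'refuted')   # valid rewrite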

video
Liana Hadarean, Clark Barrett, Dejan Jovanović, Cesare Tinelli and Kshitij Bansal
A Tale of Two Solvers: Eager and Lazy Approaches to Bit-vectors

The standard method for deciding bit-vector constraints is via eager reduction to propositional logic. This is usually done after first applying powerful rewrite techniques. While often efficient in practice, this method does not scale on problems for which top-level rewrites cannot reduce the problem size sufficiently. A lazy solver can target such problems by doing many satisfiability checks, each of which only reasons about a small subset of the problem. In addition, the lazy approach enables a wide range of optimization techniques that are not available to the eager approach. In this paper we describe the architecture and features of our lazy solver. We provide a comparative analysis of the eager and lazy approaches, and show how they are complementary in terms of the types of problems they can efficiently solve. For this reason, we propose a portfolio approach that runs a lazy and eager solver in parallel. Our empirical evaluation shows that the lazy solver can solve problems none of the eager solvers can and that the portfolio solver outperforms other solvers both in terms of total number of problems solved and the time taken to solve them.

video
Andrei Voronkov
AVATAR: The New Architecture for First-Order Theorem Provers

This paper describes a new architecture for first-order resolution and superposition theorem provers called AVATAR (Advanced Vampire Architecture for Theories and Resolution). Its original motivation comes from the problem well-studied in the past — dealing with problems having clauses containing propositional variables and other clauses that can be split into components with disjoint sets of variables. Such clauses are common for problems coming from applications, for example in verification and program analysis, where many ground literals occur in the problems and even more are generated during the proof-search.

This problem was previously studied by adding various versions of splitting. The addition of splitting resulted in considerable improvements in the performance of theorem provers. However, even with various versions of splitting implemented, the performance of superposition theorem provers is nowhere near that of SMT solvers on purely ground problems or SAT solvers on propositional problems.

This paper describes a new, revolutionary architecture for superposition theorem provers, where a superposition theorem prover is tightly integrated with a SAT or an SMT solver. Its implementation in our theorem prover Vampire resulted in drastic improvements over all previous implementations of splitting. Over four hundred TPTP problems previously unsolvable by any modern prover, including Vampire itself, have been proved, most of them with short runtimes. Many problems solved with one of the 481 variants of splitting previously implemented in Vampire can also be solved with AVATAR.

We also believe that AVATAR is ideally suited for applications where reasoning with both quantifiers and theories is required.

video
Ruzica Piskac, Thomas Wies and Damien Zufferey
Automating Separation Logic with Trees and Data

Separation logic (SL) is a widely used formalism for verifying heap manipulating programs. Existing SL solvers focus on decidable fragments for list-like structures. More complex data structures such as trees are typically unsupported in implementations, or handled by incomplete heuristics. While complete decision procedures for reasoning about trees have been proposed, these procedures suffer from high complexity, or make global assumptions about the heap that contradict the separation logic philosophy of local reasoning. In this paper, we present a fragment of classical first-order logic for local reasoning about tree-like data structures. The logic is decidable in NP and the decision procedure allows for combinations with other decidable first-order theories for reasoning about data. Such extensions are essential for proving functional correctness properties. We have implemented our decision procedure and, building on earlier work on translating SL proof obligations into classical logic, integrated it into an SL-based verification tool. We successfully used the tool to verify functional correctness of tree-based data structure implementations.

video
Ashish Tiwari and Patrick Lincoln
A Nonlinear Real Arithmetic Fragment

We present a new procedure for testing satisfiability (over the reals) of a conjunction of polynomial equations. There are three possible return values for our procedure: it either returns a model for the input formula, or it says that the input is unsatisfiable, or it fails because the applicability condition for the procedure, called the eigen-condition, is violated. For the class of constraints where the eigen-condition holds, our procedure is a decision procedure. The design of the procedure is motivated by the problems arising from exists-forall problems — generated by template-based verification and synthesis methods — that are translated into exists problems using the Positivstellensatz. Our procedure can be seen as a generalized SAT solver: in particular, the eigen-condition holds on nonlinear real arithmetic encodings of SAT problems. We experimentally evaluate the procedure and describe transformations that can potentially turn a problem into another one where the eigen-condition holds.

video
Bruno Dutertre
Yices 2.2

Yices is an efficient SMT solver developed by SRI International. The first version of Yices was released in 2006 and has been continuously updated since then. In 2009, we started a complete re-implementation of the Yices solver intended to achieve greater performance, modularity, and flexibility. We describe the latest release of Yices, namely Yices 2.2. We present the tool’s architecture and the algorithms it implements, as well as the latest developments, such as support for the SMT2 notation and various performance improvements. Like its predecessors, Yices 2.2 is distributed free of charge for non-commercial use.

video

Session 151C: Bounds and Termination

Unfortunately, only the first talk was properly recorded (technical problems).
Moritz Sinn, Florian Zuleger and Helmut Veith
A Simple and Scalable Static Analysis for Bound Analysis and Amortized Complexity Analysis

We present the first scalable bound analysis that achieves amortized complexity analysis. In contrast to earlier work, our bound analysis is not based on general-purpose reasoners such as abstract interpreters, software model checkers or computer algebra tools. Rather, we derive bounds directly from abstract program models, which we obtain from programs by comparatively simple invariant generation and symbolic execution techniques. As a result, we obtain an analysis that is more predictable and more scalable than earlier approaches. Our experiments demonstrate that our analysis is fast and at the same time able to compute bounds for challenging loops in a large real-world benchmark.

Technically, our approach is based on lossy vector addition systems with states (VASS). Our bound analysis first computes a lexicographic ranking function that proves the termination of a VASS, and then derives a bound from this ranking function. Our methodology achieves amortized analysis based on a new insight into how lexicographic ranking functions can be used for bound analysis.
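
The per-transition proof obligations behind a lexicographic ranking function are easy to illustrate. The following sketch, a hypothetical two-counter system rather than anything from the paper, uses Z3 to discharge them; note how steps of the second counter are paid for in amortized fashion by deposits made by the first:

    # Hypothetical two-counter system (not the paper's tool or benchmarks):
    #   t1: x > 0  ->  x' = x - 1, y' = y + 1   (pays 1 from x, deposits into y)
    #   t2: y > 0  ->  x' = x,     y' = y - 1   (amortized against t1's deposits)
    from z3 import Ints, Implies, And, Not, Solver, unsat

    x, y, xp, yp = Ints('x y xp yp')

    def valid(claim):
        s = Solver()
        s.add(Not(claim))
        return s.check() == unsat

    t1 = And(x > 0, xp == x - 1, yp == y + 1)
    t2 = And(y > 0, xp == x, yp == y - 1)

    # t1 strictly decreases the first component of (x, y); t2 preserves it
    # and strictly decreases the second. Hence (x, y) is a lexicographic
    # ranking function, and a step bound of 2*x0 + y0 can be read off:
    # t1 fires at most x0 times, so t2 fires at most y0 + x0 times.
    print(valid(Implies(t1, xp < x)))                  # True
    print(valid(Implies(t2, And(xp == x, yp < y))))    # True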

video
Ravichandhran Madhavan and Viktor Kuncak
Symbolic Resource Bound Inference for Functional Programs

We present an approach for inferring symbolic resource bounds for purely functional programs consisting of recursive functions, algebraic data types and nonlinear arithmetic operations. In our approach, the developer specifies the desired shape of the bound as a program expression containing numerical holes, which we refer to as templates. For example, time ≤ a ∗ size(tree) + b, where a and b are unknowns, is a template that specifies a bound on the execution time. We present a scalable algorithm for computing strongest bounds for sequential and parallel execution times by solving for the unknowns in the template. We empirically evaluate our approach on several benchmarks that manipulate complex data structures such as binomial heap, leftist heap, red-black tree and AVL tree. Our implementation is able to infer hard, nonlinear symbolic time bounds for our benchmarks that are beyond the capability of existing approaches.
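
The template mechanism can be sketched as constraint solving over the unknowns. The (size, time) pairs below are invented sample measurements, and Z3's optimizer stands in for the paper's algorithm, which infers sound strongest bounds by verification rather than from test data:

    # Invented (size, time) samples; Z3's optimizer stands in for the
    # paper's sound inference algorithm.
    from z3 import Ints, Optimize, sat

    a, b = Ints('a b')                             # the template unknowns
    samples = [(1, 3), (2, 5), (4, 9), (8, 17)]    # hypothetical measurements

    opt = Optimize()
    opt.add(a >= 0, b >= 0)
    for size, time in samples:
        opt.add(a * size + b >= time)              # bound must cover each sample
    opt.minimize(a * 8 + b)    # crude proxy for "strongest": tightest at size 8

    if opt.check() == sat:
        print(opt.model())     # one optimum: a = 2, b = 1, i.e. time <= 2*size + 1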

Daniel Larraz, Kaustubh Nimkar, Albert Oliveras, Enric Rodríguez-Carbonell and Albert Rubio
Proving Non-termination Using Max-SMT

We show how Max-SMT-based invariant generation can be exploited for proving non-termination of programs. The construction of the proof of non-termination is guided by the generation of quasi-invariants — properties such that once they hold at a location during execution, they continue to hold at that location from then onwards. The check that quasi-invariants can indeed be reached is then performed separately. Our technique considers strongly connected subgraphs of a program’s control flow graph for analysis and thus produces more generic witnesses of non-termination than existing methods. Moreover, it can handle programs with unbounded non-determinism and is more likely to converge than previous approaches.
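
A hypothetical one-line example shows what the quasi-invariant checks amount to (this sketch uses Z3 directly rather than the paper's Max-SMT-based generation):

    # Hypothetical loop:  while (x != 0) x = x + 1;
    # Quasi-invariant at the loop head: x >= 1. Once it holds it keeps
    # holding, and it forces the guard to stay true, so the loop diverges.
    from z3 import Ints, Implies, And, Not, Solver, unsat

    x, xp = Ints('x xp')

    def valid(claim):
        s = Solver()
        s.add(Not(claim))
        return s.check() == unsat

    guard = x != 0
    step = xp == x + 1
    qinv = x >= 1

    print(valid(Implies(And(qinv, guard, step), xp >= 1)))  # preserved: True
    print(valid(Implies(qinv, guard)))                      # guard stays on: True
    # Whether some execution actually reaches a state with x >= 1 at the
    # loop head is the separate reachability check mentioned above.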

Matthias Heizmann, Jochen Hoenicke and Andreas Podelski
Termination Analysis by Learning Terminating Programs

We present a novel approach to termination analysis. In a first step, the analysis treats the program as a black box from which it obtains only a finite set of sample traces. Each sample trace is infinite but can be represented by a finite lasso. From a termination proof for the lasso, the analysis “learns” a program that is terminating by construction. In a second step, the analysis checks that the set of sample traces is representative, in a sense that we make formal. An experimental evaluation indicates that the approach is a potentially useful addition to the portfolio of existing approaches to termination analysis.
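
The first step, proving a single sample lasso terminating, can be sketched as follows (a hypothetical lasso and a plain Z3 check; the learning step that generalizes such proofs into a program is not shown):

    # Hypothetical lasso:  stem: d := 1;   loop: while (x > 0) x := x - d
    # The stem establishes d >= 1, which the loop preserves; under it the
    # ranking function f(x) = x decreases and is bounded, so the lasso
    # has no infinite execution.
    from z3 import Ints, Implies, And, Not, Solver, unsat

    x, d, xp, dp = Ints('x d xp dp')

    def valid(claim):
        s = Solver()
        s.add(Not(claim))
        return s.check() == unsat

    inv = d >= 1                                    # established by the stem
    loop = And(x > 0, xp == x - d, dp == d)

    print(valid(Implies(And(inv, loop), dp >= 1)))  # invariant preserved: True
    print(valid(Implies(And(inv, loop),
                        And(xp < x, x >= 0))))      # ranking decreases, bounded: True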

Andrey Kupriyanov and Bernd Finkbeiner
Causal Termination of Multi-threaded Programs

We present a new model checking procedure for the termination analysis of multi-threaded programs. Current termination provers scale badly in the number of threads; our new approach easily handles 100 threads on multi-threaded benchmarks like Producer-Consumer. In our procedure, we characterize the existence of non-terminating executions as Mazurkiewicz-style concurrent traces and apply causality-based transformation rules to refine them until a contradiction can be shown. The termination proof is organized into a tableau, where the case splits represent a novel type of modular reasoning according to different causal explanations of a hypothetical error. We report on experimental results obtained with a tool implementation of the new procedure, called Arctor, on previously intractable multi-threaded benchmarks.

Session 154: Abstraction

Johannes Birgmeier, Aaron R. Bradley and Georg Weissenbacher
Counterexample to Induction-Guided Abstraction-Refinement (CTIGAR)

Typical CEGAR-based verification methods refine the abstraction domain based on full traces. The finite-state model checking algorithm IC3 introduced the concept of discovering, generalizing from, and thereby eliminating individual state (or cube) counterexamples to induction (CTIs). This focus on individual states (or cubes) suggests a simpler abstraction-refinement scheme in which refinements are performed relative to single steps of the transition relation, thus reducing the expense of a refinement and eliminating the need for full traces. Interestingly, this change in refinement focus leads to a natural spectrum of refinement options, including when to refine and which type of concrete single-step query to refine relative to. Experiments validate that CTI-focused abstraction refinement, or CTIGAR, is competitive with existing CEGAR-based tools.
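
The single-step queries in question are the standard IC3 relative-induction checks. A minimal sketch on a hypothetical one-variable system (not the paper's implementation):

    # Hypothetical system: x' = x + 1 with property x >= 0 and frame
    # F = (x >= 0); s is a CTI cube violating the property.
    from z3 import Ints, Not, Solver

    x, xp = Ints('x xp')

    frame = x >= 0          # current frame F_i
    trans = xp == x + 1     # transition relation T
    bad = x < 0             # CTI cube s

    # Relative induction query: is F_i /\ not(s) /\ T /\ s' unsatisfiable?
    q = Solver()
    q.add(frame, Not(bad), trans, xp < 0)
    print(q.check())        # unsat: not(s) is inductive relative to F_i
    # In CTIGAR, a spurious *abstract* failure of this one-step query is
    # repaired by adding a predicate, with no abstract trace required.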

video
Suho Lee and Karem A. Sakallah
Unbounded Scalable Verification Based on Approximate Property-Directed Reachability and Datapath Abstraction

This paper introduces the Averroes formal verification system, which exploits the power of two complementary approaches: counterexample-guided abstraction refinement (CEGAR) of the design’s datapath and the recently introduced IC3 and PDR approximate reachability algorithms. Averroes is particularly suited to the class of hardware designs consisting of wide datapaths and complex control logic, a class that covers a wide spectrum of design styles ranging from general-purpose microprocessors to special-purpose embedded controllers and accelerators. In most of these designs, the number of datapath state variables is orders of magnitude larger than the number of control state variables. Thus, for purposes of verifying the correctness of the control logic (where most design errors typically reside), datapath abstraction is particularly effective at pruning away most of a design’s state space, leaving a much reduced “control space” that can be efficiently explored by the IC3 and PDR methods. Preliminary experimental results on a suite of industrial benchmarks show that Averroes significantly outperforms verification at the bit level. To our knowledge, this is the first empirical demonstration of the possibility of automatic scalable unbounded sequential verification.
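
The principle of datapath abstraction is easy to demonstrate: replace a wide operation by an uninterpreted function, and any control-level property proved in the abstraction carries over to every concrete datapath. A small Z3 sketch (illustrative only, not the Averroes tool):

    # Illustrative only: two control-equivalent designs whose datapath
    # operation is abstracted by an uninterpreted function op.
    from z3 import BitVecSort, BitVecs, Function, Bool, If, Not, Solver

    BV = BitVecSort(32)
    op = Function('op', BV, BV, BV)     # stand-in for a wide ALU operation
    a, b, out0 = BitVecs('a b out0', 32)
    en = Bool('en')

    spec = If(en, op(a, b), out0)       # reference control logic
    impl = If(Not(en), out0, op(a, b))  # restructured control logic

    s = Solver()
    s.add(spec != impl)
    print(s.check())    # unsat: equal for EVERY interpretation of op, so the
                        # result also holds for the concrete 32-bit datapath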

video
Arlen Cox, Bor-Yuh Evan Chang and Sriram Sankaranarayanan
QUICr: A Reusable Library for Parametric Abstraction of Sets and Numbers

This paper introduces QUICr, a library that implements abstract domains for simultaneous reasoning about numbers and sets of numbers. QUICr is an abstract domain combinator that lifts any domain for numbers into an efficient domain for numbers and sets of numbers. As a library, it is useful for inferring relational data invariants in programs that manipulate data structures. As a testbed, it allows easy extension of widening and join heuristics, enabling adaptations to new and varied applications. In this paper we present the architecture of the library, guidelines on how to select heuristics, and an example instantiation of the library using the NewPolka polyhedra library to verify set-manipulating programs.
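
The lifting idea can be sketched in a few lines of Python (a toy stand-in, not QUICr's actual API): abstract a set of numbers by a value of the underlying numeric domain, here an interval enclosing all elements, and let the lifted operations delegate to that domain:

    # Toy stand-in for the lifting (not QUICr's API): a set of numbers is
    # abstracted by an interval enclosing all its elements; the lifted join
    # delegates to the underlying numeric domain's join.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Interval:                     # the underlying numeric domain
        lo: float
        hi: float
        def join(self, other):
            return Interval(min(self.lo, other.lo), max(self.hi, other.hi))

    @dataclass(frozen=True)
    class SetAbs:                       # lifted domain for sets of numbers
        elems: Interval                 # every element lies in this interval
        def join(self, other):
            return SetAbs(self.elems.join(other.elems))
        @staticmethod
        def alpha(concrete):            # abstraction of a nonempty concrete set
            return SetAbs(Interval(min(concrete), max(concrete)))

    print(SetAbs.alpha({1, 2, 3}).join(SetAbs.alpha({10})))
    # SetAbs(elems=Interval(lo=1, hi=10))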

video