Artifact Evaluation

Artifact evaluation is a service provided by the community to help authors supply more substantial supplements to their papers, so that future researchers can more effectively build on and compare with previous work. The Artifact Evaluation Committee (AEC) will read the paper and explore the artifact to give the authors third-party feedback about how well the artifact supports the paper and how easy the artifact will be for future researchers to use.

We will award three badges, in accordance with ACM’s artifact guidelines:

  1. Artifact functional (documented, consistent, complete, exercisable)
  2. Artifact available (artifacts have been placed on a publicly accessible archival repository)
  3. Artifact reusable (artifacts are of a quality that significantly exceeds minimal functionality; see below)

Authors of accepted regular papers: you will be invited (but not required) to submit the relevant artifact for evaluation by the AEC.

Authors of accepted tool papers: you will be required to submit your artifact to the AEC for evaluation at the functional level. Acceptance of tool papers will be conditional on successful artifact evaluation (unlike at CAV 2020, authors of tool papers will not be required to submit an artifact along with the initial paper submission).

Members of the artifact evaluation committee and the program committee are asked to use submitted artifacts for the sole purpose of evaluating the contribution associated with the artifact.

Note: artifact evaluation will be single-blind, but paper reviews are double-blind: do not add identifying information to your paper submission.

Submission Guidelines

Submission site: https://easychair.org/conferences/?conf=cav2021.

Please include the following elements:

  • Accepted paper (due April 24th)
  • Artifact abstract (due April 24th, using the Abstract text field on EasyChair) including:
    • Original paper abstract
    • Short description of artifact to facilitate bidding
      For example: “SMT-based program verifier for loop-free code”, “machine learning for invariant inference”, “formalization of XYZ theorem in Isabelle ITP”, “novel language for SW/HW codesign”, …
    • Description of any unusual requirements
For example: cloud-computing resources, hardware, …
    • Which badge or badges you are applying for (functional, available, reusable)
  • Link to your artifact on a personal website or repository (due April 28th)

Packaging Guidelines

To submit an artifact, please prepare a virtual machine (VM) or Docker image of your artifact and keep it accessible through an HTTP link throughout the evaluation process.

As the basis of the VM image, please choose a commonly used OS version that has been tested with the virtual machine software and that evaluators are likely to be familiar with. We encourage you to use VirtualBox (https://www.virtualbox.org) and to save the VM image as an Open Virtual Appliance (OVA) file.
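For instance, with VirtualBox this can be done on the command line; a minimal sketch, where the VM name "cav21-artifact" is a hypothetical placeholder:

  # Export the prepared VM as a single OVA file.
  VBoxManage export "cav21-artifact" --output cav21-artifact.ova

  # Sanity check: re-import the OVA on a clean machine to make sure it boots.
  VBoxManage import cav21-artifact.ova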

Please include the prepared link in the appropriate field of the artifact submission form, along with a brief description of how to boot the VM or container (including system requirements and any passwords). Your VM or container should also include a separate README describing how to evaluate your artifact.

Submitting your artifact

  1. Include your tool in the virtual machine or Docker image: create, in the home directory, a folder with the following items (see the example layout after this list):
    1. Your accepted paper
    2. A detailed README file describing how to run your tool
    3. A directory containing the artifact (benchmarks plus the tool or proof scripts)
  2. Submit to EasyChair. Please use the same title as for the CAV submission.
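For illustration, the resulting home directory might be organized as follows; all file and folder names here are hypothetical placeholders:

  artifact/
    paper.pdf        (the accepted paper)
    README           (step-by-step instructions for the evaluators)
    tool/            (the tool, its sources and/or binaries)
    benchmarks/      (the inputs used in the paper's experiments)
    run_all.sh       (entry-point script that reproduces the results)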

If you are not in a position to prepare the artifact as described above, please contact the PC chairs for an alternative arrangement. For instance, if you cannot provide us with a VM that contains licensed software (e.g., MATLAB), please contact us so we can find a solution.

Packaging steps for the “functional” badge

  • Document in detail how to reproduce the experimental results of the paper using the artifact; keep this process simple through easy-to-use scripts (see the sketch after this list) and provide detailed documentation that assumes minimal expertise.
  • Ensure the artifact is ready to run. It should work without a network connection, and it should not require the user to install additional software before running; that is, all required packages should be pre-installed on the provided virtual machine.
  • The artifact should use reasonably modest resources (RAM, number of cores), so that the results can be reproduced on various hardware platforms, including laptops. The evaluation should take a reasonable amount of time to complete (no more than 3 to 5 hours). If this is not the case for your benchmarks, make sure to also include simpler benchmarks that do not require a long running time. If this is not possible, please contact the PC and AEC chairs as soon as possible.
  • When possible, include the source code within your virtual machine image and point to the most relevant and interesting parts of the source tree.
  • We encourage authors to include the log files produced by their tools and to point to the relevant log files in the artifact description.
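As an example of an easy-to-use entry point, a single script along these lines keeps the process simple (a minimal sketch; the tool, script, and directory names are hypothetical):

  #!/bin/bash
  # run_all.sh -- reproduce the paper's experiments (hypothetical names throughout).
  set -e
  cd "$HOME/artifact"

  # Run the tool on every benchmark, saving one log file per input.
  mkdir -p results
  for bench in benchmarks/*; do
      ./tool/mytool "$bench" > "results/$(basename "$bench").log"
  done

  # Aggregate the logs into the summary table reported in the paper.
  ./scripts/make_table.sh results > results/table1.txt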

Additional steps for the “available” badge

To earn the available badge, please upload your VM to a permanent archival repository, such as Zenodo, figshare, or Dryad, and use that link in your artifact submission.

Additional steps for the “reusable” badge

Reusable artifacts need to clear a significantly higher bar than functional ones; moreover, this badge might not be applicable to every tool. To earn the reusable badge, please follow these additional steps:

  • Ensure that your tool is usable independently of your artifact VM or container by making your source code and a set of representative experiments available on a public platform (personal website, code-hosting platform, …), under a license that allows reuse. Your source-code release should include detailed documentation (setup and usage instructions).
  • Ensure that the set-up process for your artifact is reproducible by including a script used to build the VM or container (Vagrantfile, Docker script, Bash script) that allows users to automatically reproduce your artifact’s set-up; see the sketch after this list. Try to keep this script reasonably simple.
  • Include instructions explaining how reviewers can exercise your artifact on new inputs; in particular, document which inputs your tool supports. When applicable, make sure that your tool accepts inputs in standard formats (e.g., SMT-LIB).
  • Ensure that your tool is reusable on inputs beyond those included in the paper by describing one experiment beyond those in the paper that a reviewer might run (a new program to feed to your compiler, a new problem for your SMT solver, etc.).
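As an illustration, a Bash build script for a Docker-based artifact might look like this (a minimal sketch under assumed names; the image name, tool, and benchmark file are hypothetical):

  #!/bin/bash
  # build.sh -- rebuild the artifact container from scratch (hypothetical names).
  set -e

  # Build the image from the Dockerfile at the repository root.
  docker build -t cav21-artifact .

  # Smoke test: run the tool on one small input inside the fresh container.
  docker run --rm cav21-artifact ./tool/mytool benchmarks/example.smt2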

Timeline

For provisionally accepted tool papers and regular papers, artifact evaluation will run in two phases:

Phase 1. Kick-the-tires phase (~1 week): reviewers will make sure that the artifact runs and report potential issues back to the authors.

Phase 2. Evaluation (~3 weeks): reviewers will evaluate artifacts.

Artifact badges

If your paper was granted artifact evaluation badges, download the badges, extract them into your paper’s folder, include the snippet below in your preamble, and uncomment the appropriate line:

\usepackage[firstpage]{draftwatermark}

% functional
%\SetWatermarkText{\hspace*{5in}\raisebox{5.in}{\includegraphics[scale=0.8]{badges/aec-badge_f}}}

% available
% \SetWatermarkText{\hspace*{5in}\raisebox{5.in}{\includegraphics[scale=0.8]{badges/aec-badge_a}}}

% functional + available
% \SetWatermarkText{\hspace*{5in}\raisebox{5.in}{\includegraphics[scale=0.8]{badges/aec-badge_f_a}}}

% functional + reusable
% \SetWatermarkText{\hspace*{5in}\raisebox{5.in}{\includegraphics[scale=0.8]{badges/aec-badge_f_r}}}

% functional + available + reusable
% \SetWatermarkText{\hspace*{5in}\raisebox{5.in}{\includegraphics[scale=0.8]{badges/aec-badge_f_a_r}}}

\SetWatermarkAngle{0}

NB: If you use the document class option final, you will need to use \usepackage[firstpage, stamp=true]{draftwatermark} instead.
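For reference, a minimal preamble using the badge snippet might look as follows; this sketch assumes the LNCS class used for CAV papers and the functional + available badge (adjust the badge file and options to your case):

\documentclass{llncs}
\usepackage{graphicx}   % needed for \includegraphics in the watermark
\usepackage[firstpage]{draftwatermark}
% functional + available
\SetWatermarkText{\hspace*{5in}\raisebox{5.in}{\includegraphics[scale=0.8]{badges/aec-badge_f_a}}}
\SetWatermarkAngle{0}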