Journal Special Issue on Fuzzing: What about Preregistration?

Co-authored with Marcel Böhme (Monash University), László Szekeres (Google), and Baishakhi Ray (Columbia University)

We think that the incentive structure for fuzzing research is broken, so we would like to introduce preregistration to fix it. Preregistration is a publication model in which a submitted article is primarily evaluated based on (i) the significance and novelty of the hypotheses or techniques, and (ii) the soundness and reproducibility of the methodology specified to validate the claims or hypotheses. The actual evaluation or experimentation (apart from some supporting preliminary results) is conducted only after the paper has been accepted in principle. The final acceptance depends only on the methodology that was ultimately followed, not on the final results.

... the full blog post can be read on the FuzzBench blog

Measuring the coverage achieved by symbolic execution

Cristian Cadar and Timotej Kapus

This blog post is meant for symbolic execution users and developers who are interested in measuring the coverage achieved by a symbolic execution tool as a proxy for the effectiveness of a symbolic execution campaign. We use KLEE as an example tool, but the discussion is applicable to other implementations as well. We also use statement coverage in our discussion, but other forms of coverage raise similar issues.

Suppose you run KLEE on a program for a few hours and want to measure the coverage achieved by KLEE in this time. There are two main ways to do this: rely on the coverage reported by KLEE, or measure the coverage externally using a tool such as GCov. We describe each in turn and then discuss how they differ from one another and when each should be used.

Coverage measured internally ("internal coverage")

The first option is to simply rely on the coverage reported by KLEE.  A tool like KLEE considers a statement
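As a small illustration of the external approach, the sketch below (not part of the original post) computes statement coverage from a gcov-style text report. It relies only on the documented `.gcov` line format, where each line is `execution_count:line_number:source_text`, with `-` marking non-executable lines and `#####` marking executable lines that were never run; the sample report embedded in the script is hypothetical.

```python
def statement_coverage(gcov_text):
    """Return (covered, executable) statement counts for one gcov-style report."""
    covered = executable = 0
    for line in gcov_text.splitlines():
        fields = line.split(":", 2)
        if len(fields) < 3:
            continue  # not a count:line:source record
        count = fields[0].strip()
        if count == "-":
            continue  # non-executable line (e.g. declarations, blanks)
        executable += 1
        if count != "#####":
            covered += 1  # executed at least once
    return covered, executable

# Hypothetical excerpt of a .gcov report, for demonstration only:
sample = """\
        -:    1:#include <stdio.h>
        2:    3:int abs_val(int x) {
        2:    4:  if (x < 0)
        1:    5:    return -x;
        1:    6:  return x;
    #####:    8:void never_called(void) { puts("dead"); }
"""

covered, executable = statement_coverage(sample)
print(f"{covered}/{executable} statements covered")  # 4/5 statements covered
```

In a real external-measurement setup, these `.gcov` files would be produced by compiling the program with `gcc --coverage`, replaying the inputs generated by KLEE on the native binary, and running `gcov` on the resulting counters.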