FixReverter: A Realistic Bug Injection Methodology for Benchmarking Fuzz Testing. Zenong Zhang, Zach Patterson, Michael Hicks, and Shiyi Wei. In Proceedings of the USENIX Security Symposium (USENIX SEC), August 2022. Distinguished Paper.

Fuzz testing is an active area of research with proposed improvements published at a rapid pace. Such proposals are assessed empirically: Can they be shown to perform better than the status quo? Such an assessment requires a benchmark of target programs with well-identified, realistic bugs. To ease the construction of such a benchmark, this paper presents FixReverter, a tool that automatically injects realistic bugs in a program. FixReverter takes as input a bugfix pattern which contains both code syntax and semantic conditions. Any code site that matches the specified syntax is undone if the semantic conditions are satisfied, as checked by static analysis, thus (re)introducing a likely bug. This paper focuses on three bugfix patterns, which we call conditional-abort, conditional-execute, and conditional-assign, based on a study of fixes in a corpus of Common Vulnerabilities and Exposures (CVEs). Using FixReverter we have built RevBugBench, which consists of 10 programs into which we have injected nearly 8,000 bugs; the programs are taken from FuzzBench and Binutils, and represent common targets of fuzzing evaluations. We have integrated RevBugBench into the FuzzBench service, and used it to evaluate five fuzzers. Fuzzing performance varies by fuzzer and program, as desired and expected. Overall, 219 unique bugs were reported, 19% of which were detected by just one fuzzer.


@inproceedings{zhang22fixreverter,
  title = {FixReverter: A Realistic Bug Injection Methodology for Benchmarking Fuzz Testing},
  author = {Zenong Zhang and Zach Patterson and Michael Hicks and Shiyi Wei},
  booktitle = {Proceedings of the USENIX Security Symposium (USENIX SEC)},
  month = aug,
  year = 2022,
  note = {\textbf{Distinguished Paper}}
}

This file was generated by bibtex2html 1.99.