Typical security contests focus on breaking or mitigating the impact of buggy systems. We present the Build-it, Break-it, Fix-it (BIBIFI) contest, which aims to assess the ability to securely build software, not just break it. In BIBIFI, teams build specified software with the goal of maximizing correctness, performance, and security. The latter is tested when teams attempt to break other teams' submissions. Winners are chosen from among the best builders and the best breakers. BIBIFI was designed to be open-ended: teams can use any language, tool, process, etc. that they like. As such, contest outcomes shed light on factors that correlate with successfully building secure software and breaking insecure software. We ran three contests involving a total of 156 teams and three different programming problems. Quantitative analysis from these contests found that the most efficient build-it submissions used C/C++, but submissions coded in a statically type-safe language were 11× less likely to have a security flaw than C/C++ submissions. Break-it teams that were also successful build-it teams were significantly better at finding security bugs.
@article{parker20bibifi,
  author    = {James Parker and Michael Hicks and Andrew Ruef and Michelle L. Mazurek and Dave Levin and Daniel Votipka and Piotr Mardziel and Kelsey R. Fulton},
  title     = {Build It, Break It, Fix It: Contesting Secure Development},
  journal   = {{ACM} Transactions on Privacy and Security (TOPS)},
  volume    = {23},
  number    = {2},
  articleno = {10},
  numpages  = {36},
  year      = {2020},
  month     = apr
}