There is currently little empirical evidence about which tools, methods, processes, and languages lead to secure software. We present the experimental design of the Build It Break It secure programming contest, which aims to provide such evidence. The contest also offers educational value to participants, who gain experience developing programs in an adversarial setting. We present preliminary results from previous runs of the contest demonstrating that it works as designed and yields the desired data. We are in the process of scaling the contest to collect larger data sets, with the goal of establishing statistically significant correlations between various development factors and software security.
@inproceedings{ruef15bibifi,
  title     = {Build It Break It: Measuring and Comparing Development Security},
  author    = {Andrew Ruef and Michael Hicks and James Parker and Dave Levin and Atif Memon and Jandelyn Plane and Piotr Mardziel},
  booktitle = {Proceedings of the USENIX Workshop on Cyber Security Instrumentation and Test (CSET)},
  month     = aug,
  year      = 2015
}