Feedback-directed random test generation

Download: PDF, implementation.

“Feedback-directed random test generation” by Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, and Thomas Ball. In ICSE '07: Proceedings of the 29th International Conference on Software Engineering (Minneapolis, MN, USA), 2007.
A previous version appeared as Microsoft Research technical report MSR-TR-2006-125 (Redmond, WA), Sep. 2006.

Abstract

We present a technique that improves random test generation by incorporating feedback obtained from executing test inputs as they are created. Our technique builds inputs incrementally by randomly selecting a method call to apply and finding arguments from among previously-constructed inputs. As soon as an input is built, it is executed and checked against a set of contracts and filters. The result of the execution determines whether the input is redundant, illegal, contract-violating, or useful for generating more inputs. The technique outputs a test suite consisting of unit tests for the classes under test. Passing tests can be used to ensure that code contracts are preserved across program changes; failing tests (that violate one or more contracts) point to potential errors that should be corrected. When applied to 14 widely-used libraries comprising 780KLOC, feedback-directed random test generation finds many serious, previously-unknown errors. Compared with both systematic test generation and undirected random test generation, feedback-directed random test generation finds more errors, finds more severe errors, and produces fewer redundant tests.
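
The abstract describes the core generation loop at a high level. The Java sketch below is a minimal illustration of that loop, not the paper's implementation: it repeatedly picks a public method of a class under test, draws a receiver and arguments from a pool of values produced by earlier calls, executes the call, and uses the outcome as feedback, skipping inputs it cannot build, flagging contract violations (here a single reflexive-equality check stands in for the paper's full contract set), and adding new values back into the pool so later inputs can extend them. The class under test (java.lang.String), the seed values, and the contract checked are illustrative assumptions.

// Minimal sketch of a feedback-directed generation loop (not Randoop itself).
// Assumptions: class under test, seed values, and the single contract checked.
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class FeedbackDirectedSketch {
    public static void main(String[] args) {
        Random rand = new Random(0);
        // Pool of previously constructed values, seeded with a few primitives.
        List<Object> pool = new ArrayList<>(List.of("", "abc", 0, 1, -1));
        Method[] methods = String.class.getMethods();  // class under test (assumption)

        for (int i = 0; i < 1000; i++) {
            Method m = methods[rand.nextInt(methods.length)];
            // Find a receiver and arguments among previously constructed values.
            Object receiver = pickAssignable(pool, m.getDeclaringClass(), rand);
            Class<?>[] paramTypes = m.getParameterTypes();
            Object[] callArgs = new Object[paramTypes.length];
            boolean feasible = receiver != null;
            for (int p = 0; feasible && p < paramTypes.length; p++) {
                callArgs[p] = pickAssignable(pool, paramTypes[p], rand);
                feasible = callArgs[p] != null;
            }
            if (!feasible) continue;  // no suitable values in the pool yet

            try {
                Object result = m.invoke(receiver, callArgs);
                // Example contract: a non-null result must equal itself.
                if (result != null && !result.equals(result)) {
                    System.out.println("Contract violation in " + m.getName());
                } else if (result != null && !pool.contains(result)) {
                    pool.add(result);  // feedback: reuse this value in later inputs
                }
            } catch (Exception e) {
                // Treat the input as illegal for this sketch; a real tool would
                // distinguish precondition violations from genuine errors.
            }
        }
        System.out.println("Pool grew to " + pool.size() + " values");
    }

    // Returns a pool element assignable to the requested type, or null if none exists.
    static Object pickAssignable(List<Object> pool, Class<?> type, Random rand) {
        List<Object> candidates = new ArrayList<>();
        for (Object o : pool) {
            if (wrap(type).isInstance(o)) candidates.add(o);
        }
        return candidates.isEmpty() ? null : candidates.get(rand.nextInt(candidates.size()));
    }

    // Maps common primitive parameter types to wrapper classes for isInstance checks.
    static Class<?> wrap(Class<?> type) {
        if (type == int.class) return Integer.class;
        if (type == long.class) return Long.class;
        if (type == boolean.class) return Boolean.class;
        if (type == char.class) return Character.class;
        return type;
    }
}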

BibTeX entry:

@inproceedings{PachecoLET2007,
   author = {Carlos Pacheco and Shuvendu K. Lahiri and Michael D. Ernst
	and Thomas Ball},
   title = {Feedback-directed random test generation},
   booktitle = {ICSE '07: Proceedings of the 29th International Conference
	on Software Engineering},
   publisher = {IEEE Computer Society},
   address = {Minneapolis, MN, USA},
   year = {2007}
}