Model checking, a successful analysis technique, can also be employed to generate test cases from formal models. When using a model checker for test-case generation, we leverage the witness (or counterexample) generation capability of model checkers to construct test cases: test criteria are expressed as temporal properties, and the witness traces generated for these properties are instantiated to create complete test sequences satisfying the criteria. In this report we describe an experiment investigating the fault-finding capability of test suites generated to satisfy three specification coverage metrics proposed in the literature: state, transition, and decision coverage. Our findings indicate that although these coverage metrics may seem reasonable for measuring the adequacy of a test suite, they are unsuitable as targets for generating test suites. In short, the generated test sequences technically provide adequate coverage, but do so in a way that exercises only a small portion of the formal model. We conclude that automated testing techniques must be pursued with great caution and that new coverage criteria targeting formal specifications are needed.
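The core idea of casting coverage obligations as temporal properties can be sketched as follows. The model format, property syntax, and function name below are illustrative assumptions for exposition, not the paper's actual tooling: for each transition of a finite-state model we emit a "trap" property asserting the transition never fires, so a model checker's counterexample to that property is a witness trace covering the transition.

```python
# Sketch of trap-property generation for transition coverage.
# Illustrative assumption: the model is a list of (source, target)
# state pairs, and properties are written in LTL-like syntax.

def trap_properties(transitions):
    """Return one trap property per transition.

    transitions: iterable of (source_state, target_state) pairs.
    A counterexample to each property is a trace that actually
    fires the transition -- the raw material for a test sequence.
    """
    props = []
    for src, dst in transitions:
        # G !(state = src & X state = dst): "this transition never fires".
        props.append(f"G !(state = {src} & X state = {dst})")
    return props

# A toy two-state on/off model with three transitions.
model = [("Off", "On"), ("On", "Off"), ("On", "On")]
for p in trap_properties(model):
    print(p)
```

Feeding each generated property to a model checker and instantiating the resulting counterexample traces yields a test suite that, by construction, covers every transition in the model.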
Original language: English (US)
Number of pages: 9
Journal: Proceedings of IEEE International Symposium on High Assurance Systems Engineering
State: Published - Jun 22 2004
Event: Eighth IEEE International Symposium on High Assurance Systems Engineering - Tampa, FL, United States
Duration: Mar 25 2004 → Mar 26 2004