Sunday, February 7, 2010

Software Testing Techniques, an Empirical Approach

Proper software testing regimes are a cornerstone of effective software engineering. Progress has been made in teaching students sound testing techniques, but there is still room for improvement. My master's research supervisor and I conducted a study designed to empirically measure the difference in ability between student and professional software testers, and to elicit from the experts behaviours or techniques that could be used to enhance the undergraduate curriculum.

Our experimental setup consisted of in-lab observational sessions in which subjects wrote thorough suites of JUnit tests for sample software we had created. Subjects were drawn from the University of Toronto's undergraduate computer science student body and from professional developers in the Greater Toronto Area. The test code and video logs captured during these sessions were then examined for trends that distinguished the student and professional groups.

Our intuition going into the study was that professionals would find more defects with their test suites, with the advantage stemming from some measurable factor such as the number of tests written, lines per test, or code coverage per test. Analysis of these metrics did not confirm our hypotheses, however: students and professionals performed equally well in terms of the number of bugs found. The students' test code itself contained more defects, though, and, more importantly, the types of bugs found differed strikingly between the two groups.

Bugs in the sample code were broken down into two categories: stateless and stateful. A stateless bug is uncovered by passing invalid values to a single method invocation, which then returns an incorrect result or throws an exception. A stateful bug occurs when a method call corrupts the object's internal state, so that subsequent calls perform incorrectly. Students found a mix of stateless and stateful bugs in the code, with a strong majority being stateless. The professionals sampled found exclusively stateful defects. There are several possible explanations for this effect, although no evidence to support one over the others is immediately apparent.
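To make the distinction concrete, here is a minimal JUnit 4 sketch pairing one test with each bug category. BoundedStack and its push and pop methods are hypothetical stand-ins, not the actual sample software from the study; the shape of the tests is what matters.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class BoundedStackTest {

        // Stateless: a single call with an invalid input. If push mishandles
        // null (wrong result or wrong exception), this test fails on its own.
        @Test(expected = IllegalArgumentException.class)
        public void pushRejectsNullElements() {
            new BoundedStack(10).push(null);
        }

        // Stateful: each call is individually reasonable, but if an earlier
        // call corrupts internal state (say, a size counter), only a later
        // call exposes the failure.
        @Test
        public void popReturnsElementsInLastInFirstOutOrder() {
            BoundedStack stack = new BoundedStack(10);
            stack.push("a");
            stack.push("b");
            assertEquals("b", stack.pop()); // may pass even with corrupted state
            assertEquals("a", stack.pop()); // the corruption surfaces here
        }
    }

Note that catching the stateful variety requires chaining calls and asserting on behaviour several steps removed from the faulty one, rather than probing a single invocation in isolation.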

The full text can be found here.

1 comment:

Anonymous said...

Nicely presented and documented, thank you.

I found myself bracing in anticipation at your hypotheses; I recognised them all as assumptions I hold, so I became anxious that the experiment would not support them :-)

It is sobering to see one's assumptions shown to be irrelevant :-/
