As a software architect, I'm constantly exposed to people quoting high code coverage metrics as a measure of test quality. Eh, no… A code coverage metric is not a measure of test quality! Code coverage lets you determine which parts of your application are not tested. It tells you nothing about how well tested your application is. After all, it's not particularly difficult to obtain high code coverage figures without doing any actual testing.
Thus the paradox: a lower code coverage metric is actually more valuable than a high one, because it gives more guidance on what still needs to be tested. In fact, I would argue it's hard to imagine a code coverage value of over 70% being of any value whatsoever. Code coverage only measures the degree to which your application is exercised within the context of a unit test, not whether anything is actually being tested. Some people would argue that merely exercising code within a unit test is testing the code. It isn't. But it is a mechanism frequently used as a scheme to get high code coverage numbers.
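To make that concrete, here is a minimal sketch in Python (the function and test names are hypothetical, invented for illustration). Both tests exercise every line of `apply_discount`, so a coverage tool reports the same figure for each, but only the second one actually tests anything.

```python
def apply_discount(price, rate):
    """Return price reduced by rate (e.g. 0.1 for a 10% discount)."""
    if rate < 0 or rate > 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)

def test_coverage_only():
    # Hits every line, including the error branch, but asserts
    # nothing. A bug in the arithmetic would pass unnoticed,
    # yet the coverage report shows 100%.
    apply_discount(100, 0.1)
    try:
        apply_discount(100, 2)
    except ValueError:
        pass

def test_actually_tests():
    # Identical coverage, but the results are checked.
    assert apply_discount(100, 0.1) == 90
    assert apply_discount(50, 0.5) == 25
```

Both functions produce the same green coverage bar; only the second would catch a regression in the discount calculation.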
At present, there is no automated mechanism for determining unit test quality. I suspect a mechanism rather like the code quality metrics, such as cyclomatic complexity, will eventually provide some degree of measurement of automated test quality. Right now, there's nothing. The whole area of automated testing is constantly mutating, with techniques like IoC, mock objects and parameterized tests surfacing every few years to revise what's considered best practice.
In short, if someone quotes a figure of 100% code coverage for a non-trivial application, one might be forgiven for thinking they're either a liar or an idiot, and quite possibly both.