While trying to report the quality of our software weekly, we had a lot of discussion about which numbers to use. None of the quality calculations really represents reality. In fact, I believe it's impossible to calculate code quality, because it needs human interpretation. Our weekly quality report rests on two pillars: the first is a human one, the second a calculated one. The human factor we use is whether code reviews were done; in my opinion this is the most important one.
The second pillar is the calculated one. The only information we have at this time is code coverage. Because we work with a lot of "pre-unit-test" code, the developers weren't actually happy with using raw code coverage. This is how we now represent the change in code over the last week and the effect those changes had on code coverage (a colleague came up with the calculation):
Difference in coverage (percentage) over the last week = ((Ct - Cl) / T) - ((Nt - Nl) / T)
These are the variables:
Ct = Covered code blocks this week
Cl = Covered code blocks last week
Nt = Non-covered code blocks this week
Nl = Non-covered code blocks last week
T = |(Ct - Cl) + (Nt - Nl)| = The total change in code blocks
Translated: the percentage is the growth of covered blocks minus the growth of non-covered blocks. If it is positive, the changes had a positive effect on the code coverage. The percentage should also meet some kind of norm; we chose above 75% as good and below -75% as bad.
Another colleague suggested one of the following metrics:
The last one isn't so much about quality as about risk, but the first one is probably usable. We still have to study usable metrics some more, and figure out how to apply them in our daily builds. If we find a better metric, I'll let you know. If you have any suggestions, please let me know!