Measuring Programming Quality and Productivity

Oct 1st, 2002 by Tony in * commentary, * papers

In the field of computer programming, the lack of precise and unambiguous units of measure for quality and productivity has been a source of considerable concern to programming managers throughout the industry.

This paper, from the IBM Systems Journal in 1978, is one of the earliest by Capers Jones on software productivity, yet little seems to have changed in the 25 years since this assessment was made.

Jones discusses the problems with the two most common units of measurement used in IBM at the time: Lines of Code per Programmer Month, and Cost per Defect. He shows that these measures can slow the acceptance of new methods, because when measured this way a new method can give the incorrect impression of being less effective than the technique it replaces, even though the older approach was actually more expensive.
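A quick bit of arithmetic makes the cost-per-defect trap concrete (the figures here are illustrative, not from the paper):

```python
# Hypothetical figures, not from the paper: two ways of producing
# the same program, measured by cost per defect.
old_defects, old_cost = 100, 50_000   # old method: many defects reach testing
new_defects, new_cost = 20, 20_000    # new method: most defects prevented up front

print(old_cost / old_defects)   # 500.0  -> $500 per defect
print(new_cost / new_defects)   # 1000.0 -> $1000 per defect, "twice as expensive"
print(new_cost < old_cost)      # True   -> yet the total cost is actually lower
```

Because the better method leaves fewer defects over which to spread the fixed costs of finding them, its cost per defect rises even as its total cost falls.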

In the last 25 years, Jones has devoted much of his time to studying productivity and promoting a vast array of metrics, but in this paper he promotes one key coarse measure: “Cumulative defect removal efficiency”, the fraction of all defects (found before and after release) that were found before release. Plotting this against “total defects per KLOC” then produces a useful graph of the “maintenance potential” of a program.
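As a rough sketch of the metric (the function names and example figures below are mine, not Jones's):

```python
def defect_removal_efficiency(found_before_release, found_after_release):
    """Fraction of all known defects removed before the software shipped."""
    total = found_before_release + found_after_release
    return found_before_release / total if total else 1.0

def defects_per_kloc(total_defects, kloc):
    """Total defect density in defects per thousand lines of code."""
    return total_defects / kloc

# Hypothetical project: 45 defects caught before release, 5 found in the
# field, in a 10 KLOC program.
print(defect_removal_efficiency(45, 5))   # 0.9 -> 90% removal efficiency
print(defects_per_kloc(45 + 5, 10))       # 5.0 -> 5 defects per KLOC
```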

More interestingly, this seems to have been one of the first papers to note that productivity rates decline as program size increases. Jones reports that programs of less than 2 KLOC usually take about 1 programmer month per KLOC, whereas programs of over 512 KLOC take 10 programmer months per KLOC. Similarly, when it comes to maintenance, smaller changes carry larger unit costs, because the base program must be understood even to add or modify a single line of code, and “the overhead of the learning curve exerts an enormous leverage on small changes”.
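Put in terms of total effort, using the two rates quoted above (the helper function and the comparison are illustrative, not from the paper):

```python
def total_effort_pm(kloc, pm_per_kloc):
    """Total effort in programmer months: size times the unit rate."""
    return kloc * pm_per_kloc

small = total_effort_pm(2, 1)      # 2 KLOC at 1 PM/KLOC    -> 2 programmer months
large = total_effort_pm(512, 10)   # 512 KLOC at 10 PM/KLOC -> 5120 programmer months

print(large / small)  # 2560.0: a 256-fold size increase costs 2560 times the effort
```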
