Substantiating Programmer Variability
In the same issue as the Dickey paper, Bill Curtis published a short follow-up article putting forward other data in support of the high degree of variability, given the problems with Sackman's data.
The approach this time was simpler, although still aimed at debugging: 27 programmers were given a modest-sized Fortran program containing a simple bug, and the time each took to find it was measured. (There were actually two such experiments, but the first was deemed too difficult.) The times taken were then grouped and tabulated:
| Mins to Find | # of People |
| --- | --- |
(one programmer could not find the bug at all, giving up after 67 minutes)
Although there is again a factor of more than 20:1 between the best and worst times, Curtis points out that this ratio depends on having both a brilliant programmer and a horrid one in the same sample, and that it is thus not a particularly sustainable measure of performance variability among programmers.
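Curtis's objection can be sketched numerically. The times below are hypothetical, chosen only to illustrate the point: a best-to-worst ratio is determined entirely by the two endpoints of the sample, so dropping just the single fastest and single slowest programmer collapses it.

```python
# Hypothetical debugging times in minutes -- NOT Curtis's actual data,
# just an illustrative skewed sample with one very slow outlier.
times = [5, 6, 7, 8, 9, 10, 11, 12, 14, 18, 25, 67]

# The headline best-to-worst ratio depends only on the two extremes.
ratio = max(times) / min(times)
print(f"best-to-worst ratio: {ratio:.1f}:1")  # 13.4:1

# Remove just the single best and single worst performer and the
# apparent "variability" shrinks dramatically.
trimmed = sorted(times)[1:-1]
trimmed_ratio = max(trimmed) / min(trimmed)
print(f"trimmed ratio: {trimmed_ratio:.1f}:1")  # ~4.2:1
```

Because one outlier on either end can multiply the ratio severalfold, a statistic like this says more about who happened to be in the sample than about the underlying spread of programmer ability.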
In addition he points out that:
Substantial variation in programmer performance can be attributed to individual differences in experience, motivation, intelligence etc. Thus, important productivity gains could be realized through improved programmer selection, development, and training techniques. These gains would be achieved through eliminating the skewed tails often observed in distributions of programmer performance data.
As with the original Sackman conclusion, the emphasis here is on removing the weaker programmers, not on attempting to find the brilliant ones, although potentially by training rather than by not hiring: Curtis points out that the programmer who failed to find the bug at all improved substantially in later trials, once he had gained more programming experience.