The Dickey paper helpfully reproduces a table of the key data from the Sackman experiment. (I haven’t been able to find the original version of the Sackman paper yet, so I haven’t been able to verify the data, but I’ll assume for now that IEEE Transactions verified it when they published Dickey’s paper!)
I’ve eliminated the developers who produced their solutions in machine code, the one developer who had no prior experience of time-sharing, and the developer whose first experience of JTS was this test, leaving a result sample of 7 developers. I’ve also combined the time taken to code each solution with the time taken to debug it. The average debug time for the on-line vs. the off-line group on the more difficult test (Algebra) was 29 hours vs. 28 hours, so I’m choosing not to further subdivide according to platform.
The results are quite illuminating:
In each case the ratio between the worst time and the median is approximately 2:1. From the median to the best it is just over 2:1 for Algebra, and just over 3:1 for Maze: the “superprogrammers” don’t seem that much better any more.
Even more notable is the performance of Programmer 1. Although he is the fastest at solving the Algebra task, he is one of the slowest at the Maze task (he spent far more time than any of the other programmers developing his Maze solution, so the issue of on-line vs. off-line debugging seems not to be relevant here either).
When we take the total time spent on the two tasks combined the picture is even more surprising:
Now we have a factor of 2.25:1 from median to worst, and just 1.8:1 from best to median.
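The spread arithmetic I’m using is simple enough to sketch. The sample times below are purely hypothetical (the real figures are in Dickey’s table), but the function shows exactly how the worst:median and median:best ratios are derived:

```python
# Sketch of the spread calculation. The sample times are hypothetical
# illustrations, NOT the actual data from the Sackman/Dickey tables.

def spread_ratios(times):
    """Return (worst/median, median/best) for a list of task times in hours."""
    times = sorted(times)
    n = len(times)
    # Median: middle value for an odd-sized sample, mean of the two
    # middle values for an even-sized one.
    if n % 2:
        median = times[n // 2]
    else:
        median = (times[n // 2 - 1] + times[n // 2]) / 2
    return times[-1] / median, median / times[0]

# Seven hypothetical combined coding+debugging times (hours):
sample = [20, 28, 35, 45, 60, 75, 90]
worst_to_median, median_to_best = spread_ratios(sample)
print(f"worst:median = {worst_to_median:.2f}:1")  # → worst:median = 2.00:1
print(f"median:best  = {median_to_best:.2f}:1")   # → median:best  = 2.25:1
```

Comparing spreads against the median rather than between the two extremes is what deflates the headline number: a 2:1 spread each side of the median multiplies out to a far smaller gap than the 28:1 figure suggests.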
In case all these numbers have made your eyes glaze over, I’ll restate it: this is the test that is often cited as showing a productivity variance of 28:1!