‘* papers’ Category Archives

27 Sep

Better Productivity through Avoidance

by Tony in * commentary, * papers

I also found an interesting recent article by Barry Boehm on “Managing Software Productivity and Reuse” [pdf], which details the results of an extensive analysis conducted with the DOD to identify savings over a business-as-usual approach.

In this study, they discovered that you could achieve an 8% improvement through “working harder”, a 17% improvement through “working smarter”, and a 47% improvement through “work avoidance”.

Better still, all three are mostly complementary, and the gains can be accumulated by avoiding whatever work is possible and working smarter and harder on the rest.
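
As a rough back-of-the-envelope illustration (my own sketch, not Boehm’s, and assuming the three improvements simply compound multiplicatively):

```python
# Hypothetical sketch: combine Boehm's quoted improvements, assuming
# they compound multiplicatively rather than simply adding up.
gains = {"working harder": 0.08, "working smarter": 0.17, "work avoidance": 0.47}

combined = 1.0
for name, gain in gains.items():
    combined *= 1.0 + gain

print(f"combined productivity factor: {combined:.2f}")  # ~1.86
print(f"roughly {100 * (combined - 1):.0f}% better than business as usual")
```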

He also provides a useful graph of how productivity has risen over the last 40 years, through the use of assembler, high-level languages, databases, regression testing, prototyping, 4GLs, sub-second time sharing, small-scale reuse, OOP and large-scale reuse: broadly, an order-of-magnitude increase in productivity every 20 years.

He also argues that with “stronger technical foundations for software architectures and component composition; more change-adaptive components, connectors, and architectures; creating more effective reuse incentive structures; domain engineering and business case analysis techniques; better techniques for dealing with COTS integration; and creating appropriate mechanisms for dealing with liability issues, intellectual property rights, and software artifacts as capital assets” even greater gains can be achieved.

27 Sep

Assessing the Impact of Reuse on Software Quality and Productivity

by Tony in * commentary, * papers

I’ve been trying to find more information on the frequently cited “factor of 10” productivity differences between teams or individuals, but most of the primary articles don’t seem to be available on-line. A trip to the library is probably in order again early next week.

I did come across this study from 1995, however, which set out to measure the impact of reuse in OO systems. A graduate class was divided into teams, each of which was set the same programming task: to develop a system for a video rental store. Generic and domain specific libraries were made available for reuse, but they were free to choose whether or not to use these.

So far so good. Where the study seems to go awry, however, is in how productivity was actually measured: a team’s productivity was taken as “lines of code delivered” divided by “hours spent on analyzing, designing, implementing and repairing the system”. The authors point out that measures other than LOC “could have been used, but this one fulfilled our requirements and could be collected easily. More importantly, we are looking at the relative size of systems addressing similar requirements and, therefore, of similar functionality.”

This seems most bizarre. If the systems all address the same requirements, why does the number of lines of code produced matter in the slightest? Surely all that matters is the time taken to produce the system, especially as this is meant to be testing reuse. If one team reuses enough code to write the system in 10,000 lines, taking 100 hours (productivity = 100), but another team writes an entire 250,000 LOC system from scratch, taking 1250 hours (productivity = 200), is the second team really twice as productive?

But the paper gets stranger still. It counts the reused code within the total LOC for the team, which distorts the figures for a team that pulls in a 10,000-line library providing more functionality than they actually need (compare a team writing a 1,000-line subset of this in 40 hours with another team who only need to write 10 extra lines to use the library, but who then have an extra 10,000 lines in their final total).
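
To make the distortion concrete, here’s a small sketch of the metric applied to the hypothetical teams above (my own illustrative numbers, and I’ve simply assumed the library-reusing team also spends 40 hours):

```python
# Sketch of the study's metric: productivity = LOC delivered / hours spent.
# All numbers are the hypothetical examples from this post; the 40 hours
# for the library-reusing team is my own assumption.

def productivity(loc_delivered: int, hours: float) -> float:
    return loc_delivered / hours

# Distortion 1: a bigger from-scratch system scores higher than heavy reuse.
reuse_heavy   = productivity(10_000, 100)     # -> 100.0
from_scratch  = productivity(250_000, 1_250)  # -> 200.0, "twice as productive"

# Distortion 2: counting a pulled-in library inflates delivered LOC.
subset_writer = productivity(1_000, 40)       # writes a 1,000-line subset -> 25.0
library_user  = productivity(10_000 + 10, 40) # writes 10 lines, "delivers" 10,010 -> ~250

print(reuse_heavy, from_scratch, subset_writer, library_user)
```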

Using this methodology, the paper manages to show a productivity difference of 8.74 between the top and bottom teams, with a factor of 4.8 in LOC delivered. However, if you work back from their figures to calculate the lines of code actually written by each team (as opposed to merely appearing in their completed system), there only ends up being a factor of 1.77 in the LOC and 2.6 in the time taken: still significant, but hardly as impressive:

Project  LOC Delivered  Reused  Productivity  Reuse rate  LOC Written*  Time Spent*
1                24698   16776        159.34      67.92%          7922          155
2                 5105     113         18.23       2.21%          4992          280
3                11687    3061         32.01      26.19%          8626          365
4                10390    1545          34.3      14.87%          8845          303
5                 8173    3273          51.4      40.05%          4900          159
6                 8216    3099         31.12      37.72%          5117          264
7                 9736    4206         69.54      43.20%          5530          140
8                 5255       0          19.9       0.00%          5255          264

(Columns marked * are extrapolated from the published results.)
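
The extrapolation is just simple arithmetic on the published columns, since productivity is defined as LOC delivered divided by hours; a quick sketch:

```python
# Derive the two extrapolated columns from the figures published in the paper:
#   LOC Written = LOC Delivered - Reused
#   Time Spent  = LOC Delivered / Productivity
published = [
    # (project, loc_delivered, reused, productivity)
    (1, 24698, 16776, 159.34),
    (2,  5105,   113,  18.23),
    (3, 11687,  3061,  32.01),
    (4, 10390,  1545,  34.3),
    (5,  8173,  3273,  51.4),
    (6,  8216,  3099,  31.12),
    (7,  9736,  4206,  69.54),
    (8,  5255,     0,  19.9),
]

for project, delivered, reused, prod in published:
    written = delivered - reused
    hours = delivered / prod
    print(f"project {project}: wrote {written} LOC in about {hours:.0f} hours")
```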

It’s also notable that the fastest and slowest teams are entirely different under each approach, and that the “outlying” teams singled out for special explanation in the paper fare considerably differently.

Team 6, which seems to have a low productivity in the original paper “considering its reuse rate”, is explained in terms of the team providing a particularly sophisticated “gold-plated” GUI. By solely measuring the time taken to do the task, however, this team is one of the fastest.

Gotta go find those other papers…

29 Jul

Flexible Service Capacity – Optimal Investment and the Impact of Demand Correlation

by Tony in * papers

Also in the Mar/Apr issue of Operations Research is an article on modelling demand for services when upgrades are available. Many service industries have policies in place to cope with excess demand – upgrades to a higher model of rental car, or to business-class seats on an airplane, for instance. Although these policies are well documented within most organisations, they’re apparently poorly monitored for purposes other than internal policing. The authors posit that “significant benefits can be gained if the possibility of service substitution is accounted for at the time of capacity planning, rather than only at the time of service delivery.”
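
I haven’t worked through the paper’s model, but the intuition is easy to sketch: if upgrades are allowed, spare premium capacity can absorb overflow demand for the basic service, so the two capacities shouldn’t be planned in isolation. A toy simulation with entirely made-up numbers (not the paper’s model):

```python
import random

# Toy illustration (my own numbers, not the paper's model): spare premium
# capacity can absorb overflow demand for the basic service, so ignoring
# substitution at planning time understates how much demand gets served.
random.seed(1)

BASIC_CAPACITY, PREMIUM_CAPACITY = 80, 30
TRIALS = 100_000

served_without_upgrades = served_with_upgrades = 0
for _ in range(TRIALS):
    basic_demand = random.randint(60, 110)
    premium_demand = random.randint(10, 40)

    # No substitution: each class is served only from its own capacity.
    served_without_upgrades += (min(basic_demand, BASIC_CAPACITY)
                                + min(premium_demand, PREMIUM_CAPACITY))

    # With upgrades: leftover premium capacity absorbs basic overflow.
    premium_served = min(premium_demand, PREMIUM_CAPACITY)
    spare_premium = PREMIUM_CAPACITY - premium_served
    basic_served = min(basic_demand, BASIC_CAPACITY + spare_premium)
    served_with_upgrades += premium_served + basic_served

print("avg served, no upgrades:  ", served_without_upgrades / TRIALS)
print("avg served, with upgrades:", served_with_upgrades / TRIALS)
```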

29 Jul

Survival Analysis Methods for Personal Loan Data

by Tony in * papers

The Mar/Apr issue of Operations Research has an interesting article on credit scoring models. Traditionally, credit scoring has been about minimising the risk of making loans to customers likely to default on the loan. But over the last few years people have started to realise that this probably isn’t enough. In fact, even a customer with a high likelihood of default could actually be profitable, as long as the default comes far enough in the future for the interest payments made before that point to exceed the losses caused by the default.
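
As a rough illustration of that break-even point, here’s a toy sketch with my own numbers and a deliberately simplified interest-only loan (nothing from the paper itself):

```python
# Toy break-even sketch (my own simplified numbers, not the paper's model):
# an interest-only loan where the customer defaults after `months_paid`
# months and the unrecovered principal is then written off.
def profit_if_default_after(months_paid, principal=10_000, annual_rate=0.15,
                            recovery_rate=0.40):
    interest_received = principal * (annual_rate / 12) * months_paid
    loss_on_default = principal * (1 - recovery_rate)
    return interest_received - loss_on_default

for months in (6, 12, 24, 48, 60):
    print(f"default after {months:2d} months -> "
          f"profit {profit_if_default_after(months):8.2f}")
# With these numbers the lender breaks even if the default comes
# after about 48 months.
```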

The real issue for most lenders now is the risk of the customer paying off their loan too early for the lender to make any money. This often happens when the borrower switches to a competitor, a practice that has become particularly rife in the credit card industry: more and more customers move their balance to a new card with a super-low introductory rate (often 0%), and then move on to another competitor when the introductory period runs out. Similar things are happening more and more in the mortgage industry, and increasingly in “normal” personal loans.

Traditional “likelihood of default” tables provide average figures based on a variety of borrower characteristics tabulated against the purpose of the loan, which can be used to decide how risky the loan would be. Unsurprisingly, the likelihood of switching in each of these cases is very different from the likelihood of defaulting.

I’m curious now how long it’s going to be before information like this starts to be taken into account on people’s credit reports, or asked for on applications. Most credit card applications ask how many other credit cards you have, but I don’t think I’ve seen any yet that ask how many you have had…

24 Jun

An Experiment in Software Prototyping Productivity

by Tony in * papers

A wonderful study of a government project that was prototyped in a variety of languages (Ada, Haskell, Lisp, C++, Awk, etc.).

Unlike several other such studies, each prototype was written by a senior programmer in that language, rather than all being written by the same developer. Although the results are impressive in and of themselves (Rapide took 54 hours to develop, Lisp 3 hours; C++ took 1105 LOC, Haskell 85 lines), more interesting is the response to the Haskell solution, which was written, literate-programming style, as executable LaTeX. Because the code was so small but the documentation so large (85% of the solution was documentation), the reviewers assumed that this was not a complete, tested, executable program, but just a specification with some top-level design!

[Thanks to Malcolm Wallace for the link.]