The Size/Complexity Dynamic

Sep 2nd, 2002 by Tony in commentary, QSM1

Human brain capacity is more or less fixed, but software complexity grows at least as fast as the square of the size of the project.

As we try to solve bigger and bigger problems, the Size/Complexity Dynamic drives us from a previously successful pattern to a new, untried pattern. Were it not for our insatiable ambition, we could rest comfortably with our present pattern until they carried us away in our rocking chairs. Many organizations have done just that, usually because their customers have no further ambitions for better software.

When there is ambition for more value in software, it quickly pushes against the barrier of the Size/Complexity Dynamic unless we can alter the capacity of our brain. Organizations sometimes do this by hiring smarter people, but there’s a definite limit to that tactic.

We can’t alter the capacity of our brain, but we can alter how much of that capacity we use, as well as what we use it for. That’s why software engineering was invented.

— Jerry Weinberg, Quality Software Management Vol 1, Chapter 9
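One way to make Weinberg’s square law concrete (my illustration, not his): if a system has n components and any pair of them may interact, the number of potential interactions grows as n(n-1)/2, i.e. roughly with the square of the size. A minimal Python sketch:

    def potential_interactions(n):
        # Any pair of the n components may interact: n choose 2 pairs.
        return n * (n - 1) // 2

    for n in (10, 20, 40, 80):
        print(n, potential_interactions(n))
    # Output: 10 45 / 20 190 / 40 780 / 80 3160

Doubling the size roughly quadruples the number of interactions a fixed-capacity brain has to keep track of.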

In a nice twist of synchronicity, I was reading some old issues of JOOP this morning, and came across an interesting article from October 2000, by Nevin Pratt, entitled The Software Wall. In it he describes his two constants of software:

The First Constant of Software: Software is constantly becoming more complex.

The Second Constant of Software: A programmer’s ability to manage the complexity remains constant.

Pratt then explains that these two constants, so long as they remain true, predict the existence of a developmental barrier or wall, beyond which we cannot progress. We cannot progress beyond “the wall” because, as complexity increases, productivity decreases; productivity thus falls towards zero as complexity rises, until eventually it hits zero.
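A toy model (my numbers, not Pratt’s) makes the wall visible: hold the programmer’s capacity constant, let complexity grow with the square of the system’s size, and the effective rate of progress shrinks towards zero:

    CAPACITY = 100.0  # the programmer's fixed ability (arbitrary units)

    def productivity(size):
        # Effective output after paying a size-squared complexity tax.
        return CAPACITY / size ** 2

    for size in (1, 2, 5, 10, 100):
        print(size, productivity(size))
    # Output: 1 100.0 / 2 25.0 / 5 4.0 / 10 1.0 / 100 0.01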

We can’t just add more people (cf. The Mythical Man-Month), so we need to keep finding ways to move the wall. Pratt shows how we’ve managed that through the successive moves from machine language to assembly language, then to 3GLs, then to SQL, then to OO (and stakes his claim that dynamically typed OO languages are the next in this series, hence the URL of the reprint of the article!).

However, Pratt claims that these advances attack the first constant, the complexity of the software, which I think gets it backwards. Each of these is really just another layer of abstraction: the software doesn’t become any less complex; the detail just gets hidden, so the programmer doesn’t need to encounter as much of the complexity, minimising the impact of the second constant instead.
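A trivial example of the point, in Python (mine, not Pratt’s): the high-level version does no less work, it just keeps the bookkeeping out of the programmer’s head:

    data = [3, 1, 4, 1, 5, 9, 2, 6]

    # Low level: the programmer manages indices, comparisons and swaps.
    def bubble_sort(items):
        items = list(items)
        for i in range(len(items)):
            for j in range(len(items) - 1 - i):
                if items[j] > items[j + 1]:
                    items[j], items[j + 1] = items[j + 1], items[j]
        return items

    # High level: comparable work still happens inside sorted(), but
    # none of it occupies the programmer's limited brain capacity.
    assert bubble_sort(data) == sorted(data)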

Still, it’s an interesting partitioning of the problem, and I like the approach of asking, for any given suggestion: will this make the software less complex, or enable the programmer to handle more complexity?
