Formally Defining The Opacity Factor

If we are to keep a system piecewise comprehensible, we need a way of mechanically measuring the psychological complexity of each module as it is compiled, so that we can issue compile errors for modules which exceed the Holbrook Limit.

This means that we need an explicit, formal definition of psychological complexity. The human mind being a very poorly understood mechanism, any such measure must at this point be at best a rough approximation; but any approximation which gets us within a factor of two or so of the "true value", whatever that may be, should suffice for practical software engineering purposes.

Let us define an opacity factor which ranges linearly from 0.0 for completely trivial modules to 1.0 for modules at the Holbrook Limit.

An opacity factor in excess of 1.0 can then, ideally, be treated as a compilation error. At the least, the average opacity for a system should provide a useful measure of its human comprehensibility, and the peak opacity values should be useful indicators of modules which are candidates for restructuring to improve comprehensibility.

Let us take:

                      lines_of_code(body(module)) + lines_of_code(imported_interfaces(module))
    opacity(module) = ------------------------------------------------------------------------ 
                                                10,000 lines of code

This makes the opacity a nice dimensionless number.
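
As a minimal sketch of how this might be computed (in Python, with the two line counts simply supplied by the caller rather than extracted by any particular parser):

    HOLBROOK_LIMIT_LOC = 10_000   # the denominator in the definition above

    def opacity(body_loc, imported_interface_loc):
        """Opacity factor: 0.0 for a trivial module, 1.0 at the Holbrook Limit."""
        return (body_loc + imported_interface_loc) / HOLBROOK_LIMIT_LOC

A build step could then, for example, flag any module for which this value exceeds 1.0.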

We may then take the effective lines of code of a complete software system to be simply the sum of the module sizes weighted by opacity:

    effective_lines_of_code(set M of modules m) =
        sum over m: 
            lines_of_code(body(m)) * opacity(m)
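
Continuing the sketch above, and assuming (purely for illustration) that each module is represented by a record carrying its two line counts:

    from collections import namedtuple

    # Hypothetical record type; any representation carrying the two counts would do.
    Module = namedtuple("Module", ["body_loc", "imported_interface_loc"])

    def effective_lines_of_code(modules):
        """Sum of module body sizes, each weighted by that module's opacity."""
        return sum(m.body_loc * opacity(m.body_loc, m.imported_interface_loc)
                   for m in modules)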

The contribution of each module to this sum is b * (b + i) / 10,000, which is proportional to

    b*b + b*i

where b is the number of lines of code in the module body and i is the number of lines of code in its imported interfaces.

The first term, scaling as the square of module size, penalizes overlarge modules.

The second term, scaling with both module and imported interface size, penalizes modules which import excessive amounts of external context.
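
To make both penalties concrete with purely hypothetical numbers: a 2,000-line module importing 3,000 lines of interfaces has opacity (2,000 + 3,000) / 10,000 = 0.5, and so contributes 2,000 * 0.5 = 1,000 effective lines; splitting it into two 1,000-line modules with the same imports gives each an opacity of 0.4, for a combined contribution of only 2 * 1,000 * 0.4 = 800 effective lines.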

Both are logically required if our measure is not to reward pathological codebases, so we may reasonably presume that no simpler approximation will satisfy our design goals. (Practical experience may of course show that a more complex approximation is needed or appropriate.)

Discussion

One minor problem: As usual, one doesn't want to penalize comments or blank lines, so the above lines_of_code() function should be taken as ignoring them.
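
One plausible reading of that convention, sketched here for a language with //-style line comments (block comments and other comment syntaxes would of course need their own handling):

    def lines_of_code(source_text):
        """Count lines that are neither blank nor pure // line comments."""
        return sum(1 for line in source_text.splitlines()
                   if line.strip() and not line.strip().startswith("//"))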

The biggest problem with this definition is, I believe, that the number of lines of code in an imported interface is a poor measure of its true psychological complexity.

What is likely to matter most in practice is the complexity of the invariants maintained by the module, of the preconditions required by the module, and of the postconditions guaranteed by the module.

The day may well come when we routinely specify such information in our code. If so, we can then improve the above approximation by taking advantage of such information.

For now, the given approximation has the considerable practical advantage of actually working on existing codebases written using existing programming practice -- a good approximation which works beats a great approximation which doesn't!





Cynbe ru Taren
Last modified: Wed Feb 16 13:12:45 CST 2005